CN110490042A - Face identification device and access control equipment - Google Patents
- Publication number
- CN110490042A (application no. CN201910472703.7A)
- Authority
- CN
- China
- Prior art keywords
- face
- image
- exposure
- information
- infrared light
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
- G07C9/00174—Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys
- G07C9/00563—Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys using personal physical data of the operator, e.g. finger prints, retinal images, voicepatterns
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
Abstract
This application discloses a face identification device and access control equipment, belonging to the field of computer vision. The face identification device includes an image acquisition unit, an image processor, and a face analysis unit. The first optical filter in the filter assembly of the image acquisition unit passes visible light and part of the near-infrared light. The image acquisition unit acquires a first image signal and a second image signal: the first image signal is generated according to a first preset exposure, and the second image signal is generated according to a second preset exposure, where near-infrared fill light is applied during at least part of the exposure period of the first preset exposure and no near-infrared fill light is applied during the exposure period of the second preset exposure. The image processor processes at least one of the first image signal and the second image signal to obtain first image information. The face analysis unit performs face analysis on the first image information to obtain a face analysis result. The face identification device of this application achieves a higher face recognition accuracy.
Description
Technical field
This application relates to the field of computer vision, and in particular to a face identification device and access control equipment.
Background art
At present, various capture devices are widely used in fields such as intelligent transportation and security. To improve the quality of captured images, the related art provides a capture device that includes a multi-spectral filter sensor array. Some pixels in the multi-spectral filter sensor array sense only near-infrared light, while the remaining pixels sense both near-infrared light and visible light. In this way, the capture device can acquire an original image signal containing both visible-light information and near-infrared information, and separate from it an RGB image containing both visible-light information and near-infrared information and a near-infrared image containing only near-infrared information. The near-infrared information contained in each pixel of the RGB image is then removed to obtain a visible-light image containing only visible-light information.
However, such a capture device with a multi-spectral filter sensor array needs to separate the near-infrared information and the visible-light information in the acquired original image signal at a later stage, which is a complicated process, and the image quality of the resulting near-infrared image and visible-light image is relatively low. Face recognition performed on images obtained by this capture device therefore suffers from low accuracy.
Summary of the invention
The embodiments of the present application provide a face identification device and access control equipment, which can solve the problem of low face recognition accuracy in the related art. The technical solution is as follows:
In one aspect, a face identification device is provided. The face identification device includes an image acquisition unit, an image processor, and a face analysis unit.
The image acquisition unit includes a filter assembly, the filter assembly includes a first optical filter, and the first optical filter passes visible light and part of the near-infrared light.
The image acquisition unit is configured to acquire a first image signal and a second image signal. The first image signal is an image signal generated according to a first preset exposure, and the second image signal is an image signal generated according to a second preset exposure, where near-infrared fill light is applied during at least part of the exposure period of the first preset exposure and no near-infrared fill light is applied during the exposure period of the second preset exposure.
The image processor is configured to process at least one of the first image signal and the second image signal to obtain first image information.
The face analysis unit is configured to perform face analysis on the first image information to obtain a face analysis result.
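As a rough illustration of the acquisition scheme above, consider the following sketch (hypothetical; the function name and the strict alternation of exposures are our assumptions — the patent only requires that near-infrared fill light accompany the first preset exposure and never the second):

```python
def exposure_schedule(n_frames: int) -> list[tuple[str, bool]]:
    """Alternate the two preset exposures; the near-infrared fill
    light is driven only during the first preset exposure, never
    during the second, as the device above requires."""
    schedule = []
    for i in range(n_frames):
        if i % 2 == 0:
            schedule.append(("first_preset", True))    # NIR fill on
        else:
            schedule.append(("second_preset", False))  # NIR fill off
    return schedule

# Every frame's fill-light state follows its exposure type
for exposure, fill_on in exposure_schedule(4):
    assert fill_on == (exposure == "first_preset")
```

Frames captured under the first preset exposure carry near-infrared information, while frames from the second preset exposure carry visible-light information, so the pair can feed the image processor without later spectral separation.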
In one possible implementation of the present application, the image acquisition unit includes an image sensor and a fill-light unit, and the image sensor is located on the light-exit side of the filter assembly.
The image sensor is configured to generate and output the first image signal and the second image signal through multiple exposures, where the first preset exposure and the second preset exposure are two of the multiple exposures.
The fill-light unit includes a first fill-light device, and the first fill-light device is configured to provide near-infrared fill light.
In one possible implementation of the present application, when the center wavelength of the near-infrared fill light provided by the first fill-light device is a set characteristic wavelength or falls within a set characteristic wavelength range, the center wavelength and/or band width of the near-infrared light passing through the first optical filter satisfies a constraint condition.
In one possible implementation of the present application, the center wavelength of the near-infrared fill light provided by the first fill-light device is any wavelength in the range of 750 ± 10 nanometers; or
the center wavelength of the near-infrared fill light provided by the first fill-light device is any wavelength in the range of 780 ± 10 nanometers; or
the center wavelength of the near-infrared fill light provided by the first fill-light device is any wavelength in the range of 940 ± 10 nanometers.
In one possible implementation of the present application, the constraint condition includes:
the difference between the center wavelength of the near-infrared light passing through the first optical filter and the center wavelength of the near-infrared fill light provided by the first fill-light device lies within a wavelength fluctuation range of 0 to 20 nanometers; or
the half bandwidth of the near-infrared light passing through the first optical filter is less than or equal to 50 nanometers; or
a first band width is less than a second band width, where the first band width refers to the band width of the near-infrared light passing through the first optical filter, and the second band width refers to the band width of the near-infrared light blocked by the first optical filter; or
a third band width is less than a reference band width, where the third band width refers to the band width of the near-infrared light whose transmittance is greater than a set ratio, and the reference band width is any band width in the wavelength band of 50 nanometers to 150 nanometers.
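These alternatives read as a disjunction of checks on the filter's near-infrared passband against the fill light. A hypothetical checker (parameter names and the example values are ours, not the patent's; the reference width is fixed at 100 nm, one admissible choice in the 50–150 nm range):

```python
def constraint_satisfied(filter_center_nm, fill_center_nm,
                         filter_fwhm_nm, passed_width_nm,
                         blocked_width_nm, high_trans_width_nm,
                         reference_width_nm=100):
    """Return True if any one of the four alternative constraints holds."""
    # (1) centre-wavelength difference within the 0-20 nm fluctuation range
    if abs(filter_center_nm - fill_center_nm) <= 20:
        return True
    # (2) half bandwidth (FWHM) of the passed near-infrared light <= 50 nm
    if filter_fwhm_nm <= 50:
        return True
    # (3) passed NIR band narrower than the blocked NIR band
    if passed_width_nm < blocked_width_nm:
        return True
    # (4) band with transmittance above the set ratio narrower than a
    #     reference width chosen anywhere in 50-150 nm (here: 100 nm)
    if high_trans_width_nm < reference_width_nm:
        return True
    return False

# A 940 nm fill light with a 945 nm filter passband satisfies check (1)
assert constraint_satisfied(945, 940, 60, 300, 200, 250)
```

Only one branch needs to hold; the point of every branch is the same — the filter should admit little near-infrared light beyond the band the fill light actually emits.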
In one possible implementation of the present application, the image sensor includes multiple photosensitive channels, each of which is configured to sense light in at least one visible wavelength band and to sense light in the near-infrared band.
In one possible implementation of the present application, the image sensor performs the multiple exposures in a global exposure mode. For any near-infrared fill light, there is no intersection between the period of the near-infrared fill light and the exposure period of the nearest second preset exposure; and the period of the near-infrared fill light is a subset of the exposure period of the first preset exposure, or the period of the near-infrared fill light intersects the exposure period of the first preset exposure, or the exposure period of the first preset exposure is a subset of the period of the near-infrared fill light.
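Under global exposure, the rule amounts to interval arithmetic: the fill pulse must never intersect the nearest second preset exposure, while it must overlap (be inside, intersect, or contain) the first. A hypothetical helper (names ours):

```python
def intervals_intersect(a, b):
    """True if half-open time intervals a=(start, end) and b overlap."""
    return not (a[1] <= b[0] or b[1] <= a[0])

def global_fill_ok(fill, first_exp, second_exp):
    """fill, first_exp, second_exp are (start, end) time intervals."""
    if intervals_intersect(fill, second_exp):
        return False  # fill light must never reach the second exposure
    # subset, partial overlap, and containment all imply intersection
    return intervals_intersect(fill, first_exp)

# Fill pulse inside the first exposure, clear of the second: allowed
assert global_fill_ok((2, 4), (1, 5), (6, 9))
# Fill pulse spilling into the second exposure: forbidden
assert not global_fill_ok((5, 7), (1, 5), (6, 9))
```

The three allowed relationships (subset, intersection, superset) all reduce to "the fill pulse intersects the first preset exposure", which is why a single intersection test suffices here.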
In one possible implementation of the present application, the image sensor performs the multiple exposures in a rolling shutter exposure mode, and for any near-infrared fill light, there is no intersection between the period of the near-infrared fill light and the exposure period of the nearest second preset exposure.
The start time of the near-infrared fill light is no earlier than the exposure start time of the last row of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no later than the exposure end time of the first row of the effective image in the first preset exposure.
Alternatively, the start time of the near-infrared fill light is no earlier than the exposure end time of the last row of the effective image in the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first row of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure start time of the last row of the effective image in the first preset exposure and no later than the exposure start time of the first row of the effective image in the nearest second preset exposure after the first preset exposure.
Alternatively, the start time of the near-infrared fill light is no earlier than the exposure end time of the last row of the effective image in the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first row of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure end time of the last row of the effective image in the first preset exposure and no later than the exposure start time of the first row of the effective image in the nearest second preset exposure after the first preset exposure.
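The first rolling-shutter option confines the fill pulse to the window in which every row of the first preset exposure is integrating simultaneously, so all rows receive the same fill. A hypothetical check of that option only (function and parameter names are ours):

```python
def rolling_fill_ok(fill_start, fill_end, first_row, last_row):
    """First rolling-shutter option: the fill pulse must lie inside the
    window during which every row of the first preset exposure is
    simultaneously integrating. first_row and last_row are the
    (exposure_start, exposure_end) times of the first and last
    effective rows of the first preset exposure."""
    common_start = last_row[0]  # last row begins integrating
    common_end = first_row[1]   # first row stops integrating
    return fill_start >= common_start and fill_end <= common_end

# Rows stagger their exposure; fill only while all rows integrate
assert rolling_fill_ok(5, 7, first_row=(0, 8), last_row=(4, 12))
# A pulse starting before the last row opens would light rows unevenly
assert not rolling_fill_ok(3, 9, first_row=(0, 8), last_row=(4, 12))
```

The two alternative options relax this window toward the neighboring second preset exposures, trading uniform fill per row for a longer permissible pulse.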
In one possible implementation of the present application, the first preset exposure and the second preset exposure differ in at least one exposure parameter, the at least one exposure parameter being one or more of exposure time, exposure gain, and aperture size, where the exposure gain includes analog gain and/or digital gain.
In one possible implementation of the present application, the first preset exposure and the second preset exposure are identical in at least one exposure parameter, the at least one exposure parameter including one or more of exposure time, exposure gain, and aperture size, where the exposure gain includes analog gain and/or digital gain.
In one possible implementation of the present application, the image processor is configured to process at least one of the first image signal and the second image signal using a first processing parameter to obtain the first image information.
The image processor is further configured to process at least one of the first image signal and the second image signal using a second processing parameter to obtain second image information.
The image processor is further configured to transfer the second image information to a display device, which displays the second image information.
In one possible implementation of the present application, when the first image information and the second image information are both obtained by processing the first image signal, or both obtained by processing the second image signal, or both obtained by processing the first image signal and the second image signal, the first processing parameter and the second processing parameter are different.
In one possible implementation of the present application, the processing performed by the image processor on at least one of the first image signal and the second image signal includes at least one of black level correction, image interpolation, digital gain, white balance, image noise reduction, image enhancement, and image fusion.
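These operations form a conventional image signal processing chain. A toy sketch of two of the listed steps on a flat list of pixel values (illustrative only; the patent fixes neither the order nor the parameter values, which are our assumptions):

```python
def isp_sketch(raw, black_level=16, digital_gain=1.5):
    """Apply black level correction and digital gain to raw pixels."""
    out = []
    for v in raw:
        v = max(v - black_level, 0)   # black level correction
        v = v * digital_gain          # digital gain
        out.append(min(int(v), 255))  # clip back to the 8-bit range
    # (image interpolation, white balance, noise reduction,
    #  enhancement, and image fusion would follow in a full pipeline)
    return out

assert isp_sketch([100, 10]) == [126, 0]  # (100-16)*1.5 = 126; 10 clips to 0
```

In practice each step would operate on a full 2-D sensor frame, but the per-pixel arithmetic is the same.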
In one possible implementation of the present application, the image processor includes a cache.
The cache is configured to store at least one of the first image signal and the second image signal, or to store at least one of the first image information and the second image information.
In one possible implementation of the present application, the image processor is further configured to adjust the exposure parameters of the image acquisition unit in the course of processing at least one of the first image signal and the second image signal.
In one possible implementation of the present application, the face analysis unit includes a face detection subunit, a face recognition subunit, and a face database.
At least one piece of reference face information is stored in the face database.
The face detection subunit is configured to perform face detection on the first image information, output the detected face image, and perform liveness identification on the face image.
The face recognition subunit is configured to, when the face image passes the liveness identification, extract the face information of the face image, compare the face information of the face image with the at least one piece of reference face information stored in the face database, and obtain the face analysis result.
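The detect → liveness → compare flow can be sketched as follows (a hypothetical illustration: the function names, the cosine-similarity comparison, and the threshold are our assumptions, not the patent's method):

```python
import math

def similarity(a, b):
    """Cosine similarity of two face feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def analyze_face(image_info, face_db, detect, is_live, embed,
                 threshold=0.8):
    """Detect a face, reject spoofs, then compare against the database."""
    face = detect(image_info)
    if face is None:
        return "no_face"
    if not is_live(face):          # liveness identification gate
        return "spoof_rejected"
    feat = embed(face)             # extract face information
    if any(similarity(feat, ref) >= threshold for ref in face_db):
        return "match"
    return "no_match"

# Toy wiring: identity detector/embedder, one reference vector in the DB
db = [[1.0, 0.0]]
result = analyze_face([1.0, 0.0], db, detect=lambda x: x,
                      is_live=lambda f: True, embed=lambda f: f)
assert result == "match"
```

The key ordering the implementation describes is that liveness identification happens before any database comparison, so spoofed faces never reach the recognition stage.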
In one possible implementation of the present application, the face analysis unit includes a face detection subunit, a face recognition subunit, and a face database.
At least one piece of reference face information is stored in the face database.
The face detection subunit is configured to perform face detection on the first image information, output the detected first face image, and perform liveness identification on the first face image; and to perform face detection on the second image information, output the detected second face image, and perform liveness identification on the second face image.
The face recognition subunit is configured to, when both the first face image and the second face image pass the liveness identification, extract the face information of the first face image, compare the face information of the first face image with the at least one piece of reference face information stored in the face database, and obtain the face analysis result.
In one possible implementation of the present application, the first image information is grayscale image information obtained by processing the first image signal, the second image information is color image information obtained by processing the second image signal, and the face analysis unit includes a face detection subunit, a face recognition subunit, and a face database.
At least one piece of reference face information is stored in the face database.
The face detection subunit is configured to perform face detection on the color image information, output the detected color face image, perform liveness identification on the color face image, and, when the color face image passes the liveness identification, perform face detection on the grayscale image information and output the detected grayscale face image.
The face recognition subunit is configured to extract the face information of the grayscale face image, compare the face information of the grayscale face image with the at least one piece of reference face information stored in the face database, and obtain the face analysis result.
In one possible implementation of the present application, the first image information is grayscale image information obtained by processing the first image signal, the second image information is fused image information obtained by performing image fusion on the first image signal and the second image signal, and the face analysis unit includes a face detection subunit, a face recognition subunit, and a face database.
At least one piece of reference face information is stored in the face database.
The face detection subunit is configured to perform face detection on the fused image information, output the detected fused face image, perform liveness identification on the fused face image, and, when the fused face image passes the liveness identification, perform face detection on the grayscale image information and output the detected grayscale face image.
The face recognition subunit is configured to extract the face information of the grayscale face image, compare the face information of the grayscale face image with the at least one piece of reference face information stored in the face database, and obtain the face analysis result.
In one possible implementation of the present application, the first image information is fused image information obtained by performing image fusion on the first image signal and the second image signal, the second image information is grayscale image information obtained by processing the first image signal, and the face analysis unit includes a face detection subunit, a face recognition subunit, and a face database.
At least one piece of reference face information is stored in the face database.
The face detection subunit is configured to perform face detection on the grayscale image information, output the detected grayscale face image, perform liveness identification on the grayscale face image, and, when the grayscale face image passes the liveness identification, perform face detection on the fused image information and output the detected fused face image.
The face recognition subunit is configured to extract the face information of the fused face image, compare the face information of the fused face image with the at least one piece of reference face information stored in the face database, and obtain the face analysis result.
In one possible implementation of the present application, the first image information is fused image information obtained by performing image fusion on the first image signal and the second image signal, the second image information is color image information obtained by processing the second image signal, and the face analysis unit includes a face detection subunit, a face recognition subunit, and a face database.
At least one piece of reference face information is stored in the face database.
The face detection subunit is configured to perform face detection on the color image information, output the detected color face image, perform liveness identification on the color face image, and, when the color face image passes the liveness identification, perform face detection on the fused image information and output the detected fused face image.
The face recognition subunit is configured to extract the face information of the fused face image, compare the face information of the fused face image with the at least one piece of reference face information stored in the face database, and obtain the face analysis result.
In one possible implementation of the present application, the first image information is first fused image information obtained by performing image fusion on the first image signal and the second image signal, the second image information is second fused image information obtained by performing image fusion on the first image signal and the second image signal, and the face analysis unit includes a face detection subunit, a face recognition subunit, and a face database.
At least one piece of reference face information is stored in the face database.
The face detection subunit is configured to perform face detection on the second fused image information, output the detected second fused face image, perform liveness identification on the second fused face image, and, when the second fused face image passes the liveness identification, perform face detection on the first fused image information and output the detected first fused face image.
The face recognition subunit is configured to extract the face information of the first fused face image, compare the face information of the first fused face image with the at least one piece of reference face information stored in the face database, and obtain the face analysis result.
In one possible implementation of the present application, the face analysis unit is further configured to transfer the face analysis result to a display device, which displays the face analysis result.
In another aspect, access control equipment is provided. The access control equipment includes an access controller and the face identification device described above.
The face identification device is configured to transfer the face analysis result to the access controller.
The access controller is configured to output a control signal for opening the access barrier when the face analysis result indicates a successful identification.
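The access-controller logic reduces to gating a door-open signal on the analysis result. A minimal sketch, assuming a string result code of our own invention (the patent does not specify the result format):

```python
def access_controller(face_analysis_result: str) -> dict:
    """Drive the door-open control signal from the face analysis result."""
    if face_analysis_result == "identification_success":
        return {"open_door": True}   # output the control signal
    return {"open_door": False}      # keep the barrier closed

assert access_controller("identification_success")["open_door"]
assert not access_controller("identification_failed")["open_door"]
```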
In another aspect, a face identification method is provided, applied to a face identification device. The face identification device includes an image acquisition unit, an image processor, and a face analysis unit; the image acquisition unit includes a filter assembly, and the filter assembly includes a first optical filter. The method includes:
passing visible light and part of the near-infrared light through the first optical filter;
acquiring a first image signal and a second image signal through the image acquisition unit, the first image signal being an image signal generated according to a first preset exposure and the second image signal being an image signal generated according to a second preset exposure, where near-infrared fill light is applied during at least part of the exposure period of the first preset exposure and no near-infrared fill light is applied during the exposure period of the second preset exposure;
processing at least one of the first image signal and the second image signal through the image processor to obtain first image information; and
performing face analysis on the first image information through the face analysis unit to obtain a face analysis result.
The technical solutions provided by the embodiments of the present application can bring at least the following beneficial effects:
In the embodiments of the present application, the face identification device includes an image acquisition unit, an image processor, and a face analysis unit. The image acquisition unit includes a filter assembly, the filter assembly includes a first optical filter, and the first optical filter passes visible light and part of the near-infrared light. Through the first preset exposure and the second preset exposure, the image acquisition unit can simultaneously collect a first image signal containing near-infrared information (for example, near-infrared luminance information) and a second image signal containing visible-light information. Compared with the image processing approach that must separate the near-infrared information and the visible-light information from the acquired original image signal at a later stage, the image acquisition unit in the present application collects the first image signal and the second image signal directly, making the collection process simple and effective. As a result, the first image information obtained after the image processor processes at least one of the first image signal and the second image signal is of higher quality, and the face analysis performed by the face analysis unit on the first image information yields a more accurate face analysis result, thereby effectively improving the face recognition accuracy.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a first face identification device provided by an embodiment of the present application.
Fig. 2 is a schematic structural diagram of a first image acquisition unit provided by an embodiment of the present application.
Fig. 3 is a schematic diagram of the principle by which an image acquisition unit provided by an embodiment of the present application generates the first image signal.
Fig. 4 is a schematic diagram of the principle by which an image acquisition unit provided by an embodiment of the present application generates the second image signal.
Fig. 5 is a schematic diagram of the relationship between the wavelength and the relative intensity of the near-infrared fill light provided by a first fill-light device according to an embodiment of the present application.
Fig. 6 is a schematic diagram of the relationship between the wavelength and the transmittance of the light passing through the first optical filter according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of a second image acquisition unit provided by an embodiment of the present application.
Fig. 8 is a schematic diagram of an RGB sensor provided by an embodiment of the present application.
Fig. 9 is a schematic diagram of an RGBW sensor provided by an embodiment of the present application.
Figure 10 is a schematic diagram of an RCCB sensor provided by an embodiment of the present application.
Figure 11 is a schematic diagram of an RYYB sensor provided by an embodiment of the present application.
Figure 12 is a schematic diagram of a sensing curve of an image sensor provided by an embodiment of the present application.
Figure 13 is a schematic diagram of a rolling shutter exposure mode provided by an embodiment of the present application.
Figure 14 is a schematic diagram of the timing relationship between the first preset exposure and the second preset exposure for a first kind of near-infrared fill light in the global exposure mode provided by an embodiment of the present application.
Figure 15 is a schematic diagram of the timing relationship between the first preset exposure and the second preset exposure for a second kind of near-infrared fill light in the global exposure mode provided by an embodiment of the present application.
Figure 16 is a schematic diagram of the timing relationship between the first preset exposure and the second preset exposure for a third kind of near-infrared fill light in the global exposure mode provided by an embodiment of the present application.
Figure 17 is a schematic diagram of the timing relationship between the first preset exposure and the second preset exposure for a first kind of near-infrared fill light in the rolling shutter exposure mode provided by an embodiment of the present application.
Figure 18 is a schematic diagram of the timing relationship between the first preset exposure and the second preset exposure for a second kind of near-infrared fill light in the rolling shutter exposure mode provided by an embodiment of the present application.
Figure 19 is a schematic diagram of the timing relationship between the first preset exposure and the second preset exposure for a third kind of near-infrared fill light in the rolling shutter exposure mode provided by an embodiment of the present application.
Figure 20 is a schematic structural diagram of a third image acquisition unit provided by an embodiment of the present application.
Figure 21 is a schematic structural diagram of a second face identification device provided by an embodiment of the present application.
Figure 22 is a schematic structural diagram of a third face identification device shown in an embodiment of the present application.
Figure 23 is a schematic structural diagram of a fourth face identification device shown in an embodiment of the present application.
Figure 24 is a schematic structural diagram of access control equipment shown in an embodiment of the present application.
Figure 25 is a flowchart of a face identification method shown in an embodiment of the present application.
Reference numerals:
1: image acquisition unit; 2: image processor; 3: face analysis unit; 01: image sensor; 02: fill-light unit; 03: filter assembly; 04: lens; 021: first fill-light device; 022: second fill-light device; 031: first optical filter; 032: second optical filter; 033: switching component; 311: face detection subunit; 312: face recognition subunit; 313: face database; 001: access controller; 002: face identification device.
Specific embodiments
To make the objectives, technical solutions, and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of a face identification device provided by an embodiment of the present application. As shown in Fig. 1, the face identification device includes an image acquisition unit 1, an image processor 2, and a face analysis unit 3.
The image acquisition unit 1 is configured to acquire a first image signal and a second image signal. The image processor 2 is configured to process at least one of the first image signal and the second image signal to obtain first image information. The face analysis unit 3 is configured to perform face analysis on the first image information to obtain a face analysis result.
It should be noted that the first image signal is an image signal generated by a first preset exposure, and the second image signal is an image signal generated by a second preset exposure. Near-infrared fill light is provided during at least part of the exposure period of the first preset exposure, and no near-infrared fill light is provided during the exposure period of the second preset exposure.
In the embodiments of the present application, through the first preset exposure and the second preset exposure, the image acquisition unit 1 can simultaneously collect a first image signal containing near-infrared light information (for example, near-infrared luminance information) and a second image signal containing visible light information. Compared with image processing schemes that must later separate the near-infrared light information and the visible light information from a single original image signal, the image acquisition unit 1 of the present application collects the first image signal and the second image signal directly, so the collection process is simple and efficient. As a result, the first image information obtained after the image processor 2 processes at least one of the first image signal and the second image signal is of higher quality, and the face analysis unit 3 can obtain a more accurate face analysis result from the first image information, thereby effectively improving the face recognition accuracy.
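The three-stage pipeline described above (acquisition, processing, face analysis) can be sketched as follows. This is a minimal illustration only; the class and function names, the averaging fusion, and the toy face-analysis rule are assumptions for demonstration, not the method claimed in the application.

```python
from dataclasses import dataclass

@dataclass
class ImageSignal:
    pixels: list    # raw sensor samples (toy data)
    nir_fill: bool  # True if near-infrared fill light was on

def acquire(fill_on: bool) -> ImageSignal:
    # First preset exposure: NIR fill light on -> NIR-rich signal.
    # Second preset exposure: fill light off -> visible-light signal.
    return ImageSignal(pixels=[10, 20, 30] if fill_on else [5, 6, 7],
                       nir_fill=fill_on)

def process(first: ImageSignal, second: ImageSignal) -> list:
    # Placeholder fusion: average the two signals to form the
    # "first image information" handed to face analysis.
    return [(a + b) / 2 for a, b in zip(first.pixels, second.pixels)]

def analyze(image_info: list) -> str:
    # Stand-in for the face analysis unit 3.
    return "face detected" if max(image_info) > 0 else "no face"

first = acquire(fill_on=True)    # first preset exposure
second = acquire(fill_on=False)  # second preset exposure
print(analyze(process(first, second)))  # face detected
```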
The image acquisition unit 1, the image processor 2, and the face analysis unit 3 included in the face identification device are described separately below.
1. Image acquisition unit 1
As shown in Fig. 2, the image acquisition unit includes an image sensor 01, a light supplement unit 02, and a filter assembly 03, where the image sensor 01 is located on the light exit side of the filter assembly 03. The image sensor 01 is configured to generate and output the first image signal and the second image signal through multiple exposures, of which the first preset exposure and the second preset exposure are two. The light supplement unit 02 includes a first light supplement device 021 configured to provide near-infrared fill light. The filter assembly 03 includes a first filter 031, which passes visible light and part of the near-infrared light. The intensity of the near-infrared light passing through the first filter 031 while the first light supplement device 021 provides near-infrared fill light is higher than the intensity of the near-infrared light passing through the first filter 031 while it does not.
In the embodiments of the present application, referring to Fig. 2, the image acquisition unit 1 may further include a lens 04. In that case, the filter assembly 03 may be located between the lens 04 and the image sensor 01, with the image sensor 01 on the light exit side of the filter assembly 03; alternatively, the lens 04 may be located between the filter assembly 03 and the image sensor 01, with the image sensor 01 on the light exit side of the lens 04. As an example, the first filter 031 may be a filter film: when the filter assembly 03 is between the lens 04 and the image sensor 01, the first filter 031 may be attached to the surface on the light exit side of the lens 04; when the lens 04 is between the filter assembly 03 and the image sensor 01, the first filter 031 may be attached to the surface on the light entry side of the lens 04.
It should be noted that the light supplement unit 02 may be located inside the image acquisition unit 1 or outside it; that is, it may be a part of the image acquisition unit 1 or a device independent of it. When the light supplement unit 02 is outside the image acquisition unit 1, it can be communicatively connected with the image acquisition unit 1, which ensures a defined relationship between the exposure timing of the image sensor 01 in the image acquisition unit 1 and the near-infrared fill light timing of the first light supplement device 021 included in the light supplement unit 02: for example, near-infrared fill light is provided during at least part of the exposure period of the first preset exposure, and no near-infrared fill light is provided during the exposure period of the second preset exposure.
In addition, the first light supplement device 021 is a device capable of emitting near-infrared light, such as a near-infrared fill lamp. The first light supplement device 021 may provide near-infrared fill light in a stroboscopic manner, or in another similarly intermittent manner; the embodiments of the present application do not limit this. In some examples, when the first light supplement device 021 provides near-infrared fill light stroboscopically, the stroboscopic operation may be controlled manually, or controlled by a software program or a dedicated device; the embodiments of the present application do not limit this either. The period during which the first light supplement device 021 provides near-infrared fill light may coincide with the exposure period of the first preset exposure, or may be longer or shorter than it, as long as near-infrared fill light is provided during the entire exposure period or part of the exposure period of the first preset exposure, and no near-infrared fill light is provided during the exposure period of the second preset exposure.
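The timing requirement just stated can be expressed as a small predicate over time intervals. The half-open interval representation and function names below are illustrative assumptions, not part of the application.

```python
def overlaps(a, b):
    """True if half-open intervals a = (start, end) and b overlap."""
    return a[0] < b[1] and b[0] < a[1]

def fill_timing_valid(fill, first_exposure, second_exposure):
    # Fill light must cover at least part of the first preset exposure
    # and must not touch the second preset exposure's period.
    return overlaps(fill, first_exposure) and not overlaps(fill, second_exposure)

# Fill interval covers part of the first exposure only: valid.
print(fill_timing_valid((0, 12), (5, 15), (20, 30)))  # True
# Fill interval leaks into the second exposure window: invalid.
print(fill_timing_valid((0, 25), (5, 15), (20, 30)))  # False
```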
It should be noted that no near-infrared fill light is provided during the exposure period of the second preset exposure. For a global shutter exposure mode, the exposure period of the second preset exposure may be the period between the start and the end of exposure. For a rolling shutter exposure mode, the exposure period of the second preset exposure may be the period between the start of exposure of the first row of effective image and the end of exposure of the last row of effective image of the second image signal, but is not limited to this. For example, the exposure period of the second preset exposure may also be the exposure period corresponding to a target image in the second image signal, where the target image is the several rows of effective image corresponding to a target object or target area in the second image signal; the period between the start of exposure and the end of exposure of these rows is then regarded as the exposure period of the second preset exposure.
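For the rolling shutter case, the exposure period can be derived from per-row timing. The simple linear row-readout model below (each row starts a fixed offset after the previous one) is an illustrative assumption used only to make the definition concrete.

```python
def rolling_shutter_period(first_row, last_row, row_offset_us, exposure_us):
    """Exposure period (start_us, end_us) covering rows first_row..last_row.

    Each row starts exposing row_offset_us after the previous row and
    exposes for exposure_us (simple linear rolling-shutter model).
    """
    start = first_row * row_offset_us             # first effective row begins
    end = last_row * row_offset_us + exposure_us  # last effective row ends
    return start, end

# Whole-frame period: rows 0..1079 of a 1080-row frame.
print(rolling_shutter_period(0, 1079, 10, 4000))   # (0, 14790)
# Target-area period: only rows 400..600 containing the target object.
print(rolling_shutter_period(400, 600, 10, 4000))  # (4000, 10000)
```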
On the other hand, when the first light supplement device 021 provides near-infrared fill light to the external scene, the near-infrared light incident on object surfaces may be reflected by the objects and thereby enter the first filter 031. Moreover, since ambient light usually contains both visible light and near-infrared light, the near-infrared component of the ambient light incident on object surfaces is likewise reflected by the objects into the first filter 031. Therefore, the near-infrared light passing through the first filter 031 while near-infrared fill light is provided includes both the near-infrared light emitted by the first light supplement device 021 and reflected by objects and the near-infrared component of the ambient light reflected by objects, whereas the near-infrared light passing through the first filter 031 while no near-infrared fill light is provided includes only the near-infrared component of the ambient light reflected by objects.
Taking the structure in which, within the image acquisition unit 1, the filter assembly 03 is located between the lens 04 and the image sensor 01 and the image sensor 01 is on the light exit side of the filter assembly 03, the process by which the image acquisition unit 1 acquires the first image signal and the second image signal is as follows. Referring to Fig. 3, when the image sensor 01 performs the first preset exposure, the first light supplement device 021 provides near-infrared fill light; the ambient light in the scene, together with the near-infrared light reflected by objects in the scene while the first light supplement device provides fill light, passes through the lens 04 and the first filter 031, and the image sensor 01 generates the first image signal through the first preset exposure. Referring to Fig. 4, when the image sensor 01 performs the second preset exposure, the first light supplement device 021 provides no near-infrared fill light; the ambient light in the scene passes through the lens 04 and the first filter 031, and the image sensor 01 generates the second image signal through the second preset exposure. Within one frame period of image acquisition there may be M first preset exposures and N second preset exposures, and the first preset exposures and second preset exposures may be ordered and interleaved in many ways within the frame period. The values of M and N and their relative magnitudes can be set according to actual needs; for example, M and N may be equal or unequal.
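One possible interleaving of M first and N second preset exposures within a frame period is sketched below. The even, round-robin-style policy is only one illustrative ordering; the application deliberately leaves the combination order open.

```python
def frame_schedule(m, n):
    """Interleave m 'first' (NIR fill) and n 'second' (no fill) preset
    exposures as evenly as possible within one frame period."""
    schedule, fi, si = [], 0, 0
    for _ in range(m + n):
        # Pick whichever kind is furthest behind its target proportion.
        if fi * n <= si * m and fi < m:
            schedule.append("first")
            fi += 1
        else:
            schedule.append("second")
            si += 1
    return schedule

print(frame_schedule(1, 1))  # ['first', 'second']
print(frame_schedule(2, 4))  # ['first', 'second', 'second', 'first', 'second', 'second']
```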
It should be noted that the first filter 031 may pass light of part of the near-infrared band; in other words, the near-infrared band passing through the first filter 031 may be part of the near-infrared band or the entire near-infrared band, and the embodiments of the present application do not limit this.
In addition, since the intensity of the near-infrared light in the ambient light is lower than the intensity of the near-infrared light emitted by the first light supplement device 021, the intensity of the near-infrared light passing through the first filter 031 while the first light supplement device 021 provides near-infrared fill light is higher than the intensity passing through while it does not.
The wavelength band in which the first light supplement device 021 provides near-infrared fill light may be a second reference wavelength band, which may be 700 nm to 800 nm or 900 nm to 1000 nm; this can reduce the interference caused by the common 850 nm near-infrared light. In addition, the wavelength band of the near-infrared light incident on the first filter 031 may be a first reference wavelength band, which is 650 nm to 1100 nm.
When near-infrared fill light is provided, the near-infrared light passing through the first filter 031 includes both the near-infrared light reflected by objects while the first light supplement device 021 provides fill light and the near-infrared component of the ambient light reflected by objects, so the intensity of the near-infrared light entering the filter assembly 03 is relatively strong. When no near-infrared fill light is provided, the near-infrared light passing through the first filter 031 includes only the near-infrared component of the ambient light reflected into the filter assembly 03; without the fill light of the first light supplement device 021, the intensity of the near-infrared light passing through the first filter 031 is relatively weak. Therefore, the intensity of the near-infrared light contained in the first image signal generated and output according to the first preset exposure is higher than the intensity of the near-infrared light contained in the second image signal generated and output according to the second preset exposure.
The center wavelength and/or wavelength band of the near-infrared fill light provided by the first light supplement device 021 can be chosen in many ways. In the embodiments of the present application, in order to make the first light supplement device 021 and the first filter 031 cooperate better, the center wavelength of the near-infrared fill light of the first light supplement device 021 can be designed, and the characteristics of the first filter 031 selected, so that when the center wavelength of the near-infrared fill light is a set characteristic wavelength, or falls within a set characteristic wavelength range, the center wavelength and/or band width of the near-infrared light passing through the first filter 031 satisfies a constraint condition. The constraint condition is mainly used to ensure that the center wavelength of the near-infrared light passing through the first filter 031 is as accurate as possible and that the band width of the near-infrared light passing through the first filter 031 is as narrow as possible, so as to avoid wavelength interference caused by an overly wide near-infrared band.
Here, the center wavelength of the near-infrared fill light of the first light supplement device 021 may be the average over the wavelength range of maximum energy in the spectrum of the near-infrared light emitted by the first light supplement device 021; it may also be understood as the wavelength at the midpoint of the wavelength range in that spectrum whose energy exceeds a certain threshold.
The set characteristic wavelength or set characteristic wavelength range can be preset. As an example, the center wavelength of the near-infrared fill light of the first light supplement device 021 may be any wavelength within 750 ± 10 nm, within 780 ± 10 nm, or within 940 ± 10 nm; that is, the set characteristic wavelength range may be 750 ± 10 nm, 780 ± 10 nm, or 940 ± 10 nm. Illustratively, the center wavelength of the near-infrared fill light of the first light supplement device 021 is 940 nm, and the relationship between the wavelength and the relative intensity of the near-infrared fill light is as shown in Fig. 5. As can be seen from Fig. 5, the wavelength band of the near-infrared fill light of the first light supplement device 021 is 900 nm to 1000 nm, and the relative intensity of the near-infrared light is highest at 940 nm.
Since, while near-infrared fill light is provided, most of the near-infrared light passing through the first filter 031 is the near-infrared light emitted by the first light supplement device 021 and reflected by objects, in some embodiments the above constraint condition may include: the difference between the center wavelength of the near-infrared light passing through the first filter 031 and the center wavelength of the near-infrared fill light of the first light supplement device 021 lies within a wavelength fluctuation range, which, as an example, may be 0 to 20 nm.
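This constraint reduces to a simple numeric check. The 20 nm default follows the example in the text, while the function name is an illustrative assumption.

```python
def center_wavelength_ok(filter_center_nm, fill_center_nm, tolerance_nm=20.0):
    """Constraint: |center wavelength passed by the filter - fill-light
    center wavelength| lies within the wavelength fluctuation range."""
    return abs(filter_center_nm - fill_center_nm) <= tolerance_nm

print(center_wavelength_ok(945.0, 940.0))  # True: 5 nm offset is within range
print(center_wavelength_ok(905.0, 940.0))  # False: 35 nm offset is too large
```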
Here, the center wavelength of the near-infrared light passing through the first filter 031 may be the wavelength at the peak position within the near-infrared band of the near-infrared transmittance curve of the first filter 031; it may also be understood as the wavelength at the midpoint of the portion of the near-infrared band of that transmittance curve whose transmittance exceeds a certain threshold.
To avoid wavelength interference caused by an overly wide band of near-infrared light passing through the first filter 031, in some embodiments the above constraint condition may include: the first band width is less than the second band width, where the first band width is the band width of the near-infrared light passed by the first filter 031, and the second band width is the band width of the near-infrared light blocked by the first filter 031. It should be understood that band width refers to the width of the wavelength range occupied by the light. For example, if the wavelength range of the near-infrared light passing through the first filter 031 is 700 nm to 800 nm, the first band width is 800 nm minus 700 nm, i.e., 100 nm. In other words, the band width of the near-infrared light passed by the first filter 031 is less than the band width of the near-infrared light blocked by the first filter 031.
For example, refer to Fig. 6, which is a schematic diagram of the relationship between the wavelength of the light that the first filter 031 can pass and the transmittance. The band of near-infrared light incident on the first filter 031 is 650 nm to 1100 nm; the first filter 031 passes visible light of 380 nm to 650 nm, passes near-infrared light of 900 nm to 1000 nm, and blocks near-infrared light of 650 nm to 900 nm and of 1000 nm to 1100 nm. That is, the first band width is 1000 nm minus 900 nm, i.e., 100 nm; the second band width is 900 nm minus 650 nm, plus 1100 nm minus 1000 nm, i.e., 350 nm. Since 100 nm is less than 350 nm, the band width of the near-infrared light passed by the first filter 031 is less than the band width of the near-infrared light blocked by it. This transmittance curve is only an example; for different filters, the wavelength band of near-infrared light passed by the filter and the wavelength band of near-infrared light blocked by the filter may both differ.
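The band-width comparison of the Fig. 6 example can be reproduced numerically once the pass and block bands are written as wavelength intervals; the interval-list representation is an illustrative assumption.

```python
def band_width(bands):
    """Total width in nm of a list of (low, high) wavelength intervals."""
    return sum(high - low for low, high in bands)

# Fig. 6 example over the incident NIR band 650-1100 nm:
passed_nir = [(900, 1000)]                # near-infrared passed by filter 031
blocked_nir = [(650, 900), (1000, 1100)]  # near-infrared blocked by filter 031

first_width = band_width(passed_nir)      # first band width
second_width = band_width(blocked_nir)    # second band width
print(first_width, second_width, first_width < second_width)  # 100 350 True
```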
To avoid wavelength interference caused by an overly wide band of near-infrared light passing through the first filter 031 during the periods without near-infrared fill light, in some embodiments the above constraint condition may include: the half band width of the near-infrared light passing through the first filter 031 is less than or equal to 50 nm, where the half band width refers to the band width of the near-infrared light whose transmittance is greater than 50%.
Likewise, to avoid wavelength interference caused by an overly wide band of near-infrared light passing through the first filter 031, in some embodiments the above constraint condition may include: a third band width is less than a reference band width, where the third band width refers to the band width of the near-infrared light whose transmittance is greater than a set ratio. As an example, the reference band width may be any band width in the range of 50 nm to 100 nm, and the set ratio may be any ratio in the range of 30% to 50%; of course, the set ratio may also be set to another ratio according to usage requirements, and the embodiments of the present application do not limit this. In other words, the band width of the near-infrared light whose transmittance is greater than the set ratio may be less than the reference band width.
For example, referring to Fig. 6, the band of near-infrared light incident on the first filter 031 is 650 nm to 1100 nm, the set ratio is 30%, and the reference band width is 100 nm. As can be seen from Fig. 6, within the 650 nm to 1100 nm near-infrared band, the band width of the near-infrared light with transmittance greater than 30% is significantly less than 100 nm.
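Both the half-band-width constraint and the set-ratio constraint can be checked from a sampled transmittance curve. The coarse 10 nm grid and the toy curve peaked near 940 nm below are illustrative stand-ins for the Fig. 6 curve, not data from the application.

```python
def width_above(curve, ratio):
    """Band width in nm over which transmittance exceeds `ratio`, for a
    transmittance curve sampled as (wavelength_nm, transmittance) pairs
    on a uniform wavelength grid."""
    step = curve[1][0] - curve[0][0]
    return step * sum(1 for _, t in curve if t > ratio)

# Toy NIR transmittance samples on a 10 nm grid, peaked near 940 nm.
curve = [(900 + 10 * i, t) for i, t in
         enumerate([0.1, 0.4, 0.6, 0.9, 0.9, 0.6, 0.4, 0.1])]

half_band = width_above(curve, 0.50)   # transmittance > 50%
third_band = width_above(curve, 0.30)  # set ratio = 30%
print(half_band, half_band <= 50)      # 40 True
print(third_band, third_band < 100)    # 60 True
```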
Since the first light supplement device 021 provides near-infrared fill light during at least part of the exposure period of the first preset exposure and provides no near-infrared fill light during the entire exposure period of the second preset exposure, and the first preset exposure and the second preset exposure are two of the multiple exposures of the image sensor 01, the first light supplement device 021 provides near-infrared fill light during the exposure periods of some of the exposures of the image sensor 01 and not during the exposure periods of the others. Therefore, the number of fill light pulses of the first light supplement device 021 per unit of time may be lower than the number of exposures of the image sensor 01 per unit of time, with one or more exposures occurring in the interval between every two adjacent fill light pulses.
In one possible implementation, since the human eye easily confuses the color of the near-infrared fill light of the first light supplement device 021 with the color of the red light of a traffic light, referring to Fig. 7, the light supplement unit 02 may further include a second light supplement device 022 configured to provide visible fill light. If the second light supplement device 022 provides visible fill light during at least part of the exposure period of the first preset exposure, i.e., near-infrared fill light and visible fill light are provided together during at least part of that exposure period, the blended color of the two lights differs from the color of the red light of a traffic light, which prevents the human eye from confusing the color of the fill light of the light supplement unit 02 with the color of the red light of a traffic light. In addition, if the second light supplement device 022 provides visible fill light during the exposure period of the second preset exposure, then, because the intensity of visible light during that exposure period is not especially high, providing visible fill light during the exposure period of the second preset exposure can also raise the brightness of the visible light in the second image signal, thereby ensuring the quality of image acquisition.
In some embodiments, the second light supplement device 022 may provide visible fill light in an always-on manner; or it may provide visible fill light stroboscopically, with visible fill light present during at least part of the exposure period of the first preset exposure and absent during the entire exposure period of the second preset exposure; or it may provide visible fill light stroboscopically, with visible fill light absent during the entire exposure period of the first preset exposure and present during at least part of the exposure period of the second preset exposure. When the second light supplement device 022 provides visible fill light in the always-on manner, it both prevents the human eye from confusing the color of the near-infrared fill light of the first light supplement device 021 with the color of the red light of a traffic light and raises the brightness of the visible light in the second image signal, ensuring the quality of image acquisition. When the second light supplement device 022 provides visible fill light stroboscopically, it either prevents that color confusion or raises the brightness of the visible light in the second image signal, and it also reduces the number of fill light pulses of the second light supplement device 022, thereby extending the service life of the second light supplement device 022.
In some embodiments, the above multiple exposures refer to multiple exposures within one frame period; that is, the image sensor 01 performs multiple exposures within one frame period to generate and output at least one frame of the first image signal and at least one frame of the second image signal. For example, if one second contains 25 frame periods, the image sensor 01 performs multiple exposures within each frame period to generate at least one frame of the first image signal and at least one frame of the second image signal; calling the first image signal and the second image signal generated within one frame period one group of image signals, 25 groups of image signals are generated within the 25 frame periods. The first preset exposure and the second preset exposure may be adjacent or non-adjacent among the multiple exposures within one frame period; the embodiments of the present application do not limit this.
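The per-frame grouping in the 25-frame example can be sketched as follows; the tuple representation of a group and the string labels are illustrative assumptions.

```python
def group_signals(frames_per_second=25, exposures_per_frame=("first", "second")):
    """One group = the first and second image signals produced within a
    single frame period; 25 frame periods yield 25 groups."""
    return [tuple(f"{kind}@frame{f}" for kind in exposures_per_frame)
            for f in range(frames_per_second)]

groups = group_signals()
print(len(groups))  # 25 groups of image signals per second
print(groups[0])    # ('first@frame0', 'second@frame0')
```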
The first image signal is generated and output by the first preset exposure, and the second image signal is generated and output by the second preset exposure; after they are generated and output, the first image signal and the second image signal can be processed. In some cases the purposes of the first image signal and the second image signal may differ, so in some embodiments the first preset exposure and the second preset exposure may differ in at least one exposure parameter. As an example, the at least one exposure parameter may include, but is not limited to, one or more of exposure time, analog gain, digital gain, and aperture size, where exposure gain includes analog gain and/or digital gain.
In some embodiments, it can be understood that, compared with the second preset exposure, when near-infrared fill light is provided, the intensity of the near-infrared light sensed by the image sensor 01 is stronger, and the brightness of the near-infrared light contained in the correspondingly generated and output first image signal is higher; but near-infrared light of excessive brightness is unfavorable for capturing information about the external scene. Moreover, in some embodiments, the larger the exposure gain, the higher the brightness of the image signal output by the image sensor 01, and the smaller the exposure gain, the lower that brightness. Therefore, to keep the brightness of the near-infrared light contained in the first image signal within a suitable range, in the case where the first preset exposure and the second preset exposure differ in at least one exposure parameter, as an example, the exposure gain of the first preset exposure may be smaller than the exposure gain of the second preset exposure. In this way, when the first light supplement device 021 provides near-infrared fill light, the brightness of the near-infrared light contained in the first image signal generated and output by the image sensor 01 will not become excessively high because of the near-infrared fill light of the first light supplement device 021.
In other embodiments, the longer the exposure time, the higher the brightness contained in the image signal obtained by the image sensor 01 and the longer the motion smear of moving objects of the external scene in the image signal; the shorter the exposure time, the lower that brightness and the shorter the motion smear. Therefore, to keep the brightness of the near-infrared light contained in the first image signal within a suitable range while keeping the motion smear of moving objects of the external scene short in the first image signal, in the case where the first preset exposure and the second preset exposure differ in at least one exposure parameter, as an example, the exposure time of the first preset exposure may be shorter than the exposure time of the second preset exposure. In this way, when the first light supplement device 021 provides near-infrared fill light, the brightness of the near-infrared light contained in the first image signal generated and output by the image sensor 01 will not become excessively high because of the fill light, and the shorter exposure time keeps the motion smear of moving objects of the external scene short in the first image signal, which benefits the recognition of moving objects. Illustratively, the exposure time of the first preset exposure is 40 milliseconds and the exposure time of the second preset exposure is 60 milliseconds.
It is worth noting that, in some embodiments, when the exposure gain of the first default exposure is smaller than that of the second default exposure, the exposure time of the first default exposure may be either shorter than or equal to the exposure time of the second default exposure. Likewise, when the exposure time of the first default exposure is shorter than that of the second default exposure, the exposure gain of the first default exposure may be either smaller than or equal to the exposure gain of the second default exposure.
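The parameter constraints above can be summarized in a small sketch. This is only an illustrative check, not part of the disclosure; the `ExposureParams` type and the `first_exposure_is_valid` helper are hypothetical names.

```python
from dataclasses import dataclass

@dataclass
class ExposureParams:
    gain: float      # exposure gain (unitless multiplier)
    time_ms: float   # exposure time in milliseconds

def first_exposure_is_valid(first: ExposureParams, second: ExposureParams) -> bool:
    """Check the scenario described above, where the two default exposures
    differ in at least one exposure parameter: the first default exposure's
    gain and time may each be less than or equal to the second's, and at
    least one of the two must be strictly smaller."""
    if first.gain > second.gain or first.time_ms > second.time_ms:
        return False
    return first.gain < second.gain or first.time_ms < second.time_ms

# The 40 ms / 60 ms example from the text, here with equal gains:
print(first_exposure_is_valid(ExposureParams(gain=1.0, time_ms=40),
                              ExposureParams(gain=1.0, time_ms=60)))  # True
```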
In other embodiments, the first image signal and the second image signal may serve the same purpose. For example, when both are used for intelligent analysis, at least one exposure parameter of the first default exposure may be identical to that of the second default exposure, so that a face or target undergoing analysis has the same clarity in both signals while in motion. As one example, the exposure time of the first default exposure may equal the exposure time of the second default exposure; if the two exposure times differ, the image signal with the longer exposure time will exhibit motion smear, making the clarity of the two image signals differ. Similarly, as another example, the exposure gain of the first default exposure may equal the exposure gain of the second default exposure.
It is worth noting that, in some embodiments, when the exposure time of the first default exposure equals that of the second default exposure, the exposure gain of the first default exposure may be either smaller than or equal to the exposure gain of the second default exposure. Likewise, when the exposure gain of the first default exposure equals that of the second default exposure, the exposure time of the first default exposure may be either shorter than or equal to the exposure time of the second default exposure.
The imaging sensor 01 may include multiple photosensitive channels, each of which can sense light of at least one visible light waveband as well as light of the near-infrared waveband. That is to say, because each photosensitive channel senses light of at least one visible waveband and also senses near-infrared light, both the first image signal and the second image signal have complete resolution with no missing pixel values. In one possible implementation, the multiple photosensitive channels may be used to sense light of at least two different visible light wavebands.
In some embodiments, the multiple photosensitive channels may include at least two of an R photosensitive channel, a G photosensitive channel, a B photosensitive channel, a Y photosensitive channel, a W photosensitive channel, and a C photosensitive channel. The R photosensitive channel senses light of the red waveband and the near-infrared waveband, the G photosensitive channel senses light of the green waveband and the near-infrared waveband, the B photosensitive channel senses light of the blue waveband and the near-infrared waveband, and the Y photosensitive channel senses light of the yellow waveband and the near-infrared waveband. Since in some embodiments the photosensitive channel that senses full-waveband light is denoted W, while in other embodiments it is denoted C, when the multiple photosensitive channels include a channel for sensing full-waveband light, that channel may be either a W photosensitive channel or a C photosensitive channel; that is, in practical applications, the full-waveband photosensitive channel can be selected according to the usage requirements. Illustratively, the imaging sensor 01 may be an RGB sensor, an RGBW sensor, an RCCB sensor, or an RYYB sensor. The distribution of the R, G, and B photosensitive channels in an RGB sensor is shown in Fig. 8; the distribution of the R, G, B, and W photosensitive channels in an RGBW sensor is shown in Fig. 9; the distribution of the R, C, and B photosensitive channels in an RCCB sensor is shown in Fig. 10; and the distribution of the R, Y, and B photosensitive channels in an RYYB sensor is shown in Fig. 11.
In other embodiments, some photosensitive channels may sense only light of the near-infrared waveband and no visible light, which ensures that the first image signal has complete resolution with no missing pixel values. As one example, the multiple photosensitive channels may include at least two of an R photosensitive channel, a G photosensitive channel, a B photosensitive channel, and an IR photosensitive channel, where the R photosensitive channel senses light of the red waveband and the near-infrared waveband, the G photosensitive channel senses light of the green waveband and the near-infrared waveband, the B photosensitive channel senses light of the blue waveband and the near-infrared waveband, and the IR photosensitive channel senses light of the near-infrared waveband.

Illustratively, the imaging sensor 01 may be an RGBIR sensor, in which each IR photosensitive channel senses light of the near-infrared waveband but not light of the visible wavebands.
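The channel layouts named here can be pictured as repeating unit cells. Figs. 8 to 11 are not reproduced in this text, so the 2x2 cells below are the conventional mosaics for these sensor types and are assumptions, not the patent's actual figures.

```python
# Illustrative 2x2 unit cells for the sensor mosaics named in the text.
# These are the conventional layouts and are only assumptions; the actual
# channel distributions are given in Figs. 8-11 of the disclosure.
MOSAICS = {
    "RGB (Bayer)": [["R", "G"], ["G", "B"]],
    "RGBW":        [["R", "G"], ["W", "B"]],
    "RCCB":        [["R", "C"], ["C", "B"]],
    "RYYB":        [["R", "Y"], ["Y", "B"]],
}

def tile(pattern, rows=4, cols=4):
    """Tile a unit cell across a small sensor area."""
    h, w = len(pattern), len(pattern[0])
    return [[pattern[r % h][c % w] for c in range(cols)] for r in range(rows)]

for name, cell in MOSAICS.items():
    print(name)
    for row in tile(cell):
        print(" ".join(row))
```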
When the imaging sensor 01 is an RGB sensor, the RGB information it collects is more complete than that of other imaging sensors such as an RGBIR sensor: some of the photosensitive channels of an RGBIR sensor do not collect visible light, so the color details of an image collected by an RGB sensor are more accurate.
It is worth noting that the multiple photosensitive channels included in the imaging sensor 01 may correspond to multiple response curves. Illustratively, referring to Figure 12, the R curve in Figure 12 represents the response curve of the imaging sensor 01 to light of the red waveband, the G curve represents its response curve to light of the green waveband, the B curve represents its response curve to light of the blue waveband, the W (or C) curve represents its response curve to full-waveband light, and the NIR (Near Infrared) curve represents its response curve to light of the near-infrared waveband.
As an example, the imaging sensor 01 may use either a global exposure mode or a rolling shutter exposure mode. In the global exposure mode, every row of the effective image starts exposing at the same moment and finishes exposing at the same moment; in other words, it is an exposure mode in which all rows of the effective image begin exposure simultaneously and end exposure simultaneously. In the rolling shutter exposure mode, the exposure periods of different rows of the effective image do not fully coincide: each row of the effective image starts exposing later than the row above it and also finishes exposing later than the row above it. In addition, in the rolling shutter exposure mode each row of the effective image can output its data once its exposure ends, so the interval from the moment the first row of the effective image begins outputting data to the moment the last row of the effective image finishes outputting data can be expressed as the readout time.
Illustratively, referring to Figure 13, which is a schematic diagram of a rolling shutter exposure mode: the 1st row of the effective image starts exposing at time T1 and finishes exposing at time T3, while the 2nd row starts exposing at time T2 and finishes exposing at time T4, with T2 shifted back by one period relative to T1 and T4 shifted back by one period relative to T3. In addition, the 1st row of the effective image finishes exposing and begins outputting data at time T3 and finishes outputting data at time T5, and the nth row finishes exposing and begins outputting data at time T6 and finishes outputting data at time T7; the interval from T3 to T7 is therefore the readout time.
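The row timing just described can be sketched as follows. This is a simplified model, assuming a uniform per-row delay and ignoring the per-row data-output duration (T5 minus T3 in Figure 13); the function name and parameters are illustrative only.

```python
def rolling_shutter_schedule(num_rows, exposure_ms, row_delay_ms):
    """Per-row (start, end) exposure times for a rolling shutter: each row
    starts and ends slightly later than the row above it. A row's data is
    read out when its exposure ends, so the readout window runs from the
    end of row 1's exposure to the end of the last row's exposure (the
    per-row output duration is ignored in this sketch)."""
    schedule = [(r * row_delay_ms, r * row_delay_ms + exposure_ms)
                for r in range(num_rows)]
    readout_start = schedule[0][1]   # T3 in Fig. 13: row 1 ends, data out begins
    readout_end = schedule[-1][1]    # last row ends exposure
    return schedule, readout_end - readout_start

sched, readout = rolling_shutter_schedule(num_rows=4, exposure_ms=10, row_delay_ms=2)
print(sched)    # [(0, 10), (2, 12), (4, 14), (6, 16)]
print(readout)  # 6
```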
In some embodiments, when the imaging sensor 01 performs multiple exposures using the global exposure mode, then for any one near-infrared light filling, the period of the near-infrared light filling has no intersection with the exposure period of the nearest second default exposure, and the period of the near-infrared light filling is a subset of the exposure period of the first default exposure; alternatively, the period of the near-infrared light filling intersects the exposure period of the first default exposure, or the exposure period of the first default exposure is a subset of the period of the near-infrared light filling. In this way, near-infrared light filling is performed during at least part of the exposure period of the first default exposure, while no near-infrared light filling occurs during the entire exposure period of the second default exposure, so the second default exposure is not affected.
For example, referring to Figure 14, for any one near-infrared light filling, the period of the near-infrared light filling has no intersection with the exposure period of the nearest second default exposure, and the period of the near-infrared light filling is a subset of the exposure period of the first default exposure. Referring to Figure 15, for any one near-infrared light filling, the period of the near-infrared light filling has no intersection with the exposure period of the nearest second default exposure, and the period of the near-infrared light filling intersects the exposure period of the first default exposure. Referring to Figure 16, for any one near-infrared light filling, the period of the near-infrared light filling has no intersection with the exposure period of the nearest second default exposure, and the exposure period of the first default exposure is a subset of the period of the near-infrared light filling. Figures 14 to 16 are only examples; the ordering of the first default exposure and the second default exposure need not be limited to these examples.
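The three global-exposure arrangements above reduce to simple interval tests. The following sketch treats fill-light and exposure periods as half-open `(start, end)` intervals; the helper names are assumptions, not terms from the disclosure.

```python
def intervals_intersect(a, b):
    """True if half-open intervals (start, end) overlap."""
    return a[0] < b[1] and b[0] < a[1]

def is_subset(inner, outer):
    return outer[0] <= inner[0] and inner[1] <= outer[1]

def fill_light_ok_global(fill, first_exposure, second_exposure):
    """Conditions described for the global exposure mode: the fill-light
    period must not intersect the nearest second default exposure, and must
    be a subset of (Fig. 14), intersect (Fig. 15), or be a superset of
    (Fig. 16) the first default exposure's period."""
    if intervals_intersect(fill, second_exposure):
        return False
    return (is_subset(fill, first_exposure)
            or intervals_intersect(fill, first_exposure)
            or is_subset(first_exposure, fill))

# Fill light fully inside the first exposure, clear of the second:
print(fill_light_ok_global((12, 18), (10, 20), (30, 40)))  # True
```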
In other embodiments, when the imaging sensor 01 performs multiple exposures using the rolling shutter exposure mode, then for any one near-infrared light filling, the period of the near-infrared light filling has no intersection with the exposure period of the nearest second default exposure. Moreover, the start of the near-infrared light filling is no earlier than the exposure start of the last row of the effective image in the first default exposure, and the end of the near-infrared light filling is no later than the exposure end of the first row of the effective image in the first default exposure. Alternatively, the start of the near-infrared light filling is no earlier than the exposure end of the last row of the effective image of the nearest second default exposure before the first default exposure and no later than the exposure end of the first row of the effective image in the first default exposure, and the end of the near-infrared light filling is no earlier than the exposure start of the last row of the effective image in the first default exposure and no later than the exposure start of the first row of the effective image of the nearest second default exposure after the first default exposure. Alternatively, the start of the near-infrared light filling is no earlier than the exposure end of the last row of the effective image of the nearest second default exposure before the first default exposure and no later than the exposure start of the first row of the effective image in the first default exposure, and the end of the near-infrared light filling is no earlier than the exposure end of the last row of the effective image in the first default exposure and no later than the exposure start of the first row of the effective image of the nearest second default exposure after the first default exposure.
For example, referring to Figure 17, for any one near-infrared light filling, the period of the near-infrared light filling has no intersection with the exposure period of the nearest second default exposure; the start of the near-infrared light filling is no earlier than the exposure start of the last row of the effective image in the first default exposure, and the end of the near-infrared light filling is no later than the exposure end of the first row of the effective image in the first default exposure. Referring to Figure 18, for any one near-infrared light filling, the period of the near-infrared light filling has no intersection with the exposure period of the nearest second default exposure; the start of the near-infrared light filling is no earlier than the exposure end of the last row of the effective image of the nearest second default exposure before the first default exposure and no later than the exposure end of the first row of the effective image in the first default exposure, and the end of the near-infrared light filling is no earlier than the exposure start of the last row of the effective image in the first default exposure and no later than the exposure start of the first row of the effective image of the nearest second default exposure after the first default exposure. Referring to Figure 19, for any one near-infrared light filling, the period of the near-infrared light filling has no intersection with the exposure period of the nearest second default exposure; the start of the near-infrared light filling is no earlier than the exposure end of the last row of the effective image of the nearest second default exposure before the first default exposure and no later than the exposure start of the first row of the effective image in the first default exposure, and the end of the near-infrared light filling is no earlier than the exposure end of the last row of the effective image in the first default exposure and no later than the exposure start of the first row of the effective image of the nearest second default exposure after the first default exposure. In Figures 17 to 19, for the first default exposure and the second default exposure, an inclined dashed line indicates an exposure start moment and an inclined solid line indicates an exposure end moment; for the first default exposure, the span between the vertical dotted lines indicates the period of the near-infrared light filling corresponding to the first default exposure. Figures 17 to 19 are only examples; the ordering of the first default exposure and the second default exposure need not be limited to these examples.
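The first of the three rolling shutter alternatives (the Figure 17 case) can be checked with a short sketch. Only that one case is implemented here, on the simplified per-row timing model; the function name and row representation are illustrative assumptions.

```python
def fill_within_common_window(fill_start, fill_end, row_windows):
    """Fig. 17 condition for rolling shutter: the fill light starts no
    earlier than the exposure start of the LAST row of the first default
    exposure and ends no later than the exposure end of the FIRST row, so
    every row is exposed under fill light for the whole fill period.
    row_windows is a list of (start, end) exposure times per row, top to
    bottom."""
    last_row_start = row_windows[-1][0]
    first_row_end = row_windows[0][1]
    return fill_start >= last_row_start and fill_end <= first_row_end

rows = [(0, 10), (2, 12), (4, 14), (6, 16)]   # simplified per-row timings
print(fill_within_common_window(7, 9, rows))  # True: inside every row's window
print(fill_within_common_window(1, 9, rows))  # False: starts before the last row
```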
The multiple exposures may include odd-numbered exposures and even-numbered exposures, so the first default exposure and the second default exposure may include, but are not limited to, the following modes:

In a first possible implementation, the first default exposure is one of the odd-numbered exposures and the second default exposure is one of the even-numbered exposures, so that the multiple exposures include first default exposures and second default exposures arranged in odd-even order. For example, in the multiple exposures, the odd-numbered exposures such as the 1st, 3rd, and 5th exposures are all first default exposures, and the even-numbered exposures such as the 2nd, 4th, and 6th exposures are all second default exposures.

In a second possible implementation, the first default exposure is one of the even-numbered exposures and the second default exposure is one of the odd-numbered exposures, so that the multiple exposures again include first default exposures and second default exposures arranged in odd-even order. For example, in the multiple exposures, the odd-numbered exposures such as the 1st, 3rd, and 5th exposures are all second default exposures, and the even-numbered exposures such as the 2nd, 4th, and 6th exposures are all first default exposures.

In a third possible implementation, the first default exposure is one of a set of designated odd-numbered exposures, and the second default exposure is one of the other exposures apart from the designated odd-numbered exposures; that is, the second default exposure may be an odd-numbered or an even-numbered exposure among the multiple exposures.

In a fourth possible implementation, the first default exposure is one of a set of designated even-numbered exposures, and the second default exposure is one of the other exposures apart from the designated even-numbered exposures; that is, the second default exposure may be an odd-numbered or an even-numbered exposure among the multiple exposures.

In a fifth possible implementation, the first default exposure is one exposure in a first exposure sequence, and the second default exposure is one exposure in a second exposure sequence.

In a sixth possible implementation, the first default exposure is one exposure in the second exposure sequence, and the second default exposure is one exposure in the first exposure sequence.

Here, the multiple exposures include multiple exposure sequences, and the first exposure sequence and the second exposure sequence are the same exposure sequence or two different exposure sequences among the multiple exposure sequences. Each exposure sequence includes N exposures, and the N exposures include 1 first default exposure and N-1 second default exposures, or the N exposures include 1 second default exposure and N-1 first default exposures, where N is a positive integer greater than 2.
For example, each exposure sequence includes 3 exposures. These 3 exposures may include 1 first default exposure and 2 second default exposures, in which case the 1st exposure of each exposure sequence is the first default exposure and the 2nd and 3rd exposures are second default exposures; that is, each exposure sequence can be expressed as: first default exposure, second default exposure, second default exposure. Alternatively, the 3 exposures may include 1 second default exposure and 2 first default exposures, in which case the 1st exposure of each exposure sequence is the second default exposure and the 2nd and 3rd exposures are first default exposures; that is, each exposure sequence can be expressed as: second default exposure, first default exposure, first default exposure.

The above provides only six possible implementations of the first default exposure and the second default exposure. Practical applications are not limited to these six implementations, and the embodiments of the present application do not restrict this.
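The arrangements above can be generated with two small helpers: one for the parity-based modes (the first and second implementations) and one for an N-exposure sequence (the fifth and sixth implementations). The helper names are illustrative; placing the single minority exposure first in a sequence follows the worked example in the text but is otherwise an assumption.

```python
def alternating_schedule(num_exposures, first_on_odd=True):
    """First/second default exposures arranged by parity. Exposure counts
    are 1-based, as in the text (1st, 3rd, 5th... are the odd ones)."""
    return ["first" if (i % 2 == 1) == first_on_odd else "second"
            for i in range(1, num_exposures + 1)]

def sequence_schedule(n, minority="first"):
    """One exposure sequence of N exposures (N > 2) containing one exposure
    of the minority kind and N-1 of the other kind. The single minority
    exposure is placed first, as in the worked example above."""
    majority = "second" if minority == "first" else "first"
    return [minority] + [majority] * (n - 1)

print(alternating_schedule(6))  # ['first', 'second', 'first', 'second', 'first', 'second']
print(sequence_schedule(3))     # ['first', 'second', 'second']
```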
In some embodiments, referring to Fig. 20, the filtering assembly 03 further includes a second optical filter 032 and a switching part 033, and both the first optical filter 031 and the second optical filter 032 are connected to the switching part 033. The switching part 033 is used to switch the second optical filter 032 to the incident side of the imaging sensor 01. After the second optical filter 032 is switched to the incident side of the imaging sensor 01, the second optical filter 032 passes light of the visible wavebands and blocks light of the near-infrared waveband, and the imaging sensor 01 is used to generate and output a third image signal through exposure.

It should be noted that the switching part 033 switching the second optical filter 032 to the incident side of the imaging sensor 01 can also be understood as the second optical filter 032 replacing the first optical filter 031 at the position on the incident side of the imaging sensor 01. After the second optical filter 032 is switched to the incident side of the imaging sensor 01, the first light compensating apparatus 021 may be in either a closed state or an open state.
In summary, when the visible light intensity in the ambient light is weak, for example at night, the first light compensating apparatus 021 can perform stroboscopic near-infrared light filling so that the imaging sensor 01 generates and outputs a first image signal containing near-infrared luminance information and a second image signal containing visible light luminance information. Because the first image signal and the second image signal are both acquired by the same imaging sensor 01, the viewpoint of the first image signal is identical to the viewpoint of the second image signal, so complete information about the external scene can be obtained through the first image signal and the second image signal. When the visible light intensity is strong, for example in the daytime, the proportion of near-infrared light is higher and the color reproduction of the acquired image is poor; in that case the imaging sensor 01 can generate and output a third image signal containing visible light luminance information, so that even in the daytime an image with good color reproduction can be collected. In other words, regardless of the strength of the visible light intensity, that is, regardless of whether it is day or night, the realistic color information of the external scene can be obtained efficiently and conveniently, which improves the flexibility of use of the image acquisition unit 1 and also facilitates compatibility with other image acquisition units. Moreover, in this case the image processor 2 can process the third image signal and output third image information, and the human face analysis unit 3 can perform human face analysis on the third image information to obtain a face analysis result.
The present application uses the exposure time series of the imaging sensor 01 to control the near-infrared light filling timing of the light compensating apparatus, so that near-infrared light filling is performed during the first default exposure to generate the first image signal, and no near-infrared light filling is performed during the second default exposure to generate the second image signal. Such a data acquisition mode can, with a simple structure and reduced cost, directly collect the first image signal and the second image signal with their different luminance information; that is, two different image signals can be obtained through a single imaging sensor 01, which simplifies the image acquisition unit 1 and makes obtaining the first image signal and the second image signal more efficient. Moreover, since the first image signal and the second image signal are both generated and output by the same imaging sensor 01, the viewpoint corresponding to the first image signal is identical to the viewpoint corresponding to the second image signal. Therefore, the information of the external scene can be obtained jointly through the first image signal and the second image signal, without the misalignment of images generated from the first image signal and the second image signal that would otherwise be caused by the two signals having different corresponding viewpoints.
2. Image processor 2
The image processor 2 may be a logical platform containing signal processing algorithms or programs. For example, the image processor 2 may be a computer based on the X86 or ARM architecture, or an FPGA (Field-Programmable Gate Array) logic circuit.

Referring to Fig. 21, the image processor 2 is used to process at least one of the first image signal and the second image signal using a first processing parameter to obtain first image information. The image processor 2 is also used to process at least one of the first image signal and the second image signal using a second processing parameter to obtain second image information; the second image information is then transferred to a display device, and the display device displays the second image information.
In the embodiments of the present application, the first image signal and the second image signal can be combined and processed flexibly according to the two different application demands of human face analysis and display, so that both demands can be better satisfied.

It should be noted that the processing that the image processor 2 performs on at least one of the first image signal and the second image signal may include at least one of black level correction, image interpolation, digital gain, white balance, image noise reduction, image enhancement, image fusion, and the like.
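The selectable processing chain just listed can be pictured as a configurable pipeline. The sketch below is purely illustrative: the two steps are toy placeholders operating on a flat list of pixel values, and the names and parameter values are assumptions, not taken from the disclosure.

```python
# Toy stand-ins for two of the operations named above; real black-level
# correction and digital gain operate on raw sensor data and are more involved.
def black_level(img, offset=16):
    return [max(p - offset, 0) for p in img]

def digital_gain(img, gain=2):
    return [min(p * gain, 255) for p in img]

# Which steps run, and with what settings, is what the first and second
# processing parameters select (hypothetical configuration shown here).
PIPELINES = {
    "analysis": [black_level, digital_gain],
    "display": [black_level],
}

def process(img, purpose):
    for step in PIPELINES[purpose]:
        img = step(img)
    return img

print(process([20, 100, 200], "analysis"))  # [8, 168, 255]
```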
In addition, the first processing parameter and the second processing parameter may be the same or different. Optionally, when the first image information and the second image information are both obtained by processing the first image signal, or both obtained by processing the second image signal, or both obtained by processing the first image signal and the second image signal, the first processing parameter and the second processing parameter may differ. The first processing parameter can be configured in advance according to the human face analysis demand, and the second processing parameter can be configured in advance according to the display demand. The first processing parameter and the second processing parameter are the parameters required when performing processing such as black level correction, image interpolation, digital gain, white balance, image noise reduction, image enhancement, or image fusion on at least one of the first image signal and the second image signal.
Furthermore, since the first image information is used for human face analysis, the image processor 2 can flexibly choose a suitable combination of the first processing parameter and image signals to obtain the first image information, so as to achieve an image effect more conducive to human face analysis and improve the face recognition accuracy. Similarly, since the second image information is used for display, the image processor 2 can flexibly choose a suitable combination of the second processing parameter and image signals to obtain the second image information, so as to achieve a better-quality image display effect.
As an example, the image processor 2 may process the first image signal, which contains near-infrared light information, using the first processing parameter, and output grayscale image information as the first image information. In this case, because the first image signal contains near-infrared light information, the image quality of the grayscale image information obtained by processing the first image signal is good, making it well suited to human face analysis and able to improve the face recognition accuracy.

As an example, the image processor 2 may process the second image signal, which contains visible light information, using the second processing parameter, and output color image information as the second image information. In this case, because the second image signal contains visible light information, the color reproduction of the color image information obtained by processing the second image signal is accurate, making it well suited to display and able to improve the image display effect.
As an example, the image processor 2 may process the first image signal and the second image signal using the first processing parameter and output the first image information. In this case, the image processor 2 needs to perform image fusion processing on the first image signal and the second image signal.

As an example, the image processor 2 may process the first image signal and the second image signal using the second processing parameter and output the second image information. In this case, the image processor 2 likewise needs to perform image fusion processing on the first image signal and the second image signal.
It should be noted that, because the image acquisition unit 1 generates the first image signal and the second image signal at different times, the two signals do not enter the image processor 2 at the same time. If the image processor 2 needs to perform image fusion processing on the first image signal and the second image signal, it must first synchronize them.

Thus, the image processor 2 may include a cache for storing at least one of the first image signal and the second image signal, so as to synchronize the first image signal with the second image signal. In this case, the image processor 2 can perform image fusion processing on the synchronized first image signal and second image signal to obtain the first image information. Of course, the cache may also be used to store other information, for example at least one of the first image information and the second image information.
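The caching scheme described here can be sketched as a small synchronizer that holds whichever signal arrives first. This is an illustrative model only: the class name is hypothetical, frames are assumed to pair up strictly in arrival order, and the fusion step is a trivial per-pixel average standing in for real image fusion.

```python
from collections import deque

class FrameSynchronizer:
    """Cache whichever image signal arrives first until its counterpart
    arrives, then hand both to a fusion step."""
    def __init__(self):
        self.first_q = deque()   # first image signals (near-infrared)
        self.second_q = deque()  # second image signals (visible light)

    def push(self, kind, frame):
        (self.first_q if kind == "first" else self.second_q).append(frame)
        if self.first_q and self.second_q:
            return self.fuse(self.first_q.popleft(), self.second_q.popleft())
        return None  # counterpart not yet available; frame stays cached

    @staticmethod
    def fuse(nir, visible):
        # Placeholder fusion: average the two signals pixel by pixel.
        return [(a + b) // 2 for a, b in zip(nir, visible)]

sync = FrameSynchronizer()
print(sync.push("first", [10, 20]))   # None: second signal not here yet
print(sync.push("second", [30, 40]))  # [20, 30]: both present, fused
```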
For example, when the first image signal enters the image processor 2 earlier than the second image signal, the image processor 2 can first store the first image signal in the cache and then, after the second image signal has also entered the image processor 2, perform image fusion processing on the first image signal and the second image signal. Conversely, when the second image signal enters the image processor 2 earlier than the first image signal, the image processor 2 can first store the second image signal in the cache and then, after the first image signal has entered the image processor 2, perform image fusion processing on the first image signal and the second image signal.
Further, the image processor 2 is also configured to adjust the exposure parameters of the image acquisition unit 1 while processing at least one of the first image signal and the second image signal. Specifically, during such processing, the image processor 2 may determine an exposure parameter adjustment value according to attribute parameters generated in the course of processing, and then send a control signal carrying the exposure parameter adjustment value to the image acquisition unit 1, which adjusts its own exposure parameters according to that value.
It should be noted that the attribute parameters generated while processing at least one of the first image signal and the second image signal may include image resolution, image brightness, image contrast, and the like.
In addition, when the image processor 2 adjusts the exposure parameters of the image acquisition unit 1, it is the exposure parameters of the image sensor 01 in the image acquisition unit 1 that are adjusted.
Furthermore, because the working state of the light aid 02 and the working state of the filtering assembly 03 are closely associated with the exposure parameters of the image sensor 01, the image processor 2 may also control the working states of the light aid 02 and the filtering assembly 03 while adjusting the exposure parameters of the image sensor 01. For example, the image processor 2 may control the on/off state of the first light compensating apparatus 021 in the light aid 02, the on/off state of the second light compensating apparatus 022 in the light aid 02, and the switching between the first optical filter 031 and the second optical filter 032 in the filtering assembly 03.
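As a sketch of the exposure feedback described above, the adjustment value might be derived from measured image brightness. The function name, target value, and clamping range below are illustrative assumptions, not values taken from the embodiment.

```python
def exposure_adjustment(mean_brightness, target=128.0,
                        min_ratio=0.5, max_ratio=2.0):
    """Compute a multiplicative exposure adjustment value from measured
    image brightness (an attribute parameter of the processed image),
    clamped so the exposure never changes too abruptly."""
    if mean_brightness <= 0:
        # Completely dark frame: push exposure up as far as allowed.
        return max_ratio
    ratio = target / mean_brightness
    return max(min_ratio, min(max_ratio, ratio))
```

The image processor would send this value to the image acquisition unit, which scales the image sensor's exposure time accordingly.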
3. Human face analysis unit 3
The human face analysis unit 3 is a logical platform that contains a face analysis algorithm or program. For example, the human face analysis unit 3 may be a computer based on an X86 or ARM architecture, or may be an FPGA logic circuit. The human face analysis unit 3 may share hardware with the image processor 2; for example, the two may run on the same FPGA logic circuit. Of course, the human face analysis unit 3 and the image processor 2 may also not share hardware, and the embodiments of the present application are not limited in this respect.
Referring to Figure 22, the human face analysis unit 3 may include a face detection subunit 311, a face recognition subunit 312, and a face database 313.
In a possible implementation, at least one piece of reference face information is stored in the face database 313. The face detection subunit 311 is configured to perform face detection on the first image information, output the detected face image, and perform liveness identification on the face image. The face recognition subunit 312 is configured to, when the face image passes the liveness identification, extract the face information of the face image and compare it with the at least one piece of reference face information stored in the face database 313 to obtain a face analysis result.
It should be noted that the at least one piece of reference face information stored in the face database 313 may be set in advance. For example, the multiple pieces of reference face information may be the face information of face images of preset users who possess a certain permission (such as the permission to open an access control).
In addition, the face detection subunit 311 may perform face detection on the first image information and perform liveness identification on the detected face image, so as to prevent spoofing attacks using photos, video recordings, masks, and the like. Moreover, when the face image fails the liveness identification, the operation can be terminated directly and the face analysis result determined to be face recognition failure.
Furthermore, when the face image passes the liveness identification, the face recognition subunit 312 may compare the face information of the face image with the at least one piece of reference face information stored in the face database 313. If the face information of the face image is successfully matched against any piece of reference face information, the face analysis result can be determined to be recognition success; if it fails to match all of the at least one piece of reference face information, the face analysis result can be determined to be recognition failure.
It should be noted that the face information may be face feature data or the like, and the face feature data may include face-shape curvature, attributes (such as size, position, and distance) of face contour points (such as the eye iris, the nose wing, and the mouth corner), and so on.
As an example, when the face recognition subunit 312 compares the face information of the face image with the at least one piece of reference face information stored in the face database 313, then for any piece of reference face information among them, the face recognition subunit 312 may calculate the matching degree between that reference face information and the face information of the face image. When the matching degree is greater than or equal to a matching degree threshold, the comparison between that reference face information and the face information of the face image is determined to be successful; when the matching degree is less than the matching degree threshold, the comparison is determined to have failed. The matching degree threshold can be configured in advance.
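The matching-degree comparison could be sketched as follows. The embodiment does not fix a particular metric, so cosine similarity over feature vectors and the threshold of 0.8 are illustrative assumptions.

```python
import numpy as np

def match_degree(feature_a, feature_b):
    """Cosine similarity as an illustrative matching degree between two
    face-feature vectors."""
    a = np.asarray(feature_a, dtype=float)
    b = np.asarray(feature_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def compare(face_feat, reference_feats, threshold=0.8):
    """Recognition succeeds if any reference face information matches the
    extracted face information at or above the configured threshold."""
    return any(match_degree(face_feat, ref) >= threshold
               for ref in reference_feats)
```

Comparison against each stored reference stops as soon as one match succeeds, matching the "any reference face information" condition in the text.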
In another possible implementation, at least one piece of reference face information is stored in the face database 313. The face detection subunit 311 is configured to perform face detection on the first image information, output the detected first face image, and perform liveness identification on the first face image; and to perform face detection on the second image information, output the detected second face image, and perform liveness identification on the second face image. The face recognition subunit 312 is configured to, when both the first face image and the second face image pass the liveness identification, extract the face information of the first face image and compare it with the at least one piece of reference face information stored in the face database 313 to obtain a face analysis result.
It should be noted that the at least one piece of reference face information stored in the face database 313 may be set in advance. For example, it may be the face information of face images of preset users who possess a certain permission.
In addition, the face detection subunit 311 may perform face detection on the first image information and the second image information, and perform liveness identification on the detected first face image and second face image. When either of the two face images fails the liveness identification, the operation can be terminated directly and the face analysis result determined to be face recognition failure. In this way, the face detection subunit 311 realizes multispectral liveness identification through the first image information and the second image information, which effectively improves the accuracy of liveness identification.
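The multispectral liveness decision described above amounts to a conjunction over the two spectra, with an early exit on failure. This minimal sketch uses hypothetical boolean inputs in place of real liveness classifiers.

```python
def multispectral_liveness(ir_live, visible_live):
    """A face passes only when liveness holds in both the near-infrared
    and the visible spectrum."""
    return ir_live and visible_live

def analyse(ir_live, visible_live, recognise):
    # Early exit: if either spectrum fails, recognition is skipped and
    # the result is reported directly as a recognition failure.
    if not multispectral_liveness(ir_live, visible_live):
        return "recognition failure"
    return recognise()
```

Requiring both spectra makes flat photos or screens (which reflect near-infrared fill light very differently from skin) much harder to pass off as live faces.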
Furthermore, when both the first face image and the second face image pass the liveness identification, the face recognition subunit 312 may compare the face information of the first face image with the at least one piece of reference face information stored in the face database 313. If the face information of the first face image is successfully matched against any piece of reference face information, the face analysis result can be determined to be recognition success; if it fails to match all of the at least one piece of reference face information, the face analysis result can be determined to be recognition failure.
It should be noted that the face information may be face feature data or the like, and the face feature data may include face-shape curvature, attributes of face contour points, and so on.
As an example, when the face recognition subunit 312 compares the face information of the first face image with the at least one piece of reference face information stored in the face database 313, then for any piece of reference face information among them, the face recognition subunit 312 may calculate the matching degree between that reference face information and the face information of the first face image. When the matching degree is greater than or equal to a matching degree threshold, the comparison is determined to be successful; when the matching degree is less than the matching degree threshold, the comparison is determined to have failed. The matching degree threshold can be configured in advance.
In yet another possible implementation, at least one piece of reference face information is stored in the face database 313. The face detection subunit 311 is configured to perform face detection on the second image information, output the detected second face image, and perform liveness identification on the second face image; and, when the second face image passes the liveness identification, to perform face detection on the first image information and output the detected first face image. The face recognition subunit 312 is configured to extract the face information of the first face image and compare it with the at least one piece of reference face information stored in the face database 313 to obtain a face analysis result.
For example, suppose the first image information is grayscale image information obtained by processing the first image signal, the second image information is color image information obtained by processing the second image signal, and at least one piece of reference face information is stored in the face database. The face detection subunit is configured to perform face detection on the color image information, output the detected color face image, perform liveness identification on the color face image, and, when the color face image passes the liveness identification, perform face detection on the grayscale image information and output the detected grayscale face image. The face recognition subunit is configured to extract the face information of the grayscale face image and compare it with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
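The cascaded flow in this example (detection and liveness on the colour image, recognition on the grayscale image) can be sketched as below; `detect`, `is_live`, and `extract` are hypothetical stand-ins for real algorithms, and the comparison is simplified to equality.

```python
def cascaded_recognition(color_image, gray_image, detect, is_live,
                         extract, references):
    """Detection and liveness run on the colour image; feature extraction
    and database comparison run on the grayscale image."""
    color_face = detect(color_image)
    if color_face is None or not is_live(color_face):
        return "recognition failure"          # liveness gate failed
    gray_face = detect(gray_image)
    if gray_face is None:
        return "recognition failure"          # no face in grayscale image
    feat = extract(gray_face)
    if any(feat == ref for ref in references):  # simplified comparison
        return "recognition success"
    return "recognition failure"
```

Running liveness on one spectrum and recognition on the other is what lets each image type contribute the information it captures best.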
As another example, suppose the first image information is grayscale image information obtained by processing the first image signal, the second image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal, and at least one piece of reference face information is stored in the face database. The face detection subunit is configured to perform face detection on the fused image information, output the detected fused face image, perform liveness identification on the fused face image, and, when the fused face image passes the liveness identification, perform face detection on the grayscale image information and output the detected grayscale face image. The face recognition subunit is configured to extract the face information of the grayscale face image and compare it with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
As another example, suppose the first image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal, the second image information is grayscale image information obtained by processing the first image signal, and at least one piece of reference face information is stored in the face database. The face detection subunit is configured to perform face detection on the grayscale image information, output the detected grayscale face image, perform liveness identification on the grayscale face image, and, when the grayscale face image passes the liveness identification, perform face detection on the fused image information and output the detected fused face image. The face recognition subunit is configured to extract the face information of the fused face image and compare it with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
As another example, suppose the first image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal, the second image information is color image information obtained by processing the second image signal, and at least one piece of reference face information is stored in the face database. The face detection subunit is configured to perform face detection on the color image information, output the detected color face image, perform liveness identification on the color face image, and, when the color face image passes the liveness identification, perform face detection on the fused image information and output the detected fused face image. The face recognition subunit is configured to extract the face information of the fused face image and compare it with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
As a further example, suppose the first image information is first fused image information obtained by performing image fusion processing on the first image signal and the second image signal, the second image information is second fused image information obtained by performing image fusion processing on the first image signal and the second image signal, and at least one piece of reference face information is stored in the face database. The face detection subunit is configured to perform face detection on the second fused image information, output the detected second fused face image, perform liveness identification on the second fused face image, and, when the second fused face image passes the liveness identification, perform face detection on the first fused image information and output the detected first fused face image. The face recognition subunit is configured to extract the face information of the first fused face image and compare it with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
Further, as shown in Figure 23, in the embodiments of the present application, not only can the image processor 2 transfer the second image information to a display device for display, but the human face analysis unit 3 can also, after obtaining the face analysis result, transfer the face analysis result to the display device so that it is displayed by the display device. In this way, the user can learn the face analysis result in a timely manner.
In the embodiments of the present application, the face identification device includes the image acquisition unit 1, the image processor 2, and the human face analysis unit 3. The image acquisition unit 1 includes the filtering assembly 03, the filtering assembly includes the first optical filter 031, and the first optical filter 031 passes visible light and part of the near-infrared light. Through the first preset exposure and the second preset exposure, the image acquisition unit 1 can simultaneously acquire a first image signal containing near-infrared light information (such as near-infrared luminance information) and a second image signal containing visible light information. Compared with image processing approaches that must later separate the near-infrared light information and the visible light information from an acquired original image signal, the image acquisition unit 1 in the present application can directly acquire the first image signal and the second image signal, so the acquisition process is simple and efficient. As a result, the first image information obtained after the image processor 2 processes at least one of the first image signal and the second image signal is of higher quality, and the human face analysis unit 3 can then obtain a more accurate face analysis result from the first image information, thereby effectively improving the face recognition accuracy.
Figure 24 is a structural schematic diagram of an access control equipment provided by an embodiment of the present application. Referring to Figure 24, the access control equipment includes an access controller 001 and the face identification device 002 shown in any of Figures 1 to 23 above. The face identification device 002 is configured to transfer the face analysis result to the access controller 001. The access controller 001 is configured to output a control signal for opening the access control when the face analysis result is recognition success, and to perform no operation when the face analysis result is recognition failure.
In the embodiments of the present application, the access control equipment includes the access controller 001 and the face identification device 002. Because the face recognition accuracy of the face identification device 002 is high, the precise control of the access controller 001 can be guaranteed, ensuring the security of the access control.
It should be noted that the face identification device provided by the embodiments of the present application can be applied not only to access control equipment but also to other devices that require face recognition, such as payment devices; the embodiments of the present application are not limited in this respect.
The face identification method is described below based on the face identification device provided by the embodiments shown in Figures 1 to 23 above. Referring to Figure 25, the method includes:
Step 251: pass visible light and part of the near-infrared light through the first optical filter.
Step 252: acquire a first image signal and a second image signal through the image acquisition unit, the first image signal being an image signal generated according to a first preset exposure and the second image signal being an image signal generated according to a second preset exposure, wherein near-infrared fill light is performed at least within part of the exposure period of the first preset exposure, and no near-infrared fill light is performed within the exposure period of the second preset exposure.
Step 253: process at least one of the first image signal and the second image signal through the image processor to obtain first image information.
Step 254: perform face analysis on the first image information through the human face analysis unit to obtain a face analysis result.
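Steps 251 to 254 can be summarised as a staged pipeline (the optical filtering of step 251 happens in hardware before acquisition); the callables below are hypothetical stand-ins for the acquisition, processing, and analysis stages.

```python
def face_recognition_method(acquire, process, analyse):
    """Sketch of steps 252-254: acquire the two image signals, process
    them into first image information, then run face analysis on it."""
    first_signal, second_signal = acquire()                   # step 252
    first_image_info = process(first_signal, second_signal)   # step 253
    return analyse(first_image_info)                          # step 254
```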
In a possible implementation, the image acquisition unit includes an image sensor and a light aid; the image sensor is located on the light exit side of the filtering assembly, and the light aid includes a first light compensating apparatus. Multiple exposures are performed by the image sensor to generate and output the first image signal and the second image signal, the first preset exposure and the second preset exposure being two of the multiple exposures; near-infrared fill light is performed by the first light compensating apparatus.
In a possible implementation, when the central wavelength at which the first light compensating apparatus performs near-infrared fill light is a set characteristic wavelength or falls within a set characteristic wavelength range, the central wavelength and/or the waveband width of the near-infrared light passing through the first optical filter reaches a constraint condition.
In a possible implementation, the central wavelength at which the first light compensating apparatus performs near-infrared fill light is any wavelength within the range of 750 ± 10 nanometers; or any wavelength within the range of 780 ± 10 nanometers; or any wavelength within the range of 940 ± 10 nanometers.
In a possible implementation, the constraint condition includes:
the difference between the central wavelength of the near-infrared light passing through the first optical filter and the central wavelength at which the first light compensating apparatus performs near-infrared fill light lies within a wavelength fluctuation range of 0 to 20 nanometers; or
the half-bandwidth of the near-infrared light passing through the first optical filter is less than or equal to 50 nanometers; or
a first waveband width is less than a second waveband width, where the first waveband width refers to the waveband width of the near-infrared light passing through the first optical filter and the second waveband width refers to the waveband width of the near-infrared light blocked by the first optical filter; or
a third waveband width is less than a reference waveband width, where the third waveband width refers to the waveband width of the near-infrared light whose pass rate is greater than a set ratio, and the reference waveband width is any waveband width within the range of 50 nanometers to 150 nanometers.
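The alternative constraint conditions above can be sketched as a disjunctive check; the helper below is illustrative, and satisfying any one branch is treated as sufficient, as in the text.

```python
def meets_constraint(filter_center_nm, fill_center_nm,
                     half_bandwidth_nm, passed_band_nm, blocked_band_nm):
    """Check the alternative constraint conditions on the first optical
    filter; satisfying any one branch is sufficient (values in nm)."""
    # Center wavelengths of filter and fill light within 0-20 nm.
    wave_range_ok = abs(filter_center_nm - fill_center_nm) <= 20
    # Half-bandwidth of passed near-infrared light at most 50 nm.
    half_bw_ok = half_bandwidth_nm <= 50
    # Passed near-infrared band narrower than the blocked band.
    band_ok = passed_band_nm < blocked_band_nm
    return wave_range_ok or half_bw_ok or band_ok
```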
In a possible implementation, the image sensor includes multiple photosensitive channels, and each photosensitive channel is configured to sense light of at least one visible light waveband as well as light of the near-infrared waveband.
In a possible implementation, the image sensor performs the multiple exposures in a global exposure mode. For any one near-infrared fill light, there is no intersection between the period of the near-infrared fill light and the exposure period of the nearest second preset exposure, and the period of the near-infrared fill light is a subset of the exposure period of the first preset exposure; alternatively, there is an intersection between the period of the near-infrared fill light and the exposure period of the first preset exposure, or the exposure period of the first preset exposure is a subset of the period of the near-infrared fill light.
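For the global-shutter case, the timing relationship can be sketched as an interval check. This covers the variant in which the fill-light period may merely intersect the first preset exposure; all times are in arbitrary units and the function is illustrative.

```python
def global_exposure_ok(fill_start, fill_end,
                       first_exp_start, first_exp_end,
                       second_exp_start, second_exp_end):
    """Global-shutter timing constraint: the fill-light period must not
    intersect the second preset exposure, and must intersect the first
    preset exposure."""
    no_overlap_second = (fill_end <= second_exp_start
                         or fill_start >= second_exp_end)
    overlaps_first = (fill_start < first_exp_end
                      and fill_end > first_exp_start)
    return no_overlap_second and overlaps_first
```

This is what guarantees that the second image signal contains only visible light information while the first image signal receives the near-infrared fill light.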
In a possible implementation, the image sensor performs the multiple exposures in a rolling-shutter exposure mode, and for any one near-infrared fill light, there is no intersection between the period of the near-infrared fill light and the exposure period of the nearest second preset exposure.
The start time of the near-infrared fill light is no earlier than the exposure start time of the last row of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no later than the exposure end time of the first row of the effective image in the first preset exposure;
alternatively, the start time of the near-infrared fill light is no earlier than the exposure end time of the last row of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure end time of the first row of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure start time of the last row of the effective image in the first preset exposure and no later than the exposure start time of the first row of the effective image of the nearest second preset exposure after the first preset exposure;
alternatively, the start time of the near-infrared fill light is no earlier than the exposure end time of the last row of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure start time of the first row of the effective image in the first preset exposure, and the end time of the near-infrared fill light is no earlier than the exposure end time of the last row of the effective image in the first preset exposure and no later than the exposure start time of the first row of the effective image of the nearest second preset exposure after the first preset exposure.
In a possible implementation, at least one exposure parameter of the first preset exposure differs from that of the second preset exposure, the at least one exposure parameter being one or more of exposure time, exposure gain, and aperture size, where the exposure gain includes an analog gain and/or a digital gain.
In a possible implementation, at least one exposure parameter of the first preset exposure is the same as that of the second preset exposure, the at least one exposure parameter including one or more of exposure time, exposure gain, and aperture size, where the exposure gain includes an analog gain and/or a digital gain.
In a possible implementation, at least one of the first image signal and the second image signal is processed by the image processor using a first processing parameter to obtain the first image information; at least one of the first image signal and the second image signal is processed by the image processor using a second processing parameter to obtain second image information; and the second image information is transferred by the image processor to a display device so that the second image information is displayed by the display device.
In a possible implementation, when the first image information and the second image information are both obtained by processing the first image signal, or both obtained by processing the second image signal, or both obtained by processing the first image signal and the second image signal, the first processing parameter and the second processing parameter are different.
In a possible implementation, the processing performed by the image processor on at least one of the first image signal and the second image signal includes at least one of black level correction, image interpolation, digital gain, white balance, image noise reduction, image enhancement, and image fusion.
In a possible implementation, the image processor includes a buffer; at least one of the first image signal and the second image signal is stored by the buffer, or at least one of the first image information and the second image information is stored by the buffer.
In a possible implementation, the exposure parameters of the image acquisition unit are adjusted by the image processor in the course of processing at least one of the first image signal and the second image signal.
In a possible implementation, the human face analysis unit includes a face detection subunit, a face recognition subunit, and a face database;
at least one piece of reference face information is stored by the face database;
face detection is performed on the first image information by the face detection subunit, the detected face image is output, and liveness identification is performed on the face image;
when the face image passes the liveness identification, the face information of the face image is extracted by the face recognition subunit and compared with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
In a possible implementation, the human face analysis unit includes a face detection subunit, a face recognition subunit, and a face database;
at least one piece of reference face information is stored by the face database;
face detection is performed on the first image information by the face detection subunit, the detected first face image is output, and liveness identification is performed on the first face image; face detection is also performed on the second image information, the detected second face image is output, and liveness identification is performed on the second face image;
when both the first face image and the second face image pass the liveness identification, the face information of the first face image is extracted by the face recognition subunit and compared with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
In a possible implementation, the first image information is grayscale image information obtained by processing the first image signal, the second image information is color image information obtained by processing the second image signal, and the human face analysis unit includes a face detection subunit, a face recognition subunit, and a face database;
at least one piece of reference face information is stored by the face database;
face detection is performed on the color image information by the face detection subunit, the detected color face image is output, and liveness identification is performed on the color face image; when the color face image passes the liveness identification, face detection is performed on the grayscale image information and the detected grayscale face image is output;
the face information of the grayscale face image is extracted by the face recognition subunit and compared with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
In a possible implementation, the first image information is grayscale image information obtained by processing the first image signal, the second image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal, and the human face analysis unit includes a face detection subunit, a face recognition subunit, and a face database;
at least one piece of reference face information is stored by the face database;
face detection is performed on the fused image information by the face detection subunit, the detected fused face image is output, and liveness identification is performed on the fused face image; when the fused face image passes the liveness identification, face detection is performed on the grayscale image information and the detected grayscale face image is output;
the face information of the grayscale face image is extracted by the face recognition subunit and compared with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
In a possible implementation, the first image information is fused image information obtained by performing image fusion processing on the first image signal and the second image signal, the second image information is grayscale image information obtained by processing the first image signal, and the human face analysis unit includes a face detection subunit, a face recognition subunit, and a face database;
at least one piece of reference face information is stored by the face database;
face detection is performed on the grayscale image information by the face detection subunit, the detected grayscale face image is output, and liveness identification is performed on the grayscale face image; when the grayscale face image passes the liveness identification, face detection is performed on the fused image information and the detected fused face image is output;
the face information of the fused face image is extracted by the face recognition subunit and compared with the at least one piece of reference face information stored in the face database to obtain a face analysis result.
In one possible implementation, the first image information is fused image information obtained by carrying out image fusion processing on the first image signal and the second image signal, and the second image information is color image information obtained by processing the second image signal; the human face analysis unit includes: a face detection subelement, a face recognition subelement and a face database;
At least one piece of reference face information is stored in the face database;
Face detection is carried out on the color image information by the face detection subelement, and the detected color face image is output; living body identification is carried out on the color face image, and when the color face image passes living body identification, face detection is carried out on the fused image information, and the detected fused face image is output;
The face information of the fused face image is extracted by the face recognition subelement, and the face information of the fused face image is compared with at least one piece of reference face information stored in the face database, obtaining the human face analysis result.
In one possible implementation, the first image information is first fused image information obtained by carrying out image fusion processing on the first image signal and the second image signal, and the second image information is second fused image information obtained by carrying out image fusion processing on the first image signal and the second image signal; the human face analysis unit includes: a face detection subelement, a face recognition subelement and a face database;
At least one piece of reference face information is stored in the face database;
Face detection is carried out on the second fused image information by the face detection subelement, and the detected second fused face image is output; living body identification is carried out on the second fused face image, and when the second fused face image passes living body identification, face detection is carried out on the first fused image information, and the detected first fused face image is output;
The face information of the first fused face image is extracted by the face recognition subelement, and the face information of the first fused face image is compared with at least one piece of reference face information stored in the face database, obtaining the human face analysis result.
In one possible implementation, the human face analysis result is transferred to a display device by the human face analysis unit, and the human face analysis result is shown by the display device.
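The cascaded flow shared by the implementations above (detect a face in one image stream, check it for liveness, then re-detect and recognize in the other stream against a reference database) can be sketched as follows. This is only an illustrative sketch: the detection, liveness and feature-extraction models are passed in as callables because the patent does not specify them, and the cosine-similarity threshold of 0.6 is an assumed value.

```python
import numpy as np

def crop(image, box):
    """Crop a (top, bottom, left, right) box out of an image array."""
    t, b, l, r = box
    return image[t:b, l:r]

def match_face(feature, face_database, threshold=0.6):
    """Compare an extracted face feature against each stored reference
    feature using cosine similarity; return the best matching id, or
    None if no reference clears the threshold."""
    best_id, best_score = None, -1.0
    f = np.asarray(feature, dtype=float)
    for person_id, ref in face_database:
        r = np.asarray(ref, dtype=float)
        score = float(np.dot(f, r) / (np.linalg.norm(f) * np.linalg.norm(r)))
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= threshold else None

def analyze_face(detect_image, recognize_image, face_database,
                 detect_face, is_live, extract_features):
    """Cascade: detect on one stream, liveness-check it, then detect and
    recognize on the other stream (models are injected as callables)."""
    box = detect_face(detect_image)
    if box is None or not is_live(crop(detect_image, box)):
        return None                      # no face found, or spoof rejected
    box2 = detect_face(recognize_image)
    if box2 is None:
        return None
    return match_face(extract_features(crop(recognize_image, box2)),
                      face_database)
```

Rejecting the image early when liveness fails is the point of the cascade: the recognition stream is never compared against the database for a spoofed face.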
It should be noted that this embodiment and the embodiments shown in Fig. 1 to Fig. 23 above may adopt the same inventive concept; for an explanation of the content of this embodiment, reference may accordingly be made to the related content of the embodiments shown in Fig. 1 to Fig. 23, and details are not described herein again.
In the embodiment of the present application, the face identification device includes an image acquisition unit, an image processor and a human face analysis unit. The image acquisition unit includes a filtering assembly, the filtering assembly includes a first optical filter, and the first optical filter passes visible light and part of the near infrared light. Through a first preset exposure and a second preset exposure, the image acquisition unit can simultaneously collect a first image signal containing near infrared light information (such as near infrared light luminance information) and a second image signal containing visible light information. Compared with an image processing approach that needs to separate the near infrared light information and the visible light information from an acquired original image signal at a later stage, the image acquisition unit of the present application can directly collect the first image signal and the second image signal, so the collection process is simple and effective. In this way, the first image information obtained after the image processor processes at least one of the first image signal and the second image signal is of higher quality, and the human face analysis unit then obtains a more accurate human face analysis result after carrying out human face analysis on the first image information, so that the face recognition accuracy rate can be effectively improved.
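The interleaved acquisition described above can be illustrated with a small simulation. This is a sketch under assumed interfaces: the `Frame` record, the 2 ms / 8 ms exposure times and the strict first/second alternation are illustrative choices, not values from the patent.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    index: int
    ir_fill: bool          # near-infrared fill light on during this exposure?
    exposure_time_us: int

def capture_pair(frame_index, first_exposure_us=2000, second_exposure_us=8000):
    """Simulate one acquisition cycle: a first preset exposure with
    near-infrared fill light (yielding the NIR-bearing first image signal)
    followed by a second preset exposure without fill light (yielding the
    visible-light second image signal)."""
    first = Frame(frame_index, ir_fill=True, exposure_time_us=first_exposure_us)
    second = Frame(frame_index + 1, ir_fill=False, exposure_time_us=second_exposure_us)
    return first, second

def capture_stream(n_cycles):
    """Interleave first and second preset exposures over n_cycles cycles,
    as a single sensor performing multiple exposures would."""
    frames = []
    for i in range(n_cycles):
        frames.extend(capture_pair(2 * i))
    return frames
```

The key property, mirrored from the text, is that the fill light is synchronized to the first preset exposure only, so every even frame carries near-infrared information and every odd frame carries visible-light information without any post-hoc separation step.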
The foregoing are merely preferred embodiments of the application and are not intended to limit the application; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the application shall be included within the scope of protection of the application.
Claims (25)
1. A face identification device, characterized in that the face identification device includes: an image acquisition unit (1), an image processor (2) and a human face analysis unit (3);
the image acquisition unit (1) includes a filtering assembly (03), the filtering assembly (03) includes a first optical filter (031), and the first optical filter (031) passes visible light and part of the near infrared light;
the image acquisition unit (1) is used to acquire a first image signal and a second image signal, the first image signal is an image signal generated according to a first preset exposure, and the second image signal is an image signal generated according to a second preset exposure, wherein near infrared fill light is carried out at least within a partial exposure period of the first preset exposure, and no near infrared fill light is carried out within the exposure period of the second preset exposure;
the image processor (2) is used to process at least one of the first image signal and the second image signal to obtain first image information;
the human face analysis unit (3) is used to carry out human face analysis on the first image information to obtain a human face analysis result.
2. The face identification device as described in claim 1, characterized in that the image acquisition unit (1) includes: an image sensor (01) and a light supplement device (02), and the image sensor (01) is located on the light exit side of the filtering assembly (03);
the image sensor (01) is used to generate and output the first image signal and the second image signal through multiple exposures, wherein the first preset exposure and the second preset exposure are two of the multiple exposures;
the light supplement device (02) includes a first light supplement apparatus (021), and the first light supplement apparatus (021) is used to carry out near infrared fill light.
3. The face identification device as claimed in claim 2, characterized in that
when the central wavelength at which the first light supplement apparatus (021) carries out near infrared fill light is a set characteristic wavelength or falls within a set characteristic wavelength range, the central wavelength and/or the waveband width of the near infrared light passing through the first optical filter (031) reaches a constraint condition.
4. The face identification device as claimed in claim 3, characterized in that
the central wavelength at which the first light supplement apparatus (021) carries out near infrared fill light is any wavelength within the wavelength range of 750±10 nanometers; or
the central wavelength at which the first light supplement apparatus (021) carries out near infrared fill light is any wavelength within the wavelength range of 780±10 nanometers; or
the central wavelength at which the first light supplement apparatus (021) carries out near infrared fill light is any wavelength within the wavelength range of 940±10 nanometers.
5. The face identification device as claimed in claim 3, characterized in that the constraint condition includes:
the difference between the central wavelength of the near infrared light passing through the first optical filter (031) and the central wavelength at which the first light supplement apparatus (021) carries out near infrared fill light lies within a wavelength fluctuation range, and the wavelength fluctuation range is 0 to 20 nanometers; or
the half-band width of the near infrared light passing through the first optical filter (031) is less than or equal to 50 nanometers; or
a first waveband width is less than a second waveband width, wherein the first waveband width refers to the waveband width of the near infrared light passing through the first optical filter (031), and the second waveband width refers to the waveband width of the near infrared light blocked by the first optical filter (031); or
a third waveband width is less than a reference waveband width, wherein the third waveband width refers to the waveband width of the near infrared light whose pass rate is greater than a set proportion, and the reference waveband width is any waveband width within the wavelength band of 50 nanometers to 150 nanometers.
6. The face identification device as claimed in claim 2, characterized in that the image sensor (01) includes multiple photosensitive channels, and each photosensitive channel is used to sense light in at least one visible light waveband and to sense light in the near infrared waveband.
7. The face identification device as claimed in claim 2, characterized in that
the image sensor (01) carries out multiple exposures in a global exposure mode; for any near infrared fill light, there is no intersection between the period of the near infrared fill light and the exposure period of the nearest second preset exposure, and the period of the near infrared fill light is a subset of the exposure period of the first preset exposure, or there is an intersection between the period of the near infrared fill light and the exposure period of the first preset exposure, or the exposure period of the first preset exposure is a subset of the period of the near infrared fill light.
8. The face identification device as claimed in claim 2, characterized in that
the image sensor (01) carries out multiple exposures in a rolling shutter exposure mode, and for any near infrared fill light, there is no intersection between the period of the near infrared fill light and the exposure period of the nearest second preset exposure;
the start moment of the near infrared fill light is no earlier than the exposure start moment of the last row of the effective image in the first preset exposure, and the end moment of the near infrared fill light is no later than the exposure end moment of the first row of the effective image in the first preset exposure; or
the start moment of the near infrared fill light is no earlier than the exposure end moment of the last row of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure end moment of the first row of the effective image in the first preset exposure, and the end moment of the near infrared fill light is no earlier than the exposure start moment of the last row of the effective image in the first preset exposure and no later than the exposure start moment of the first row of the effective image of the nearest second preset exposure after the first preset exposure; or
the start moment of the near infrared fill light is no earlier than the exposure end moment of the last row of the effective image of the nearest second preset exposure before the first preset exposure and no later than the exposure start moment of the first row of the effective image in the first preset exposure, and the end moment of the near infrared fill light is no earlier than the exposure end moment of the last row of the effective image in the first preset exposure and no later than the exposure start moment of the first row of the effective image of the nearest second preset exposure after the first preset exposure.
9. The face identification device as described in claim 1, characterized in that
the first preset exposure differs from the second preset exposure in at least one exposure parameter, the at least one exposure parameter is one or more of exposure time, exposure gain and aperture size, and the exposure gain includes an analog gain and/or a digital gain.
10. The face identification device as described in claim 1, characterized in that
the first preset exposure is identical to the second preset exposure in at least one exposure parameter, the at least one exposure parameter includes one or more of exposure time, exposure gain and aperture size, and the exposure gain includes an analog gain and/or a digital gain.
11. The face identification device according to any one of claims 1-10, characterized in that
the image processor (2) is used to process at least one of the first image signal and the second image signal using a first processing parameter to obtain the first image information;
the image processor (2) is also used to process at least one of the first image signal and the second image signal using a second processing parameter to obtain second image information;
the image processor (2) is also used to transfer the second image information to a display device, and the second image information is shown by the display device.
12. The face identification device as claimed in claim 11, characterized in that when both the first image information and the second image information are obtained by processing the first image signal, or when both the first image information and the second image information are obtained by processing the second image signal, or when both the first image information and the second image information are obtained by processing the first image signal and the second image signal, the first processing parameter and the second processing parameter are different.
13. The face identification device as claimed in claim 11, characterized in that the processing carried out by the image processor (2) on at least one of the first image signal and the second image signal includes at least one of black level correction, image interpolation, digital gain, white balance, image noise reduction, image enhancement and image fusion.
14. The face identification device as claimed in claim 11, characterized in that the image processor (2) includes a cache;
the cache is used to store at least one of the first image signal and the second image signal, or to store at least one of the first image information and the second image information.
15. The face identification device according to any one of claims 1-10, characterized in that the image processor (2) is also used to adjust an exposure parameter of the image acquisition unit (1) in the process of processing at least one of the first image signal and the second image signal.
16. The face identification device according to any one of claims 1-10, characterized in that the human face analysis unit (3) includes: a face detection subelement (311), a face recognition subelement (312) and a face database (313);
at least one piece of reference face information is stored in the face database (313);
the face detection subelement (311) is used to carry out face detection on the first image information, output the detected face image, and carry out living body identification on the face image;
the face recognition subelement (312) is used to, when the face image passes living body identification, extract the face information of the face image and compare the face information of the face image with at least one piece of reference face information stored in the face database (313) to obtain the human face analysis result.
17. The face identification device as claimed in claim 11, characterized in that the human face analysis unit (3) includes: a face detection subelement (311), a face recognition subelement (312) and a face database (313);
at least one piece of reference face information is stored in the face database (313);
the face detection subelement (311) is used to carry out face detection on the first image information, output the detected first face image, carry out living body identification on the first face image, and is also used to carry out face detection on the second image information, output the detected second face image, and carry out living body identification on the second face image;
the face recognition subelement (312) is used to, when both the first face image and the second face image pass living body identification, extract the face information of the first face image and compare the face information of the first face image with at least one piece of reference face information stored in the face database (313) to obtain the human face analysis result.
18. The face identification device as claimed in claim 11, characterized in that the first image information is gray image information obtained by processing the first image signal, the second image information is color image information obtained by processing the second image signal, and the human face analysis unit (3) includes: a face detection subelement (311), a face recognition subelement (312) and a face database (313);
at least one piece of reference face information is stored in the face database (313);
the face detection subelement (311) is used to carry out face detection on the color image information, output the detected color face image, carry out living body identification on the color face image, and, when the color face image passes living body identification, carry out face detection on the gray image information and output the detected gray face image;
the face recognition subelement (312) is used to extract the face information of the gray face image and compare the face information of the gray face image with at least one piece of reference face information stored in the face database (313) to obtain the human face analysis result.
19. The face identification device as claimed in claim 11, characterized in that the first image information is gray image information obtained by processing the first image signal, the second image information is fused image information obtained by carrying out image fusion processing on the first image signal and the second image signal, and the human face analysis unit (3) includes: a face detection subelement (311), a face recognition subelement (312) and a face database (313);
at least one piece of reference face information is stored in the face database (313);
the face detection subelement (311) is used to carry out face detection on the fused image information, output the detected fused face image, carry out living body identification on the fused face image, and, when the fused face image passes living body identification, carry out face detection on the gray image information and output the detected gray face image;
the face recognition subelement (312) is used to extract the face information of the gray face image and compare the face information of the gray face image with at least one piece of reference face information stored in the face database (313) to obtain the human face analysis result.
20. The face identification device as claimed in claim 11, characterized in that the first image information is fused image information obtained by carrying out image fusion processing on the first image signal and the second image signal, the second image information is gray image information obtained by processing the first image signal, and the human face analysis unit (3) includes: a face detection subelement (311), a face recognition subelement (312) and a face database (313);
at least one piece of reference face information is stored in the face database (313);
the face detection subelement (311) is used to carry out face detection on the gray image information, output the detected gray face image, carry out living body identification on the gray face image, and, when the gray face image passes living body identification, carry out face detection on the fused image information and output the detected fused face image;
the face recognition subelement (312) is used to extract the face information of the fused face image and compare the face information of the fused face image with at least one piece of reference face information stored in the face database (313) to obtain the human face analysis result.
21. The face identification device as claimed in claim 11, characterized in that the first image information is fused image information obtained by carrying out image fusion processing on the first image signal and the second image signal, the second image information is color image information obtained by processing the second image signal, and the human face analysis unit (3) includes: a face detection subelement (311), a face recognition subelement (312) and a face database (313);
at least one piece of reference face information is stored in the face database (313);
the face detection subelement (311) is used to carry out face detection on the color image information, output the detected color face image, carry out living body identification on the color face image, and, when the color face image passes living body identification, carry out face detection on the fused image information and output the detected fused face image;
the face recognition subelement (312) is used to extract the face information of the fused face image and compare the face information of the fused face image with at least one piece of reference face information stored in the face database (313) to obtain the human face analysis result.
22. The face identification device as claimed in claim 11, characterized in that the first image information is first fused image information obtained by carrying out image fusion processing on the first image signal and the second image signal, the second image information is second fused image information obtained by carrying out image fusion processing on the first image signal and the second image signal, and the human face analysis unit (3) includes: a face detection subelement (311), a face recognition subelement (312) and a face database (313);
at least one piece of reference face information is stored in the face database (313);
the face detection subelement (311) is used to carry out face detection on the second fused image information, output the detected second fused face image, carry out living body identification on the second fused face image, and, when the second fused face image passes living body identification, carry out face detection on the first fused image information and output the detected first fused face image;
the face recognition subelement (312) is used to extract the face information of the first fused face image and compare the face information of the first fused face image with at least one piece of reference face information stored in the face database (313) to obtain the human face analysis result.
23. The face identification device according to any one of claims 1-10, characterized in that the human face analysis unit (3) is also used to transfer the human face analysis result to a display device, and the human face analysis result is shown by the display device.
24. An access control equipment, characterized in that the access control equipment includes an access controller and the face identification device according to any one of claims 1-23;
the face identification device is used to transfer the human face analysis result to the access controller;
the access controller is used to output a control signal for opening the entrance guard when the human face analysis result indicates successful identification.
25. A face identification method, applied to a face identification device, the face identification device including: an image acquisition unit, an image processor and a human face analysis unit, the image acquisition unit including a filtering assembly, and the filtering assembly including a first optical filter, characterized in that the method includes:
passing visible light and part of the near infrared light through the first optical filter;
acquiring a first image signal and a second image signal by the image acquisition unit, the first image signal being an image signal generated according to a first preset exposure, and the second image signal being an image signal generated according to a second preset exposure, wherein near infrared fill light is carried out at least within a partial exposure period of the first preset exposure, and no near infrared fill light is carried out within the exposure period of the second preset exposure;
processing at least one of the first image signal and the second image signal by the image processor to obtain first image information;
carrying out human face analysis on the first image information by the human face analysis unit to obtain a human face analysis result.
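The alternative constraint conditions of claim 5 relating the optical filter's near-infrared passband to the fill light can be expressed as a simple predicate. This is an illustrative sketch only: the function name, the way a filter's passband is summarized by a few scalar widths, and the example values below are assumptions; all quantities are in nanometers.

```python
def meets_constraint(filter_center_nm, fill_center_nm,
                     half_band_width_nm,
                     passed_band_width_nm=None, blocked_band_width_nm=None):
    """Check the alternative constraint conditions of claim 5:
    (a) the filter and fill-light center wavelengths differ by at most
        20 nm (the claimed wavelength fluctuation range), OR
    (b) the half-band width of the passed near-infrared light is
        at most 50 nm, OR
    (c) the passed near-infrared waveband is narrower than the blocked
        near-infrared waveband (when both widths are known)."""
    if abs(filter_center_nm - fill_center_nm) <= 20:
        return True
    if half_band_width_nm <= 50:
        return True
    if (passed_band_width_nm is not None and blocked_band_width_nm is not None
            and passed_band_width_nm < blocked_band_width_nm):
        return True
    return False
```

For example, a filter centered at 940 nm paired with a 945 nm fill light satisfies the first alternative regardless of its bandwidth, while a mismatched pair can still comply through a sufficiently narrow passband.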
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910472703.7A CN110490042B (en) | 2019-05-31 | 2019-05-31 | Face recognition device and entrance guard's equipment |
PCT/CN2020/091910 WO2020238805A1 (en) | 2019-05-31 | 2020-05-22 | Facial recognition apparatus and door access control device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910472703.7A CN110490042B (en) | 2019-05-31 | 2019-05-31 | Face recognition device and entrance guard's equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110490042A true CN110490042A (en) | 2019-11-22 |
CN110490042B CN110490042B (en) | 2022-02-11 |
Family
ID=68546292
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910472703.7A Active CN110490042B (en) | 2019-05-31 | 2019-05-31 | Face recognition device and entrance guard's equipment |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110490042B (en) |
WO (1) | WO2020238805A1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020238804A1 (en) * | 2019-05-31 | 2020-12-03 | 杭州海康威视数字技术股份有限公司 | Image acquisition apparatus and image acquisition method |
WO2020238806A1 (en) * | 2019-05-31 | 2020-12-03 | 杭州海康威视数字技术股份有限公司 | Image collection apparatus and photography method |
WO2020238805A1 (en) * | 2019-05-31 | 2020-12-03 | 杭州海康威视数字技术股份有限公司 | Facial recognition apparatus and door access control device |
WO2021109458A1 (en) * | 2019-12-02 | 2021-06-10 | 浙江宇视科技有限公司 | Object recognition method and apparatus, electronic device and readable storage medium |
CN113128259A (en) * | 2019-12-30 | 2021-07-16 | 杭州海康威视数字技术股份有限公司 | Face recognition device and face recognition method |
CN116978104A (en) * | 2023-08-11 | 2023-10-31 | 泰智达(北京)网络科技有限公司 | Face recognition system |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN203193649U (en) * | 2013-04-16 | 2013-09-11 | 北京天诚盛业科技有限公司 | Electronic signature device |
CN105187727A (en) * | 2015-06-17 | 2015-12-23 | 广州市巽腾信息科技有限公司 | Image information acquisition device, image acquisition method and use of image information acquisition device |
CN105868753A (en) * | 2016-04-05 | 2016-08-17 | 浙江宇视科技有限公司 | Color identification method and apparatus of blue license plate |
CN106449617A (en) * | 2015-08-05 | 2017-02-22 | 杭州海康威视数字技术股份有限公司 | Light source device used for generating light, light supplement method thereof, and light supplement device thereof |
CN107005639A (en) * | 2014-12-10 | 2017-08-01 | 索尼公司 | Image pick up equipment, image pickup method, program and image processing equipment |
US20170237887A1 (en) * | 2014-11-13 | 2017-08-17 | Panasonic Intellectual Property Management Co. Ltd. | Imaging device and imaging method |
CN108234898A (en) * | 2018-02-07 | 2018-06-29 | 信利光电股份有限公司 | Sync pulse jamming method, filming apparatus, mobile terminal and the readable storage medium storing program for executing of multi-cam |
CN109635760A (en) * | 2018-12-18 | 2019-04-16 | 深圳市捷顺科技实业股份有限公司 | A kind of face identification method and relevant device |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100576231C (en) * | 2007-01-15 | 2009-12-30 | 中国科学院自动化研究所 | Image collecting device and use the face identification system and the method for this device |
KR101165415B1 (en) * | 2010-05-24 | 2012-07-13 | 주식회사 다음커뮤니케이션 | Method for recognizing human face and recognizing apparatus |
CN101931755B (en) * | 2010-07-06 | 2012-05-23 | 上海洪剑智能科技有限公司 | Modulated light filtering device for face recognition and filtering method |
CN107220621A (en) * | 2017-05-27 | 2017-09-29 | 北京小米移动软件有限公司 | Terminal carries out the method and device of recognition of face |
CN108289179A (en) * | 2018-02-08 | 2018-07-17 | 深圳泰华安全技术工程有限公司 | A method of improving video signal collection anti-interference ability |
CN110312079A (en) * | 2018-03-20 | 2019-10-08 | 北京中科奥森科技有限公司 | Image collecting device and its application system |
CN110490041B (en) * | 2019-05-31 | 2022-03-15 | 杭州海康威视数字技术股份有限公司 | Face image acquisition device and method |
CN110490187B (en) * | 2019-05-31 | 2022-04-15 | 杭州海康威视数字技术股份有限公司 | License plate recognition device and method |
CN110490042B (en) * | 2019-05-31 | 2022-02-11 | 杭州海康威视数字技术股份有限公司 | Face recognition device and entrance guard's equipment |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN203193649U (en) * | 2013-04-16 | 2013-09-11 | 北京天诚盛业科技有限公司 | Electronic signature device |
US20170237887A1 (en) * | 2014-11-13 | 2017-08-17 | Panasonic Intellectual Property Management Co. Ltd. | Imaging device and imaging method |
CN107005639A (en) * | 2014-12-10 | 2017-08-01 | 索尼公司 | Image pick up equipment, image pickup method, program and image processing equipment |
CN105187727A (en) * | 2015-06-17 | 2015-12-23 | 广州市巽腾信息科技有限公司 | Image information acquisition device, image acquisition method and use of image information acquisition device |
CN106449617A (en) * | 2015-08-05 | 2017-02-22 | 杭州海康威视数字技术股份有限公司 | Light source device for generating light, and light supplement method and device thereof |
CN105868753A (en) * | 2016-04-05 | 2016-08-17 | 浙江宇视科技有限公司 | Color identification method and apparatus for blue license plates |
CN108234898A (en) * | 2018-02-07 | 2018-06-29 | 信利光电股份有限公司 | Synchronous shooting method for multiple cameras, shooting device, mobile terminal and readable storage medium |
CN109635760A (en) * | 2018-12-18 | 2019-04-16 | 深圳市捷顺科技实业股份有限公司 | Face recognition method and related device |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020238804A1 (en) * | 2019-05-31 | 2020-12-03 | 杭州海康威视数字技术股份有限公司 | Image acquisition apparatus and image acquisition method |
WO2020238806A1 (en) * | 2019-05-31 | 2020-12-03 | 杭州海康威视数字技术股份有限公司 | Image collection apparatus and photography method |
WO2020238805A1 (en) * | 2019-05-31 | 2020-12-03 | 杭州海康威视数字技术股份有限公司 | Facial recognition apparatus and door access control device |
US11889032B2 (en) | 2019-05-31 | 2024-01-30 | Hangzhou Hikvision Digital Technology Co., Ltd. | Apparatus for acquiring image and method for acquiring image |
WO2021109458A1 (en) * | 2019-12-02 | 2021-06-10 | 浙江宇视科技有限公司 | Object recognition method and apparatus, electronic device and readable storage medium |
EP4071660A4 (en) * | 2019-12-02 | 2023-12-06 | Zhejiang Uniview Technologies Co., Ltd. | Object recognition method and apparatus, electronic device and readable storage medium |
CN113128259A (en) * | 2019-12-30 | 2021-07-16 | 杭州海康威视数字技术股份有限公司 | Face recognition device and face recognition method |
CN113128259B (en) * | 2019-12-30 | 2023-08-29 | 杭州海康威视数字技术股份有限公司 | Face recognition device and face recognition method |
CN116978104A (en) * | 2023-08-11 | 2023-10-31 | 泰智达(北京)网络科技有限公司 | Face recognition system |
Also Published As
Publication number | Publication date |
---|---|
CN110490042B (en) | 2022-02-11 |
WO2020238805A1 (en) | 2020-12-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110490042A (en) | Face recognition device and access control equipment | |
CN110493494A (en) | Image fusion device and image fusion method | |
CN110493491A (en) | Image acquisition device and image acquisition method | |
WO2020238903A1 (en) | Device and method for acquiring face images | |
CN110519489B (en) | Image acquisition method and device | |
CN110490044A (en) | Face modeling device and face modeling method | |
CN110505377A (en) | Image fusion device and method | |
CN108419061A (en) | Multispectral-based image fusion device, method and image sensor | |
CN110493536A (en) | Image acquisition device and image acquisition method | |
CN110490811A (en) | Image noise reduction device and image noise reduction method | |
CN110490187A (en) | License plate recognition device and method | |
CN110493535A (en) | Image acquisition device and image acquisition method | |
CN106101549A (en) | Automatic day/night switching method, apparatus and system | |
CN110493496A (en) | Image acquisition device and method | |
CN110706178A (en) | Image fusion device, method, equipment and storage medium | |
CN110493495A (en) | Image acquisition device and image acquisition method | |
CN110493537B (en) | Image acquisition device and image acquisition method | |
CN105915869A (en) | Color-adaptive compressive computational ghost imaging system and method | |
CN110493492A (en) | Image acquisition device and image acquisition method | |
CN110493533B (en) | Image acquisition device and image acquisition method | |
CN110505376A (en) | Image collecting device and method | |
CN110493493A (en) | Panoramic detail camera and method for obtaining image signal | |
CN105934941B (en) | Photographic device, video signal processing method and storage medium | |
CN114374776A (en) | Camera and camera control method | |
AU709844B2 (en) | Method for replacing the background of an image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||