CN109394170B - Reflection-free iris information measuring system - Google Patents

Reflection-free iris information measuring system

Info

Publication number
CN109394170B
CN109394170B (application CN201811212959.6A)
Authority
CN
China
Prior art keywords
iris
image
acquisition
information
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811212959.6A
Other languages
Chinese (zh)
Other versions
CN109394170A (en)
Inventor
左忠斌
左达宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianmu Love Vision (beijing) Technology Co Ltd
Original Assignee
Tianmu Love Vision (beijing) Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianmu Love Vision (beijing) Technology Co Ltd
Priority to CN201911145926.9A (CN111126145A)
Priority to CN201811212959.6A (CN109394170B)
Publication of CN109394170A
Application granted
Publication of CN109394170B
Legal status: Active (current)
Anticipated expiration

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/19Sensors therefor

Abstract

The present invention provides a reflection-free iris information measuring system and acquisition system. The measuring system includes: a light source, for providing illumination to a non-target iris; an isolating device, for preventing light from the light source from entering a target iris; an acquisition device, for acquiring target iris information; and a measuring device, for measuring the size of the iris according to the iris information; the non-target iris and the target iris belong to different eyes of the same person. It is recognized for the first time that when one eye of a person receives illumination during iris acquisition, the other eye exhibits a reaction similar to that of the illuminated eye, and this phenomenon is used for the fill-light process of iris acquisition.

Description

Reflection-free iris information measuring system
Technical field
The present invention relates to the field of measurement technology, and in particular to the field of measuring iris size and appearance dimensions.
Background art
During iris acquisition and measurement, the pupil needs to be constricted in order to increase the effective iris area. A common way of constricting the pupil at present is to apply fill light during iris measurement, i.e., to use a certain level of illumination so that the amount of light entering the pupil increases, causing a physiological constriction of the pupil. This process necessarily requires the use of a light source.
However, the crystalline lens in front of the eyeball is a transparent object and reflects the light of the light source when the light source illuminates the eye, so that an image of the light source is formed on the eye. This image of the light source affects the acquisition of the iris: when the image of the light source overlaps the iris, the iris information cannot be collected, and only a bright image of the light source is captured.
To prevent this problem, the prior art usually adjusts the positions of the light source, the acquired iris and the camera so that the image of the light source falls outside the acquired region, for example on the pupil or at the edge of the iris. Even so, the image of the light source still affects the acquisition to some extent, and a complicated adjustment process is needed to place the image of the light source at the designated position.
Therefore, there is an urgent need for a device that can provide fill light for an iris acquisition device so that, during acquisition, the iris is not affected by the reflection of the light source in the eye, and that is simple and stable.
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide a reflection-free iris information measuring system and acquisition system that overcome the above problems or at least partially solve them.
The present invention provides a reflection-free iris information measuring system, including:
a light source, for providing illumination to a non-target iris;
an isolating device, for preventing light from the light source from entering a target iris;
an acquisition device, for acquiring target iris information;
the non-target iris and the target iris belonging respectively to different eyes of the same person; and
a measuring device, for measuring the size of the iris according to the iris information.
The present invention also provides a reflection-free iris information acquisition system, including:
a light source, for providing illumination to a non-target iris;
an isolating device, for preventing light from the light source from entering a target iris;
an acquisition device, for acquiring target iris information;
the non-target iris and the target iris belonging respectively to different eyes of the same person.
Optionally, the isolating device includes a baffle.
Optionally, the isolating device includes a beam director.
Optionally, the acquisition device is an image acquisition device that provides an acquisition area and acquires images of the target iris.
Optionally, the image acquisition device obtains images of the target iris from different directions at multiple acquisition areas.
Optionally, the baffle is at least opaque to light of a specific characteristic.
Optionally, the target iris information is image information of the target iris.
Optionally, when multiple images are acquired, the positions of the image acquisition device at least satisfy, for any two adjacent positions, the following conditions:
H·(1 − cos b) = L·sin 2b;
a = m·b;
0 < m < 0.8;
where L is the distance from the image acquisition device to the object, H is the actual size of the object in the acquired image, a is the angle between the optical axes of the image acquisition device at the two adjacent positions, and m is a coefficient.
Optionally, when multiple images are acquired, any three adjacent positions of the image acquisition device satisfy that the three images acquired at those positions all contain a part representing the same region of the object.
The present invention also provides a reflection-free iris information acquisition system, including:
a light source, for providing illumination to a non-target iris;
an isolating device, for attenuating light from the light source that enters a target iris;
an image acquisition device, for acquiring image information of the target iris;
the non-target iris and the target iris belonging respectively to different eyes of the same person.
Optionally, the isolating device includes a baffle.
Optionally, the baffle is at least opaque to light of a specific characteristic.
Optionally, the isolating device includes a beam director.
Optionally, the beam director is an orienting device inside the light source, an orienting device outside the light source, or a light-source rotating device.
Optionally, the image acquisition device is a single camera.
Optionally, the image acquisition device rotates about a central axis.
Optionally, the image acquisition device obtains images of the object from different directions at multiple acquisition areas.
Optionally, when the multiple images are acquired, the positions of the image acquisition device at least satisfy, for any two adjacent positions, the following conditions:
H·(1 − cos b) = L·sin 2b;
a = m·b;
0 < m < 0.8;
where L is the distance from the image acquisition device to the object, H is the actual size of the object in the acquired image, a is the angle between the optical axes of the image acquisition device at the two adjacent positions, and m is a coefficient.
Optionally, when the multiple images are acquired, any three adjacent positions of the image acquisition device satisfy that the three images acquired at those positions all contain a part representing the same region of the object.
Inventive points and technical effects
1. The prior art holds the technical prejudice that illumination must be used to constrict the pupil and enlarge the iris acquisition area when acquiring an iris, so that the iris needs to be illuminated while it is being acquired. The present invention overcomes this prejudice and for the first time proposes a technique in which the acquired iris is not irradiated by a light source.
2. It is recognized for the first time that when one eye of a person receives illumination during iris acquisition, the other eye exhibits a reaction similar to that of the illuminated eye, and this phenomenon is used for the fill-light process of iris acquisition; the invention therefore proposes preventing reflections by using a baffle to stop the light of the light source from entering the target iris.
3. Existing fill-light systems prevent the image of the light source on the acquired iris from affecting acquisition by controlling the position of the light-source image and the emission characteristics of the light source. The present invention for the first time proposes using an isolating device to attenuate the light of the light source entering the target iris, so that no light-source image appears to affect iris acquisition.
4. It is recognized and proposed for the first time that the volume of the cameras makes the resolution of a multi-camera matrix low, so that it is not well suited to iris acquisition. It is therefore proposed to improve the acquisition resolution by forming a virtual camera matrix over a certain period of time, so that pixel-level resolution can be achieved.
Brief description of the drawings
Various other advantages and benefits will become clear to those of ordinary skill in the art by reading the following detailed description of the preferred embodiments. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered as limiting the present invention. Throughout the drawings, the same reference numerals denote the same parts. In the drawings:
Fig. 1 is a schematic diagram of the reflection-free iris information acquisition system of Embodiment 1 of the present invention;
Fig. 2 is a schematic diagram of one implementation of the reflection-free iris information acquisition system of Embodiment 1 of the present invention;
Fig. 3 is a schematic diagram of another implementation of the reflection-free iris information acquisition system of Embodiment 1 of the present invention;
Fig. 4 is a schematic diagram of an iris 3D information acquisition/measuring system in Embodiment 2 of the present invention;
Fig. 5 is a schematic diagram of the positional requirements for the camera when moving and shooting in Embodiment 2 of the present invention;
Fig. 6 is a schematic diagram of a first implementation of the iris 3D information acquisition system using single-camera rotating acquisition in Embodiment 3 of the present invention;
Fig. 7 is a schematic diagram of a second implementation of the iris 3D information acquisition system using single-camera rotating acquisition in Embodiment 3 of the present invention;
Fig. 8 is a schematic diagram of a third implementation of the iris 3D information acquisition system using single-camera rotating acquisition in Embodiment 3 of the present invention;
Fig. 9 is a schematic diagram of a fourth implementation of the iris 3D information acquisition system using single-camera rotating acquisition in Embodiment 3 of the present invention;
Fig. 10 is a schematic diagram of a fifth implementation of the iris 3D information acquisition system using single-camera rotating acquisition in Embodiment 3 of the present invention;
Fig. 11 is a schematic diagram of a sixth implementation of the iris 3D information acquisition system using single-camera rotating acquisition in Embodiment 3 of the present invention;
Fig. 12 is a schematic diagram of a first implementation of the iris 3D information acquisition system using light deflection in Embodiment 4 of the present invention;
Fig. 13 is a schematic diagram of a second implementation of the iris 3D information acquisition system using light deflection in Embodiment 4 of the present invention;
Fig. 14 is a schematic diagram of a third implementation of the iris 3D information acquisition system using light deflection in Embodiment 4 of the present invention.
Reference numerals: 201 image acquisition device, 500 isolating device, 600 light source, 400 processor, 301 target iris, 302 non-target iris, 501 isolation board, 601 left light source, 602 right light source, 502 light orienting device, 101 track, 100 image processing apparatus, 102 mechanical moving device, 202 rotary shaft, 203 shaft driving device, 204 lifting device, 205 lifting driving device, 4 control terminal, 211 light deflection unit, 212 light deflection driving unit, 3 object.
Specific embodiment
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the disclosure are shown in the drawings, it should be understood that the disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. On the contrary, these embodiments are provided so that the disclosure will be understood more thoroughly and its scope will be fully conveyed to those skilled in the art.
Please refer to Fig. 1 to Fig. 5.
Embodiment 1 (isolating device)
The system includes an image acquisition device 201, an isolating device 500, a light source 600 and a processor 400; the target iris 301 and the non-target iris 302 belong to different eyes of the same person.
The image acquisition device 201 can be any equipment capable of image acquisition, such as a multi-camera matrix, a fixed single camera, a video camera or a rotating single camera, and is used to acquire images of the target iris 301. For two-dimensional iris acquisition, only a two-dimensional iris image needs to be acquired and sent to the processor 400 for image processing, measurement and recognition. However, two-dimensional iris measurement and recognition can no longer meet current requirements for high-precision, high-accuracy acquisition, measurement and recognition, so the present invention also proposes using a virtual camera matrix to realize three-dimensional iris acquisition. In this case the image acquisition device 201 sends the multiple acquired pictures to the processor 400 for image processing and synthesis (for the specific method see the following embodiments) to form a three-dimensional image and point cloud data.
The light source 600 is used to provide light to the human eye so as to cause pupil constriction and enlarge the area of the iris. The light source 600 can be a single light source or a distributed light source, and can be a light source with fixed illumination or a controlled intelligent light source.
When acquiring an iris, the pupil needs to be constricted in order to increase the effective area. A common way of constricting the pupil at present is to apply fill light during iris measurement, i.e., to use a certain level of illumination so that the amount of light entering the pupil increases, causing a physiological constriction of the pupil. This process necessarily requires a light source. However, the crystalline lens in front of the eyeball is a transparent object and reflects the light of the light source when the light source illuminates the eye, forming an image of the light source on the eye, which affects the acquisition of the iris. When the image of the light source 600 overlaps the iris, the iris information cannot be collected and only a bright light-source image is captured. In the present invention an isolating device 500 is arranged between the target iris 301 and the non-target iris 302; here the isolating device 500 can be an isolation board 501, which blocks the light of the light source 600 from entering the target iris 301 and prevents it from forming a reflected image on the target iris 301. The position and size of the isolation board 501 can be set according to the position of the light source 600 and the position of the target iris, as long as the light of the light source 600 is prevented from entering the target iris 301. At the same time, it must be ensured that the light of the light source 600 can enter the non-target iris, so that the human pupil constricts. For example, the isolation board 501 is located between the two irises of the person, and the light source 600 is located on the side of the non-target iris 302, so that the light of the light source 600 enters the non-target iris 302 but not the target iris 301. When the eye on the side of the non-target iris 302 perceives stronger light, the pupils of both eyes constrict, so that even though no light from the light source 600 reaches the target iris 301, the pupil at the center of the target iris 301 also constricts, thereby increasing the area of the target iris 301.
The isolating device 500 can be opaque, preventing light from entering the target iris; it can also be semi-transparent, or transmit only light of a specific wavelength. For example, the isolating device 500 may transmit green light while the light source 600 is a red light source, in which case the isolating device 500 blocks the red light from entering the target iris. In addition, the light source 600 can be a linearly polarized light source, in which case the isolating device 500 is a polarizer whose polarization direction is perpendicular to that of the light source.
The system may also have two light sources 600, a left light source 601 and a right light source 602, with the isolation board 501 in between. When the left light source 601 is on, the image acquisition device 201 acquires the image of the right iris; after that acquisition is complete, the right light source 602 is turned on and the image acquisition device 201 acquires the image of the left iris.
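As a minimal illustration of the alternating acquisition sequence described above, the following sketch uses hypothetical driver objects for the two light sources and the camera; the class names, methods and settling delay are assumptions for demonstration only and do not correspond to a specific device API in this patent.

```python
import time

class LightSource:
    """Hypothetical driver for one fill light (601 left / 602 right)."""
    def __init__(self, name):
        self.name = name
    def on(self):
        print(f"{self.name} light on")
    def off(self):
        print(f"{self.name} light off")

class Camera:
    """Hypothetical driver for the image acquisition device 201."""
    def capture(self, label):
        print(f"captured {label} iris image")
        return label  # placeholder for image data

def acquire_both_irises(settle_s=0.5):
    left_light, right_light = LightSource("left"), LightSource("right")
    cam = Camera()
    images = {}

    # Left light source 601 on -> both pupils constrict -> acquire the right (target) iris.
    left_light.on(); time.sleep(settle_s)
    images["right"] = cam.capture("right")
    left_light.off()

    # Right light source 602 on -> acquire the left (target) iris.
    right_light.on(); time.sleep(settle_s)
    images["left"] = cam.capture("left")
    right_light.off()
    return images

if __name__ == "__main__":
    acquire_both_irises()
```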
The non-target iris 302 and the target iris 301 belong respectively to different eyes of the same person.
The isolating device 500 can also be a light orienting device 502, which directs the light of the light source 600 toward the non-target iris 302 and reduces the amount entering the target iris 301, thereby preventing an image of the light source 600 from appearing on the target iris 301. The orienting device 502 can be an orienting device outside the light source, i.e., a device such as a mirror or a transflective mirror that guides the light into the non-target iris 302 after the light source 600 emits it. It can also be an orienting device inside the light source, i.e., the light emitted by the light-emitting unit in the light source is shaped into a specific direction so that it enters the non-target iris 302. It can also be a light-source rotating device, which controls the emission direction by rotating the light source so that the light only enters the non-target iris 302. For example, if the light source 600 is an array of intelligent light sources, the luminous intensity and rotation angle of each light source can be adjusted so that the non-target iris 302 is illuminated while no image of the light source appears on the target iris 301.
Embodiment 2
In order to solve the above technical problems, an embodiment of the present invention provides an iris 3D information acquisition/measuring device. As shown in Fig. 4, it specifically includes a track 101, an image acquisition device 201, an image processing apparatus 100 and a mechanical moving device 102. The image acquisition device 201 is mounted on the mechanical moving device 102, which can move along the track 101, so that the acquisition area of the image acquisition device 201 changes continuously. Over a period of time, multiple acquisition areas at different spatial positions are formed and constitute an acquisition matrix, but at any given moment there is only one acquisition area, so the acquisition matrix is "virtual". Since the image acquisition device 201 is usually a camera, it is also called a virtual camera matrix. The image acquisition device 201 can also be a video camera, a CCD, a CMOS sensor, a camera, a mobile phone with an image acquisition function, a tablet, or other electronic equipment.
The matrix points of the above virtual matrix are determined by the positions of the image acquisition device 201 when the images of the target object are acquired; two adjacent positions at least satisfy the following conditions:
H·(1 − cos b) = L·sin 2b;
a = m·b;
0 < m < 1.5;
where L is the distance from the image acquisition device 201 to the object (the iris), usually the distance from the image acquisition device 201 at the first position to the surface of the acquired region of the object, and m is a coefficient.
H is the actual size of the object in the acquired image. The image is usually the picture taken by the image acquisition device 201 at the first position, and the object in that picture has a true geometric dimension (not the size in the picture). This size is measured along the direction from the first position to the second position. For example, if the first position and the second position are related by a horizontal movement, the size is measured along the horizontal transverse direction of the object; if the left end of the object shown in the picture is A and the right end is B, the straight-line distance from A to B on the object is measured as H. The measurement can be calculated from the distance between A and B in the picture combined with the focal length of the camera lens, or A and B can be marked on the object and the straight-line distance AB measured directly with other measuring means.
a is the angle between the optical axes of the image acquisition device at two adjacent positions.
m is a coefficient.
Since the size and surface relief of objects differ, the value of a cannot be limited by a strict formula and needs to be determined empirically. According to many experiments, the value of m is preferably within 1.5, and more preferably within 0.8. Specific experimental data are shown in the following table:
Object      | m value           | Synthesis effect        | Synthesis ratio
Human iris  | 0.11, 0.29, 0.4   | Very good               | >90%
Human iris  | 0.48, 0.65        | Good                    | >85%
Human iris  | 0.71, 0.83        | Relatively good         | >80%
Human iris  | 0.92, 1.0         | Acceptable              | >70%
Human iris  | 1.15, 1.23        | Acceptable              | >60%
Human iris  | 1.3, 1.43, 1.54   | Barely synthesizable    | >50%
Human iris  | 1.69              | Difficult to synthesize | <40%
After the object and the image acquisition device 201 are determined, the value of a can be calculated according to the above empirical formula, and from the value of a the parameters of the virtual matrix, i.e., the positional relationship between the matrix points, can be determined.
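As an illustration only, the following sketch solves the adjacent-position condition numerically for b under the formula as stated above, H·(1 − cos b) = L·sin 2b, and then computes the optical-axis angle a = m·b; the function names, the bisection approach and the example values of H, L and m are assumptions for demonstration and are not taken from the patent.

```python
import math

def solve_b(H, L, tol=1e-10):
    """Solve H*(1 - cos b) = L*sin(2b) for b in (0, pi/2) by bisection.

    f(b) = H*(1 - cos b) - L*sin(2b) is negative near b = 0 and positive
    just below b = pi/2 (for positive H and L), so one sign change exists.
    """
    f = lambda b: H * (1.0 - math.cos(b)) - L * math.sin(2.0 * b)
    lo, hi = 1e-9, math.pi / 2 - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid      # root lies in [lo, mid]
        else:
            lo = mid      # root lies in [mid, hi]
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def adjacent_axis_angle(H, L, m):
    """Return a = m*b in degrees for the adjacent-position condition."""
    b = solve_b(H, L)
    return math.degrees(m * b)

if __name__ == "__main__":
    # Example values (assumed): iris region H = 12 mm, object distance L = 300 mm, m = 0.29.
    print(adjacent_axis_angle(H=12.0, L=300.0, m=0.29))
```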
In general, the virtual matrix is a one-dimensional matrix, for example multiple matrix points (acquisition positions) arranged along the horizontal direction. However, when the target object is large, a two-dimensional matrix is needed, and two adjacent positions in the vertical direction likewise satisfy the above condition on a.
In some cases, even with the above empirical formula it is not easy to determine the matrix parameter (the value of a), and the matrix parameter then needs to be adjusted experimentally. The experimental method is as follows: calculate a predicted matrix parameter a according to the above formula, and control the camera to move to the corresponding matrix points according to this parameter; for example, the camera takes picture P1 at position W1 and picture P2 after moving to position W2. Then compare whether picture P1 and picture P2 contain parts representing the same region of the object, i.e., whether P1 ∩ P2 is non-empty (for example, both contain the corner of the eye, but photographed at different angles). If not, readjust the value of a, move again to position W2', and repeat the comparison step. If P1 ∩ P2 is non-empty, continue to move the camera according to the (adjusted or unadjusted) value of a to position W3 and take picture P3, and again compare whether picture P1, picture P2 and picture P3 contain parts representing the same region of the object, i.e., whether P1 ∩ P2 ∩ P3 is non-empty (please refer to Fig. 5). The multiple pictures are then used to synthesize 3D, and the 3D synthesis effect is tested to confirm that it meets the requirements of 3D information acquisition and measurement. That is, the structure of the matrix is determined by the positions of the image acquisition device 201 when the multiple images are acquired, and any three adjacent positions satisfy that the three images acquired at those positions all contain a part representing the same region of the object.
After the virtual matrix obtains multiple images of the target object, the image processing apparatus processes these images and synthesizes 3D. The 3D point cloud or image can be synthesized from the multiple images taken at multiple camera angles by a method of image stitching based on feature points of adjacent images, or by other methods.
The image stitching method includes:
(1) Processing the multiple images and extracting the respective feature points. The features of the respective feature points in the multiple images can be described by SIFT (Scale-Invariant Feature Transform) feature descriptors. A SIFT feature descriptor has 128 feature description vectors, which can describe 128 aspects of the features of a feature point in direction and scale, significantly improving the precision of the feature description, while the descriptor is spatially independent.
(2) Based on the feature points extracted from the multiple images, generating the feature point cloud data of the facial features and the feature point cloud data of the iris features respectively. This specifically includes:
(2-1) Matching the feature points of the multiple pictures according to the features of the respective feature points of each of the extracted images, and establishing a matched facial feature point data set; matching the feature points of the multiple pictures according to the features of the respective feature points of each of the extracted images, and establishing a matched iris feature point data set;
(2-2) Calculating, according to the optical information of the camera and the different camera positions at which the multiple images were acquired, the relative position of the camera at each position with respect to the feature points in space, and calculating from these relative positions the spatial depth information of the feature points in the multiple images. Similarly, the spatial depth information of the feature points in the multiple images can be calculated. The calculation can use the bundle adjustment method.
The calculated spatial depth information of the feature points may include spatial position information and color information, i.e., the X-axis coordinate of the feature point at its spatial position, the Y-axis coordinate, the Z-axis coordinate, the value of the R channel of the color information of the feature point, the value of the G channel, the value of the B channel, the value of the Alpha channel, and so on. In this way, the generated feature point cloud data contains the spatial position information and the color information of the feature points, and the format of the feature point cloud data can be as follows:
X1 Y1 Z1 R1 G1 B1 A1
X2 Y2 Z2 R2 G2 B2 A2
……
Xn Yn Zn Rn Gn Bn An
where Xn denotes the X-axis coordinate of the feature point at its spatial position; Yn denotes the Y-axis coordinate of the feature point at its spatial position; Zn denotes the Z-axis coordinate of the feature point at its spatial position; Rn denotes the value of the R channel of the color information of the feature point; Gn denotes the value of the G channel of the color information of the feature point; Bn denotes the value of the B channel of the color information of the feature point; and An denotes the value of the Alpha channel of the color information of the feature point.
(2-3) Generating the feature point cloud data of the object features according to the matched feature point data sets of the multiple images and the spatial depth information of the feature points.
(2-4) Constructing the 3D model of the object according to the feature point cloud data, so as to realize the acquisition of the point cloud data of the object.
(2-5) Attaching the acquired color and texture of the object to the point cloud data to form the 3D image of the object.
All the images in a group of images can be used to synthesize the 3D image, or images of higher quality can be selected from them for synthesis.
The above stitching method is a non-limiting example; any method that generates a three-dimensional image from multiple two-dimensional images taken at multiple angles can be used.
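As an illustration of steps (1) to (2-2), the following sketch matches SIFT feature points between two pictures, recovers the relative camera pose, triangulates the matched points, and writes them in the "X Y Z R G B A" format described above. It uses OpenCV and NumPy; the intrinsic matrix K and the file names are assumptions for demonstration only, and a full pipeline would refine the result with bundle adjustment over all views rather than using a single two-view reconstruction.

```python
import cv2
import numpy as np

def two_view_point_cloud(path1, path2, K, out_txt="points.txt"):
    img1, img2 = cv2.imread(path1), cv2.imread(path2)
    gray1, gray2 = (cv2.cvtColor(i, cv2.COLOR_BGR2GRAY) for i in (img1, img2))

    # (1) Extract SIFT feature points (128-dimensional descriptors).
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(gray1, None)
    kp2, des2 = sift.detectAndCompute(gray2, None)

    # (2-1) Match feature points between the two pictures (Lowe ratio test).
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # (2-2) Relative camera pose from the essential matrix, then triangulation.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    X = (X[:3] / X[3]).T                       # homogeneous -> 3D points

    # Write "X Y Z R G B A" lines, taking color from the first picture.
    with open(out_txt, "w") as f:
        for (x, y, z), (u, v) in zip(X, pts1):
            b, g, r = img1[int(v), int(u)]
            f.write(f"{x:.4f} {y:.4f} {z:.4f} {r} {g} {b} 255\n")

if __name__ == "__main__":
    # Assumed pinhole intrinsics; the real camera would be calibrated instead.
    K = np.array([[1200.0, 0, 640.0], [0, 1200.0, 480.0], [0, 0, 1]])
    two_view_point_cloud("iris_w1.jpg", "iris_w2.jpg", K)
```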
Embodiment 3 (single-axis-rotation iris acquisition)
A small-range, small-depth target object 3 has a lateral dimension that is small compared with the camera's acquisition range and a small size along the camera's depth-of-field direction, i.e., the object 3 carries little information in the depth direction. In this application, although a single-camera system that moves over a large range by means of a track, a mechanical arm or similar can likewise acquire multi-angle images of the object 3 to synthesize a 3D point cloud or image, such equipment is relatively complex, which reduces reliability, and the large movement lengthens the acquisition time; moreover, because of its large volume it cannot be used on many occasions (such as in an access control system).
A small-range, small-depth target object 3 also has its own particular characteristics: it requires the acquisition/measuring device to be small, highly reliable and fast, and in particular it requires only a small acquisition range (an object 3 with large depth requires a large acquisition range; in particular the camera must be at different positions to acquire all the information). The applicant proposes this application and occasion for the first time and, given these characteristics, realizes 3D point cloud and image acquisition of the object 3 with the most succinct rotating device, making full use of the fact that such an object 3 requires only a small acquisition range.
The 3D information acquisition system includes: an image acquisition device 201, for acquiring a group of images of the object 3 through relative motion between the acquisition area of the image acquisition device 201 and the object 3; and an acquisition-area moving device, for driving the acquisition area of the image acquisition device 201 and the object 3 to move relative to each other. The acquisition-area moving device is a rotating device, so that the image acquisition device 201 rotates about a central axis.
Referring to Fig. 6 to Fig. 11, the image acquisition device 201 is a camera. The camera is fixedly mounted on a camera fixing frame on a rotating seat, a rotary shaft 202 is connected under the rotating seat, and the rotary shaft 202 is controlled to rotate by a shaft driving device 203. The shaft driving device 203 and the camera are both connected to a control terminal 4, which controls the driving of the shaft driving device 203 and the shooting of the camera. Alternatively, the rotary shaft 202 can be directly and fixedly connected to the image acquisition device 201 to drive the camera to rotate.
Different from traditional 3D acquisition, the target object 3 of this application, the iris, is a small-range 3D object. Therefore, the target does not need to be reproduced on a large scale, but its main surface features must be acquired, measured and compared with high precision, i.e., the measurement accuracy requirement is high. The camera rotation angle does not need to be large, but accurate control of the rotation angle must be ensured. The invention arranges an angle acquisition device on the driving rotary shaft 202 and/or the rotating seat; the shaft driving device 203 drives the rotary shaft 202, the camera rotates by the set degree, and the angle acquisition device measures the degree of rotation and feeds the measurement back to the control terminal 4, where it is compared with the set degree to guarantee the rotation precision. The shaft driving device 203 drives the rotary shaft 202 to turn through two or more angles; driven by the rotating seat, the camera rotates circumferentially around the central axis and completes shooting at the different angles. The images taken at the different angles are sent to the control terminal 4, which processes the data and generates the final three-dimensional image. They can also be sent to a processing unit that realizes the 3D synthesis (for the specific synthesis method see the image stitching method below); the processing unit can be an independent device, another device with a processing function, or remote equipment. The camera can also be connected to an image pre-processing unit to pre-process the images. The object 3 is an iris, and it must be ensured that the object 3 stays in the acquisition area during the camera rotation.
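A minimal control-loop sketch of the angle feedback described above is given below; the `ShaftDriver` and `AngleEncoder` classes, the tolerance and the retry policy are assumptions for illustration and do not correspond to a specific driver API in the patent.

```python
class ShaftDriver:
    """Hypothetical interface to the shaft driving device 203."""
    def rotate_to(self, degrees):
        print(f"commanding shaft to {degrees:.2f} deg")

class AngleEncoder:
    """Hypothetical interface to the angle acquisition device."""
    def __init__(self):
        self._angle = 0.0
    def read(self):
        return self._angle            # in a real system: encoder readout
    def _simulate(self, degrees):     # test hook only, not real hardware
        self._angle = degrees

def rotate_and_shoot(angles_deg, tolerance_deg=0.05, max_retries=3):
    """Step through the set angles, verify each against the encoder, then shoot."""
    driver, encoder = ShaftDriver(), AngleEncoder()
    images = []
    for target in angles_deg:
        for _ in range(max_retries):
            driver.rotate_to(target)
            encoder._simulate(target)           # stand-in for real motion
            if abs(encoder.read() - target) <= tolerance_deg:
                break                           # rotation precision ensured
        else:
            raise RuntimeError(f"angle {target} deg not reached within tolerance")
        images.append(f"image_at_{target:.1f}deg")   # placeholder for the camera shot
    return images

if __name__ == "__main__":
    # Example: three adjacent positions separated by the optical-axis angle a.
    print(rotate_and_shoot([0.0, 10.0, 20.0]))
```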
The control terminal 4 may be a processor, a computer, a remote control center, etc.
The image acquisition device 201 can be replaced by a video camera, a CCD, an infrared camera or other image acquisition equipment. Meanwhile, the image acquisition device 201 can be integrally mounted on a support, such as a tripod or a fixed platform.
The shaft driving device 203 may be a brushless motor, a high-precision stepper motor, an angle encoder, a rotating electric machine, etc.
Referring to Fig. 7, the rotary shaft 202 is located below the image acquisition device 201 and is directly connected to it; in this case the central axis intersects the image acquisition device 201. In Fig. 8 the central axis is located on the lens side of the camera of the image acquisition device 201; the camera rotates about the central axis and shoots, and a rotating connecting arm is arranged between the rotary shaft 202 and the rotating seat. In Fig. 9 the central axis is located on the side opposite the lens of the camera of the image acquisition device 201; the camera rotates about the central axis and shoots, a rotating connecting arm is arranged between the rotary shaft 202 and the rotating seat, and the connecting arm can be given an upward or downward bent structure as needed. In Fig. 10 the central axis is located on the side opposite the lens of the camera of the image acquisition device 201 and is arranged horizontally; this structure allows the camera to change its angle in the vertical direction and is suitable for shooting an object 3 with particular features in the vertical direction, wherein the shaft driving device 203 drives the rotary shaft 202 to rotate and drives the swinging connecting arm to move up and down. In Fig. 11 the shaft driving device 203 further includes a lifting device 204 and a lifting driving device 205 for controlling the movement of the lifting device 204; the lifting driving device 205 is connected to the control terminal 4 and increases the range of the shooting area of the 3D information acquisition system.
The 3D information acquisition system occupies little space, and its shooting efficiency is significantly higher than that of systems that need to move the camera over a large range; it is especially suitable for application scenarios requiring high-precision 3D information acquisition of small-range, small-depth targets.
Embodiment 4 (light-deflection iris acquisition)
Referring to Fig. 12 to Fig. 14, the iris 3D information acquisition system includes: an image acquisition device 201, for acquiring a group of images of the object 3 through relative motion between the acquisition area of the image acquisition device 201 and the object 3; and an acquisition-area moving device, for driving the acquisition area of the image acquisition device 201 and the object 3 to move relative to each other. The acquisition-area moving device is an optical scanning device, so that the acquisition area of the image acquisition device 201 and the object 3 move relative to each other without the image acquisition device 201 being moved or rotated.
Referring to Fig. 12, the acquisition-area moving device further includes a light deflection unit 211, which is optionally driven by a light deflection driving unit 212. The image acquisition device 201 is a camera, which is fixedly mounted; its physical position does not change and it neither moves nor rotates. The light deflection unit 211 causes the acquisition area of the camera to change to a certain extent, so that the relative position of the object 3 and the acquisition area changes. In this process, the light deflection unit 211 can be driven by the light deflection driving unit 212 so that light from different directions enters the image acquisition device 201. The light deflection driving unit 212 can be a driving device that controls the linear movement or the rotation of the light deflection unit 211. The light deflection driving unit 212 and the camera are both connected to the control terminal 4, which controls the driving of the light deflection driving unit 212 and the shooting of the camera.
It can also be appreciated that, different from traditional 3D acquisition technology, the target object 3 of this application is a small-range 3D object. The target therefore does not need to be reproduced on a large scale, but its main surface features must be acquired, measured and compared with high precision, i.e., the measurement accuracy requirement is high. The displacement or rotation of the light deflection unit 211 of the present invention therefore does not need to be large, but the accuracy requirement must be guaranteed and the object 3 must remain within the shooting range. The invention arranges an angle acquisition device and/or a displacement acquisition device on the light deflection unit 211; when the light deflection driving unit 212 drives the light deflection unit 211 to move, the angle acquisition device and/or displacement acquisition device measures the degree of rotation and/or the amount of linear displacement and feeds the measurement back to the control terminal 4, where it is compared with preset parameters to guarantee the accuracy. When the light deflection driving unit 212 drives the light deflection unit 211 to rotate and/or be displaced, the camera completes two or more shots corresponding to the different position states of the light deflection unit 211; the images of the two or more shots are sent to the control terminal 4, which processes the data and generates the final three-dimensional image. The camera can also be connected to an image pre-processing unit to pre-process the images.
The control terminal 4 may be a processor, a computer, a remote control center, etc.
The image acquisition device 201 can be replaced by a video camera, a CCD, an infrared camera or other image acquisition equipment. Meanwhile, the image acquisition device 201 is fixed on a mounting platform and its position does not change.
The light deflection driving unit 212 may be a brushless motor, a high-precision stepper motor, an angle encoder, a rotating electric machine, etc.
Referring to Fig. 12, the light deflection unit 211 is a mirror; it can be understood that one or more mirrors can be provided according to the measurement needs, one or more light deflection driving units 212 can be provided correspondingly, and the angle of the plane mirror is controlled to change so that light from different directions enters the image acquisition device 201. In Fig. 13 the light deflection unit 211 is a lens group; one or more lenses can be provided in the lens group, one or more light deflection driving units 212 can be provided correspondingly, and the lens angle is controlled to change so that light from different directions enters the image acquisition device 201. In Fig. 14 the light deflection unit 211 includes a multi-face rotating mirror.
In addition, the light deflection unit 211 can be a DMD, i.e., the deflection direction of the DMD mirrors can be controlled by an electrical signal so that light from different directions enters the image acquisition device 201. Since a DMD is very small, the size of the whole equipment can be significantly reduced, and since a DMD can rotate at high speed, the measurement and acquisition speed can be greatly increased. This is also one of the inventive points of the present invention.
Although the above two embodiments are written separately, it can be appreciated that realizing camera rotation and light deflection at the same time is also possible.
A 3D information measuring device includes the 3D information acquisition system. The 3D information acquisition system obtains the 3D information and sends it to the control terminal 4, and the control terminal 4 calculates and analyzes the acquired information to obtain the spatial coordinates of all feature points on the object 3. It includes a 3D information image stitching module, a 3D information pre-processing module, a 3D information algorithm selection module, a 3D information calculation module and a spatial-coordinate-point 3D information reconstruction module. These modules are used to calculate and process the data obtained by the 3D information acquisition system and to generate the measurement result, where the measurement result can be a 3D point cloud image. The measurement includes geometric parameters such as length, profile, area and volume.
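As a minimal illustration of deriving geometric parameters from the point cloud measurement result, the following sketch reads the "X Y Z R G B A" point file produced earlier and reports axis-aligned extents plus an approximate surface area and volume via a convex hull; the use of SciPy's ConvexHull and the file name are assumptions for demonstration, and a real measuring module would use the reconstruction modules named above.

```python
import numpy as np
from scipy.spatial import ConvexHull

def measure_point_cloud(path="points.txt"):
    # Columns: X Y Z R G B A  (only XYZ are needed for geometric measurement).
    pts = np.loadtxt(path)[:, :3]

    extents = pts.max(axis=0) - pts.min(axis=0)   # length along X, Y, Z
    hull = ConvexHull(pts)                        # convex approximation of the shape
    return {
        "length_x": extents[0],
        "length_y": extents[1],
        "length_z": extents[2],
        "hull_surface_area": hull.area,
        "hull_volume": hull.volume,
    }

if __name__ == "__main__":
    for name, value in measure_point_cloud().items():
        print(f"{name}: {value:.4f}")
```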
A 3D information comparison device includes the 3D information acquisition system. The 3D information acquisition system obtains the 3D information and sends it to the control terminal 4, and the control terminal 4 calculates and analyzes the acquired information to obtain the spatial coordinates of all feature points on the object 3, compares them with preset values, and judges the state of the measured target. In addition to the modules in the aforementioned 3D information measuring device, the 3D information comparison device further includes a preset-3D-information extraction module, an information comparison module, a comparison-result output module and a prompt module. The comparison device can compare the measurement result of the measured target object 3 with the preset values so that the result can be inspected and reworked. When the comparison result finds that the deviation between the measured target object 3 and the preset values is significantly greater than a threshold, a warning prompt is issued.
A device for generating a mating object for the object 3 can generate, from the 3D information of at least one region of the object 3 acquired by the 3D information acquisition system, a mating object matched with the corresponding region of the object 3. Specifically, when the present invention is applied to the production of sports apparatus or medical auxiliary apparatus, the human body structure shows individual differences, so a single unified mating object cannot satisfy everyone's needs. The 3D information acquisition system of the present invention obtains an image of a person's elbow and inputs its three-dimensional structure into the mating object generating device, which produces an elbow support that facilitates the recovery and rehabilitation of that person's elbow. The mating object generating device can be an industrial molding machine, a 3D printer or other production equipment known to those skilled in the art; configured with the 3D information acquisition system of this application, it realizes rapid customized production.
Although the present invention gives the above various applications (measurement, comparison, generation), it should be understood that the present invention can also be used independently as a 3D information acquisition device.
A 3D information acquisition method, comprising:
S1. While the acquisition area of the image acquisition device 201 and the object 3 move relative to each other, the image acquisition device 201 acquires a group of images of the object 3;
S2. The acquisition-area moving device drives the acquisition area of the image acquisition device 201 and the object 3 to move relative to each other by one of the following two schemes:
S21. the acquisition-area moving device is a rotating device, so that the image acquisition device 201 rotates about a central axis;
S22. the acquisition-area moving device is an optical scanning device, so that the acquisition area of the image acquisition device 201 and the object 3 move relative to each other without the image acquisition device 201 being moved or rotated.
The 3D point cloud or image can be synthesized from the multiple images taken at multiple camera angles by a method of image stitching based on feature points of adjacent images, or by other methods.
The image stitching method includes:
(1) Processing the multiple images and extracting the respective feature points. The features of the respective feature points in the multiple images can be described by SIFT (Scale-Invariant Feature Transform) feature descriptors. A SIFT feature descriptor has 128 feature description vectors, which can describe 128 aspects of the features of a feature point in direction and scale, significantly improving the precision of the feature description, while the descriptor is spatially independent.
(2) Based on the feature points extracted from the multiple images, generating the feature point cloud data of the facial features and the feature point cloud data of the iris features respectively. This specifically includes:
(2-1) Matching the feature points of the multiple images according to the features of the respective feature points of each of the extracted images, and establishing a matched facial feature point data set; matching the feature points of the multiple images according to the features of the respective feature points of each of the extracted images, and establishing a matched iris feature point data set;
(2-2) Calculating, according to the optical information of the camera and the different camera positions at which the multiple images were acquired, the relative position of the camera at each position with respect to the feature points in space, and calculating from these relative positions the spatial depth information of the feature points in the multiple images. Similarly, the spatial depth information of the feature points in the multiple images can be calculated. The calculation can use the bundle adjustment method.
The calculated spatial depth information of the feature points may include spatial position information and color information, i.e., the X-axis coordinate of the feature point at its spatial position, the Y-axis coordinate, the Z-axis coordinate, the value of the R channel of the color information of the feature point, the value of the G channel, the value of the B channel, the value of the Alpha channel, and so on. In this way, the generated feature point cloud data contains the spatial position information and the color information of the feature points, and the format of the feature point cloud data can be as follows:
X1 Y1 Z1 R1 G1 B1 A1
X2 Y2 Z2 R2 G2 B2 A2
……
Xn Yn Zn Rn Gn Bn An
where Xn denotes the X-axis coordinate of the feature point at its spatial position; Yn denotes the Y-axis coordinate of the feature point at its spatial position; Zn denotes the Z-axis coordinate of the feature point at its spatial position; Rn denotes the value of the R channel of the color information of the feature point; Gn denotes the value of the G channel of the color information of the feature point; Bn denotes the value of the B channel of the color information of the feature point; and An denotes the value of the Alpha channel of the color information of the feature point.
(2-3) Generating the feature point cloud data of the features of the object 3 according to the matched feature point data sets of the multiple images and the spatial depth information of the feature points.
(2-4) Constructing the 3D model of the object according to the feature point cloud data, so as to realize the acquisition of the point cloud data of the object 3.
(2-5) Attaching the acquired color and texture of the object 3 to the point cloud data to form the 3D image of the object.
All the images in a group of images can be used to synthesize the 3D image, or images of higher quality can be selected from them for synthesis.
Embodiment 5
When forming the matrix, it is also necessary to ensure that the proportion of the object in the picture taken by the camera at each matrix point is appropriate and that the shot is sharp. Therefore, in the process of forming the matrix, the camera needs to zoom and focus at the matrix points.
(1) Zooming
After the camera photographs the target object, the proportion of the object in the camera's field of view is estimated and compared with a predetermined value. If it is too large or too small, zooming is required. The zooming method can be: an additional displacement device moves the image acquisition device 201 in its radial direction, bringing the image acquisition device 201 closer to or farther from the target object, so that the proportion of the object in the picture remains basically unchanged at each matrix point.
A distance measuring device is also included, which can measure the real-time distance (object distance) from the image acquisition device 201 to the object. The relationship among the object distance, the proportion of the object in the picture and the focal length can be tabulated in advance, and the object distance can be determined by table lookup from the focal length and the proportion of the object in the picture, so that the matrix points are determined.
In some cases, when the region of the object at different matrix points, or the object relative to the camera, changes, the proportion of the object in the picture can also be kept constant by adjusting the focal length.
(2) Auto-focusing
In the process of forming the virtual matrix, the distance measuring device measures in real time the distance (object distance) h(x) from the camera to the object and sends the measurement result to the image processing apparatus 100. The image processing apparatus 100 looks up the object-distance-to-focal-length table, finds the corresponding focal length value, and sends a focusing signal to the camera 201, controlling the camera's ultrasonic motor to drive the lens for rapid focusing. In this way, rapid focusing can be achieved without adjusting the position of the image acquisition device 201 and without significantly adjusting its lens focal length, ensuring that the image acquisition device 201 produces sharp images. This is also one of the inventive points of the present invention. Of course, in addition to the distance-measuring method, focusing can also be performed by means of image contrast comparison.
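The object-distance-to-focal-length lookup described above can be sketched as follows; the table values, the linear interpolation step and the `send_focus_signal` function are illustrative assumptions only and are not taken from the patent.

```python
import bisect

# Assumed calibration table: object distance (mm) -> focus setting for a sharp image.
DISTANCE_MM = [200, 250, 300, 350, 400, 500]
FOCUS_VALUE = [612, 585, 560, 541, 527, 505]

def send_focus_signal(value):
    """Stand-in for commanding the camera's ultrasonic focus motor."""
    print(f"focus signal -> {value:.1f}")

def focus_for_distance(h_mm):
    """Look up (and linearly interpolate) the focus value for a measured object distance."""
    h_mm = min(max(h_mm, DISTANCE_MM[0]), DISTANCE_MM[-1])   # clamp to the table range
    i = bisect.bisect_left(DISTANCE_MM, h_mm)
    if DISTANCE_MM[i] == h_mm:
        return FOCUS_VALUE[i]
    d0, d1 = DISTANCE_MM[i - 1], DISTANCE_MM[i]
    f0, f1 = FOCUS_VALUE[i - 1], FOCUS_VALUE[i]
    return f0 + (f1 - f0) * (h_mm - d0) / (d1 - d0)

def autofocus(measured_distance_mm):
    send_focus_signal(focus_for_distance(measured_distance_mm))

if __name__ == "__main__":
    autofocus(312.0)   # e.g. the rangefinder reports 312 mm at the current matrix point
```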
In the present invention the object can be a single physical object or a combination of multiple objects.
The 3D information of the object includes the 3D image, the 3D point cloud, the 3D mesh, local 3D features, 3D dimensions and all parameters having the 3D features of the target object.
In the present invention, 3D or three-dimensional means having information in the three directions XYZ, and in particular having depth information, which is essentially different from having only two-dimensional plane information. It is also essentially different from definitions that are called 3D, panoramic, holographic or stereoscopic but actually only contain two-dimensional information and in particular do not contain depth information.
The acquisition area described in the present invention refers to the range that the image acquisition device (e.g., a camera) can shoot.
The image acquisition device in the present invention can be a CCD, a CMOS sensor, a camera, a video camera, an industrial camera, a monitor, a webcam, a mobile phone, a tablet, a notebook, a mobile terminal, a wearable device, smart glasses, a smart watch, a smart bracelet or any equipment with an image acquisition function.
For example, in a specific embodiment the reflection-free iris information acquisition system uses the commercially available industrial camera WP-UC2000, with specific parameters as shown in the following table:
The processor or control terminal uses an off-the-shelf computer, such as a Dell Precision 3530, with specific parameters as follows:
The mechanical moving device uses a customized moving guide rail system TM-01, with specific parameters as follows:
Pan-tilt head: three-axis pan-tilt head with reserved camera mechanical interface and computer control interface;
Guide rail: arc-shaped guide rail, mechanically connected with the pan-tilt head;
Servo motor: model 130-06025, rated torque 6 Nm, encoder type: 2500-line incremental, cable length 300 cm, rated power 1500 W, rated voltage 220 V, rated current 6 A, rated speed 2500 rpm;
Control mode: controlled by a host computer or by other means.
The 3D information of multiple regions of the object obtained by the above embodiments can be used for comparison, for example for identity recognition. The 3D information of a person's face and iris is first obtained with the solution of the present invention and stored on a server as standard data. When authentication is needed, for example for payment or door opening, the 3D acquisition device can be used to acquire the 3D information of the face and iris again and compare it with the standard data; if the comparison succeeds, the next action is allowed to proceed. It can be appreciated that this comparison can also be used for the identification of fixed objects such as antiques and works of art: the 3D information of multiple regions of the antique or work of art is first obtained as standard data, and when identification is needed the 3D information of those regions is obtained again and compared with the standard data to distinguish the genuine from the fake.
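A minimal sketch of the comparison step is given below, under the assumption that the stored standard data and the newly acquired data are point clouds already expressed in the same coordinate frame (a real system would first register and align them, for example with ICP); the file names and the nearest-neighbour distance threshold are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def compare_point_clouds(standard_pts, acquired_pts, threshold=0.5):
    """Return (mean nearest-neighbour distance, pass/fail) between two Nx3 point clouds.

    Assumes the clouds are already in the same coordinate frame; registration
    (e.g. ICP) would precede this step in a full pipeline.
    """
    tree = cKDTree(standard_pts)
    dists, _ = tree.query(acquired_pts)   # distance of each acquired point to the standard
    mean_dist = float(dists.mean())
    return mean_dist, mean_dist <= threshold

if __name__ == "__main__":
    standard = np.loadtxt("standard_iris.txt")[:, :3]   # stored on the server
    acquired = np.loadtxt("points.txt")[:, :3]          # newly acquired
    score, ok = compare_point_clouds(standard, acquired)
    print("mean deviation:", score, "-> match" if ok else "-> reject")
```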
The 3D information of multiple regions of the object obtained by the above embodiments can be used to design, produce and manufacture mating objects for the object. For example, after obtaining the 3D data of a person's head, a better-fitting hat can be designed and manufactured for that person; after obtaining the head data and the 3D data of the eyes, suitable glasses can be designed and manufactured for that person.
The 3D information of the object obtained by the above embodiments can be used to measure the geometric dimensions and the appearance profile of the object.
In the instructions provided here, numerous specific details are set forth.It is to be appreciated, however, that implementation of the invention Example can be practiced without these specific details.In some instances, well known method, structure is not been shown in detail And technology, so as not to obscure the understanding of this specification.
Similarly, it should be understood that in order to simplify the disclosure and help to understand one or more of the various inventive aspects, In Above in the description of exemplary embodiment of the present invention, each feature of the invention is grouped together into single implementation sometimes In example, figure or descriptions thereof.However, the disclosed method should not be interpreted as reflecting the following intention: i.e. required to protect Shield the present invention claims features more more than feature expressly recited in each claim.More precisely, as following Claims reflect as, inventive aspect is all features less than single embodiment disclosed above.Therefore, Thus the claims for following specific embodiment are expressly incorporated in the specific embodiment, wherein each claim itself All as a separate embodiment of the present invention.
Those skilled in the art will understand that can be carried out adaptively to the module in the equipment in embodiment Change and they are arranged in one or more devices different from this embodiment.It can be the module or list in embodiment Member or component are combined into a module or unit or component, and furthermore they can be divided into multiple submodule or subelement or Sub-component.Other than such feature and/or at least some of process or unit exclude each other, it can use any Combination is to all features disclosed in this specification (including adjoint claim, abstract and attached drawing) and so disclosed All process or units of what method or apparatus are combined.Unless expressly stated otherwise, this specification is (including adjoint power Benefit require, abstract and attached drawing) disclosed in each feature can carry out generation with an alternative feature that provides the same, equivalent, or similar purpose It replaces.
In addition, those skilled in the art will appreciate that, although some embodiments described herein include certain features included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that, in practice, a microprocessor or a digital signal processor (DSP) may be used to realize some or all of the functions of some or all of the components of the visible-light-camera-based biometric four-dimensional data acquisition device according to embodiments of the invention. The invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-described embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second and third does not indicate any ordering; these words may be interpreted as names.
Thus far, those skilled in the art will appreciate that, although a number of exemplary embodiments of the invention have been shown and described in detail herein, many other variations or modifications consistent with the principles of the invention can still be determined or derived directly from the disclosure without departing from its spirit and scope. Accordingly, the scope of the invention should be understood and deemed to cover all such other variations or modifications.

Claims (10)

1. A no-reflection iris information measuring system, characterized by comprising:
a light source for providing illumination to a non-target iris;
an isolating device for preventing light of the light source from entering a target iris;
an acquisition device for acquiring target iris information;
wherein the non-target iris and the target iris belong to different eyes of the same person;
a measuring device for measuring the size of the target iris according to the iris information;
wherein the isolating device comprises a baffle, or the isolating device comprises a beam director; and
the acquisition device is an image acquisition device which provides acquisition areas and acquires images of the target iris, the image acquisition device obtaining images of the target iris in different directions from a plurality of acquisition areas.
2. The no-reflection iris information measuring system according to claim 1, characterized in that the baffle at least does not transmit light having a specific characteristic.
3. The no-reflection iris information measuring system according to claim 1, characterized in that, when a plurality of images are acquired, the positions of the image acquisition device are such that at least two adjacent positions satisfy the following conditions:
H*(1-cosb) = L*sin2b;
a = m*b;
0 < m < 0.8;
where L is the distance from the image acquisition device to the object, H is the actual size of the object in the acquired image, a is the included angle between the optical axes of the image acquisition device at the two adjacent positions, and m is a coefficient.
4. The no-reflection iris information measuring system according to claim 1, characterized in that, when a plurality of images are acquired, three adjacent positions of the image acquisition device are such that the three images acquired at the corresponding positions each contain at least a part representing the same region of the object.
5. The no-reflection iris information measuring system according to claim 1, characterized in that the target iris information is image information of the target iris.
6. A no-reflection iris information acquisition system, characterized by comprising:
a light source for providing illumination to a non-target iris;
an isolating device for preventing light of the light source from entering a target iris;
an acquisition device for acquiring target iris information;
wherein the non-target iris and the target iris belong to different eyes of the same person;
wherein the isolating device comprises a baffle, or the isolating device comprises a beam director; and
the acquisition device is an image acquisition device which provides acquisition areas and acquires images of the target iris, the image acquisition device obtaining images of the target iris in different directions from a plurality of acquisition areas.
7. The no-reflection iris information acquisition system according to claim 6, characterized in that the baffle at least does not transmit light having a specific characteristic.
8. The no-reflection iris information acquisition system according to claim 6, characterized in that, when a plurality of images are acquired, the positions of the image acquisition device are such that at least two adjacent positions satisfy the following conditions:
H*(1-cosb) = L*sin2b;
a = m*b;
0 < m < 0.8;
where L is the distance from the image acquisition device to the object, H is the actual size of the object in the acquired image, a is the included angle between the optical axes of the image acquisition device at the two adjacent positions, and m is a coefficient.
9. The no-reflection iris information acquisition system according to claim 6, characterized in that, when a plurality of images are acquired, three adjacent positions of the image acquisition device are such that the three images acquired at the corresponding positions each contain at least a part representing the same region of the object.
10. The no-reflection iris information acquisition system according to claim 6, characterized in that the target iris information is image information of the target iris.
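For orientation only, the following Python sketch evaluates the camera-position condition recited in claims 3 and 8. It assumes that "sin2b" denotes sin(2b), that all angles are in radians, and that H and L are given in the same length unit; the solver, the function names and the example numbers are illustrative assumptions, not part of the claims.

```python
import math

def solve_b(H: float, L: float) -> float:
    """Solve H*(1 - cos b) = L*sin(2b) for b in (0, pi/2) by bisection.

    H: actual size of the object in the acquired image;
    L: distance from the image acquisition device to the object.
    """
    f = lambda b: H * (1.0 - math.cos(b)) - L * math.sin(2.0 * b)
    lo, hi = 1e-6, math.pi / 2.0   # f(lo) < 0 and f(hi) = H > 0 for positive H, L
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def positions_ok(a: float, H: float, L: float) -> bool:
    """Check the claimed condition a = m*b with 0 < m < 0.8 for two adjacent positions."""
    b = solve_b(H, L)
    m = a / b
    return 0.0 < m < 0.8

# Example with illustrative numbers: optical-axis included angle a = 0.05 rad,
# object size H = 0.03 (same unit as L), acquisition distance L = 0.4.
print(positions_ok(a=0.05, H=0.03, L=0.4))
```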
CN201811212959.6A 2018-10-18 2018-10-18 A kind of iris information measuring system of no-reflection Active CN109394170B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911145926.9A CN111126145A (en) 2018-10-18 2018-10-18 Iris 3D information acquisition system capable of avoiding light source image influence
CN201811212959.6A CN109394170B (en) 2018-10-18 2018-10-18 A kind of iris information measuring system of no-reflection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811212959.6A CN109394170B (en) 2018-10-18 2018-10-18 A kind of iris information measuring system of no-reflection

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201911145926.9A Division CN111126145A (en) 2018-10-18 2018-10-18 Iris 3D information acquisition system capable of avoiding light source image influence

Publications (2)

Publication Number Publication Date
CN109394170A CN109394170A (en) 2019-03-01
CN109394170B true CN109394170B (en) 2019-11-26

Family

ID=65468367

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201811212959.6A Active CN109394170B (en) 2018-10-18 2018-10-18 A kind of iris information measuring system of no-reflection
CN201911145926.9A Pending CN111126145A (en) 2018-10-18 2018-10-18 Iris 3D information acquisition system capable of avoiding light source image influence

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201911145926.9A Pending CN111126145A (en) 2018-10-18 2018-10-18 Iris 3D information acquisition system capable of avoiding light source image influence

Country Status (1)

Country Link
CN (2) CN109394170B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112380912B (en) * 2020-10-21 2023-05-02 天津中科智能识别产业技术研究院有限公司 Iris recognition device in advancing
CN112200113B (en) * 2020-10-21 2022-08-16 天津中科智能识别产业技术研究院有限公司 Self-adaptive light source device

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU727389B2 (en) * 1996-08-25 2000-12-14 Sarnoff Corporation, The Apparatus for the iris acquiring images
NL1021847C2 (en) * 2002-11-05 2004-05-11 Joh Enschede B V Iris detection.
JP2005182521A (en) * 2003-12-19 2005-07-07 Matsushita Electric Ind Co Ltd Iris imaging camera and iris authentication system
CN101366047B (en) * 2006-01-07 2011-10-19 依瑞技术株式会社 Iris identification system and method using mobile device with stereo camera
US8014571B2 (en) * 2006-05-15 2011-09-06 Identix Incorporated Multimodal ocular biometric system
CN103020612A (en) * 2013-01-05 2013-04-03 南京航空航天大学 Device and method for acquiring iris images
EP3074924A4 (en) * 2013-10-08 2017-11-22 Princeton Identity, Inc. Iris biometric recognition module and access control assembly
CN104573676B (en) * 2015-02-02 2018-01-05 深圳眼神智能科技有限公司 The iris identification device of the reflective interference of safety goggles
US10049272B2 (en) * 2015-09-24 2018-08-14 Microsoft Technology Licensing, Llc User authentication using multiple capture techniques
CN106960173A (en) * 2016-01-11 2017-07-18 苏州思源科安信息技术有限公司 The system that safety goggles are disturbed in a kind of iris identification device
KR20180014627A (en) * 2016-08-01 2018-02-09 삼성전자주식회사 A method for controlling an opeartion of an iris sensor and an electronic device therefor
IL251596A0 (en) * 2017-04-05 2017-06-29 Robin Koch A system and method for capturing an iris of a subject
CN207037685U (en) * 2017-07-11 2018-02-23 北京中科虹霸科技有限公司 One kind illuminates adjustable iris collection device
CN108446596A (en) * 2018-02-14 2018-08-24 天目爱视(北京)科技有限公司 Iris 3D 4 D datas acquisition system based on Visible Light Camera matrix and method
CN108401118B (en) * 2018-03-05 2020-10-23 北京中科虹霸科技有限公司 Iris acquisition device and method for controlling lamp-on time of infrared lamp

Also Published As

Publication number Publication date
CN109394170A (en) 2019-03-01
CN111126145A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
CN109394168B (en) A kind of iris information measuring system based on light control
CN109443199B (en) 3D information measuring system based on intelligent light source
CN109218702B (en) Camera rotation type 3D measurement and information acquisition device
CN109269405B (en) A kind of quick 3D measurement and comparison method
CN109141240B (en) A kind of measurement of adaptive 3 D and information acquisition device
CN109285109B (en) A kind of multizone 3D measurement and information acquisition device
CN109146961A (en) A kind of 3D measurement and acquisition device based on virtual matrix
CN111060023A (en) High-precision 3D information acquisition equipment and method
CN208653401U (en) Adapting to image acquires equipment, 3D information comparison device, mating object generating means
CN209279885U (en) Image capture device, 3D information comparison and mating object generating means
CN208795174U (en) Camera rotation type image capture device, comparison device, mating object generating means
CN109146949B (en) A kind of 3D measurement and information acquisition device based on video data
CN111780682A (en) 3D image acquisition control method based on servo system
CN109394170B (en) A kind of iris information measuring system of no-reflection
CN211178345U (en) Three-dimensional acquisition equipment
CN109084679B (en) A kind of 3D measurement and acquisition device based on spatial light modulator
CN208653473U (en) Image capture device, 3D information comparison device, mating object generating means
CN209103318U (en) A kind of iris shape measurement system based on illumination
CN208795167U (en) Illumination system for 3D information acquisition system
CN209203221U (en) A kind of iris dimensions measuring system and information acquisition system based on light control
US20210012531A1 (en) Imaging system and method for producing images using means for adjusting optical focus
CN215300796U (en) Binocular stereo vision processing device and system
CN111207690B (en) Adjustable iris 3D information acquisition measuring equipment
CN213072921U (en) Multi-region image acquisition equipment, 3D information comparison and matching object generation device
CN209279884U (en) Image capture device, 3D information comparison device and mating object generating means

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant