CN209203221U - Iris dimension measuring system and information acquisition system based on light control - Google Patents

Iris dimension measuring system and information acquisition system based on light control

Info

Publication number
CN209203221U
CN209203221U · Application CN201821687786.9U
Authority
CN
China
Prior art keywords
iris
light
image
light source
information acquisition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201821687786.9U
Other languages
Chinese (zh)
Inventor
左忠斌
左达宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianmu Love Vision (Beijing) Technology Co Ltd
Tianmu Aishi (Beijing) Technology Co Ltd
Original Assignee
Tianmu Love Vision (Beijing) Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianmu Love Vision (Beijing) Technology Co Ltd
Priority to CN201821687786.9U
Application granted
Publication of CN209203221U
Legal status: Active
Anticipated expiration

Abstract

The utility model provides an iris dimension measuring system and an information acquisition system based on light control. The measuring system includes a light source, for illuminating a non-target iris; an isolating device, for attenuating the light of the light source entering the target iris; and an image acquisition device, for acquiring image information of the target iris. The non-target iris and the target iris belong to different eyes of the same person. During iris acquisition, when one of a person's eyes first receives illumination, the other eye also exhibits a reaction similar to that of the illuminated eye; the utility model is the first to recognize that this phenomenon can be used for supplementary lighting during iris acquisition.

Description

Iris dimension measuring system and information acquisition system based on light control
Technical field
The utility model relates to the field of measurement technology, and in particular to the measurement of iris size and dimensions.
Background technique
During iris acquisition and measurement, the pupil needs to be constricted in order to increase the effective iris area. The common way to constrict the pupil at present is to apply supplementary lighting during iris measurement, i.e. to use a certain amount of illumination so that more light enters the pupil, causing physiological pupil constriction. This process necessarily requires a light source.
However, the crystalline lens in front of the eyeball is a transparent object. When a light source illuminates the eye, the lens reflects the source light, forming an image of the light source on the eye, and this image interferes with iris acquisition. Where the image of the light source overlaps the iris, the iris information cannot be collected; only a bright image of the light source is captured.
To prevent this problem, the prior art usually adjusts the positions of the light source, the collected iris, and the camera so that the image of the light source falls in a non-acquired region, for example on the pupil or at the edge of the iris. Even so, the image of the light source still affects the acquisition to some extent, and a complicated adjustment process is needed to place the image of the light source at the designated position.
Therefore, there is an urgent need for a device that provides supplementary lighting for an iris acquisition device so that the acquired iris is not affected by reflections of the light source in the eye, and that is simple and stable.
Utility model content
In view of the above problems, the utility model is proposed in order to provide a light-control-based iris information measurement and acquisition system that overcomes the above problems or at least partly solves them.
The utility model provides an iris dimension measuring system based on light control, including:
a light source, for illuminating a non-target iris;
an isolating device, for attenuating the light of the light source entering the target iris;
an image acquisition device, for acquiring image information of the target iris;
a dimension measuring device, for measuring iris dimensions;
wherein the non-target iris and the target iris belong to different eyes of the same person.
The utility model also provides an iris information acquisition system based on light control, including:
a light source, for illuminating a non-target iris;
an isolating device, for attenuating the light of the light source entering the target iris;
an image acquisition device, for acquiring image information of the target iris;
wherein the non-target iris and the target iris belong to different eyes of the same person.
Optionally, the isolating device includes a baffle.
Optionally, the baffle at least does not transmit light of a specific characteristic.
Optionally, the isolating device includes a light orienting device.
Optionally, the orienting device is an orienting device inside the light source, an orienting device outside the light source, or a light source rotating device.
Optionally, the image acquisition device is a single camera or a video camera.
Optionally, the image acquisition device rotates along a central axis.
Optionally, the image acquisition device obtains images of the target object from different directions from multiple acquisition regions.
Optionally, when acquiring multiple images, the positions of the image acquisition device are such that at least two adjacent positions satisfy the following conditions:
H*(1-cos b) = L*sin(2b);
a = m*b;
0 < m < 0.8;
where L is the distance from the image acquisition device to the target object, H is the actual size of the target object in the acquired image, a is the included angle between the optical axes of the image acquisition device at two adjacent positions, and m is a coefficient.
Optionally, when acquiring multiple images, any three adjacent positions of the image acquisition device satisfy the condition that the three images acquired at the corresponding positions each contain a part representing the same region of the target object.
Inventive points and technical effects
1. The prior art holds the technical prejudice that illumination must be applied during iris acquisition to constrict the pupil and enlarge the iris acquisition area, so the iris must be illuminated while it is being acquired. The utility model overcomes this prejudice and is the first to propose acquiring the iris without illuminating it with a light source.
2. During iris acquisition, when one of a person's eyes first receives illumination, the other eye also exhibits a reaction similar to that of the illuminated eye. The utility model is the first to recognize that this phenomenon can be used for supplementary lighting during iris acquisition, and accordingly proposes using a baffle to block the light of the light source from entering the target iris, preventing reflections.
3. Existing supplementary lighting systems prevent the image of the light source on the collected iris from affecting acquisition by controlling the position of the light source image and the luminous characteristics of the light source. The utility model is the first to propose using an isolating device to attenuate the light of the light source entering the target iris, preventing an image of the light source from appearing and affecting iris acquisition.
4. The utility model is the first to recognize and raise the technical problem that camera volume makes the resolution of multi-camera matrix acquisition low and therefore unsuitable for iris acquisition, and proposes that the positions at which a rotating camera captures images satisfy specific conditions so as to improve acquisition resolution, which can reach the pixel level.
Detailed description of the invention
By reading the following detailed description of the preferred embodiments, various other advantages and benefits will become clear to those of ordinary skill in the art. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered a limitation of the utility model. Throughout the drawings, the same reference numerals denote the same parts. In the drawings:
Fig. 1 is a schematic diagram of the light-control-based iris information acquisition system of Embodiment 1 of the utility model;
Fig. 2 is a schematic diagram of one implementation of the light-control-based iris information acquisition system of Embodiment 1 of the utility model;
Fig. 3 is a schematic diagram of another implementation of the light-control-based iris information acquisition system of Embodiment 1 of the utility model;
Fig. 4 is a schematic diagram of an iris 3D information acquisition system of Embodiment 2 of the utility model;
Fig. 5 is a schematic diagram of the camera position requirements for moving shooting in Embodiment 2 of the utility model;
Fig. 6 is a schematic diagram of a first implementation of single-camera rotating acquisition by the iris 3D information acquisition system of Embodiment 3 of the utility model;
Fig. 7 is a schematic diagram of a second implementation of single-camera rotating acquisition by the iris 3D information acquisition system of Embodiment 3 of the utility model;
Fig. 8 is a schematic diagram of a third implementation of single-camera rotating acquisition by the iris 3D information acquisition system of Embodiment 3 of the utility model;
Fig. 9 is a schematic diagram of a fourth implementation of single-camera rotating acquisition by the iris 3D information acquisition system of Embodiment 3 of the utility model;
Fig. 10 is a schematic diagram of a fifth implementation of single-camera rotating acquisition by the iris 3D information acquisition system of Embodiment 3 of the utility model;
Fig. 11 is a schematic diagram of a sixth implementation of single-camera rotating acquisition by the iris 3D information acquisition system of Embodiment 3 of the utility model;
Fig. 12 is a schematic diagram of a first implementation of iris 3D information acquisition using light deflection in Embodiment 4 of the utility model;
Fig. 13 is a schematic diagram of a second implementation of iris 3D information acquisition using light deflection in Embodiment 4 of the utility model;
Fig. 14 is a schematic diagram of a third implementation of iris 3D information acquisition using light deflection in Embodiment 4 of the utility model.
201 image acquisition device, 500 isolating device, 600 light source, 400 processor, 301 target iris, 302 non-target iris, 501 isolation plate, 601 left light source, 602 right light source, 502 light orienting device, 101 track, 100 image processing apparatus, 102 mechanical moving device, 202 rotating shaft, 203 shaft driving device, 204 lifting device, 205 lifting driving device, 4 control terminal, 211 light deflection unit.
Specific embodiment
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be realized in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the disclosure will be more thoroughly understood and its scope fully conveyed to those skilled in the art.
Please refer to Fig. 1 to Fig. 5.
Embodiment 1 (isolating device)
The system includes an image acquisition device 201, an isolating device 500, a light source 600, and a processor 400. The target iris 301 and the non-target iris 302 are the two eyes of the same person.
The image acquisition device 201 may be any equipment capable of image acquisition, such as a multi-camera matrix, a fixed single camera, a video camera, or a rotating single camera. It is used to acquire images of the target iris 301. For two-dimensional iris acquisition, only a 2D iris image needs to be acquired and sent to the processor 400 for image processing, measurement, and recognition. However, 2D iris measurement and recognition can no longer meet current requirements for high-precision, high-accuracy acquisition, measurement, and recognition, so the utility model also proposes three-dimensional iris acquisition using a virtual camera matrix. In that case, the image acquisition device 201 sends the multiple acquired pictures to the processor 400 for image processing and synthesis (for the specific method, see the embodiments below), forming a 3D image and point cloud data.
The light source 600 provides light to the human eye, causing pupil constriction and thereby enlarging the iris area. The light source 600 may be a single light source or a distributed light source, and may be a fixed-illumination light source or a controlled intelligent light source.
During iris acquisition, the pupil needs to be constricted in order to increase the effective area. The common way to constrict the pupil at present is to apply supplementary lighting during iris measurement, i.e. to use a certain amount of illumination so that more light enters the pupil, causing physiological pupil constriction. This process necessarily requires a light source. However, the crystalline lens in front of the eyeball is a transparent object; when a light source illuminates the eye, the lens reflects the source light, forming an image of the light source on the eye, and this image interferes with iris acquisition. When the image of the light source 600 overlaps the iris, the iris information cannot be collected; only a bright image of the light source is captured. The utility model therefore arranges an isolating device 500 between the target iris 301 and the non-target iris 302. Here the isolating device 500 may be an isolation plate 501, which blocks the light of the light source 600 from entering the target iris 301 and prevents a reflected image from appearing on it. The position and size of the isolation plate 501 can be set according to the position of the light source 600 and the position of the target iris, as long as it blocks the light of the light source 600 from entering the target iris 301. At the same time, it must be ensured that the light of the light source 600 can enter the non-target iris, so as to cause the person's pupils to constrict. For example, the isolation plate 501 is located between the person's two irises, and the light source 600 is located on the side of the non-target iris 302, so the light of the light source 600 enters the non-target iris 302 without entering the target iris 301. When the eye on the side of the non-target iris 302 perceives stronger light, the pupils of both eyes constrict. Thus, even though no light of the light source 600 falls on the target iris 301, the pupil at the center of the target iris 301 still constricts, increasing the area of the target iris 301.
The isolating device 500 may be opaque, preventing light from entering the target iris; or it may be semi-transparent, or transmit only light of a specific wavelength. For example, the isolating device 500 may transmit green light while the light source 600 is a red light source; the isolating device 500 then blocks the red light from entering the target iris. In addition, the light source 600 may be a linearly polarized light source, with the isolating device 500 being a polarizer whose polarization axis is perpendicular to that polarization state.
The system may also have two light sources 600, a left light source 601 and a right light source 602, as shown in Fig. 1, with the isolation plate 501 between them. When the left light source 601 emits light, the image acquisition device 201 acquires the right iris image; after that acquisition, when the right light source 602 emits light, the image acquisition device 201 acquires the left iris image.
The non-target iris 302 and the target iris 301 belong to different eyes of the same person.
The isolating device 500 may also be a light orienting device 502, which directs the light of the light source 600 toward the non-target iris 302, reducing the light entering the target iris 301 and thereby preventing an image of the light source 600 from appearing on the target iris 301. The orienting device 502 may be an orienting device outside the light source, i.e. after the light source 600 emits light, it guides the light into the non-target iris 302, for example a mirror or a transflective mirror. It may also be an orienting device inside the light source, i.e. the light emitted by the luminous unit inside the light source is shaped into light of a specific direction that enters the non-target iris 302. It may also be a light source rotating device, which rotates the light source to control its emission direction so that the light enters only the non-target iris 302. For example, the light source 600 may be an array of intelligent light sources; by adjusting the luminous intensity and rotation angle of each light source, the non-target iris 302 can be illuminated without an image of the light source appearing on the target iris 301.
Embodiment 2
To solve the above technical problems, an embodiment of the utility model provides an iris 3D information acquisition/measurement device. As shown in Fig. 4, it specifically includes a track 101, an image acquisition device 201, an image processing apparatus 100, and a mechanical moving device 102. The image acquisition device 201 is mounted on the mechanical moving device 102, which can move along the track 101, so that the acquisition region of the image acquisition device 201 changes continuously. Over a period of time, multiple acquisition regions at different positions in space are formed, constituting an acquisition matrix; but at any one moment there is only one acquisition region, so the acquisition matrix is "virtual". Since the image acquisition device 201 usually consists of a camera, it is also called a virtual camera matrix. However, the image acquisition device 201 may also be a video camera, a CCD, a CMOS sensor, a camera, a mobile phone with an image acquisition function, a tablet, or other electronic equipment.
The matrix points of the above virtual matrix are determined by the positions of the image acquisition device 201 when acquiring images of the target object. Two adjacent positions at least satisfy the following conditions:
H*(1-cos b) = L*sin(2b);
a = m*b;
0 < m < 1.5;
where L is the distance from the image acquisition device 201 to the target object (iris), usually the distance from the image acquisition device 201 at the first position to the plane of the acquired region of the target object, and m is a coefficient.
H is the actual size of the target object in the acquired image. The image is usually the picture taken by the image acquisition device 201 at the first position; the target object in that picture has a true geometric size (not the size in the picture), measured along the direction from the first position to the second position. For example, if the first and second positions are related by a horizontal movement, the size is measured along the horizontal transverse direction of the target object. If the leftmost point of the target object shown in the picture is A and the rightmost point is B, then H is the linear distance from A to B measured on the target object. The measurement may compute the actual distance from the distance between A and B in the picture combined with the focal length of the camera lens; alternatively, A and B may be marked on the target object and the linear distance AB measured directly by other measurement means.
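The first measurement option above, computing the real A-to-B distance from the distance between A and B in the picture together with the lens focal length, is a plain pinhole-camera calculation. The following sketch is illustrative only and not part of the utility model; the function name and parameters are assumptions.

```python
def object_size_from_image(pixel_extent_ab, focal_length_px, distance_l):
    """Estimate the real A-to-B distance H on the target object.

    pixel_extent_ab : distance between A and B in the picture, in pixels
    focal_length_px : camera focal length expressed in pixels
    distance_l      : distance L from the camera to the target object
    """
    # Pinhole model: size_in_image / focal_length = size_on_object / distance
    return pixel_extent_ab * distance_l / focal_length_px

# Example: a 400 px extent at 2000 px focal length, object 0.5 m away
h = object_size_from_image(400, 2000, 0.5)  # 0.1 m on the object
```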
a is the included angle between the optical axes of the image acquisition device at two adjacent positions.
m is a coefficient.
Since target objects differ in size and surface relief, the value of a cannot be limited by a strict formula and needs to be determined empirically. According to many experiments, the value of m is preferably within 1.5, and more preferably within 0.8. For specific experimental data, see the table below:
Once the target object and the image acquisition device 201 are determined, the value of a can be calculated from the above empirical formula, and from the value of a the parameters of the virtual matrix, i.e. the positional relationship between matrix points, can be determined.
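For illustration only (the solver and its parameter choices are assumptions, not part of the utility model), the empirical condition can be solved numerically for b once L and H are known, after which the optical-axis angle follows as a = m*b:

```python
import math

def solve_axis_angle(L, H, m=0.8, lo=1e-9, hi=math.pi / 2):
    """Solve H*(1 - cos b) = L*sin(2b) for b on (0, pi/2) by bisection,
    then return the adjacent-position optical-axis angle a = m*b."""
    f = lambda b: H * (1.0 - math.cos(b)) - L * math.sin(2.0 * b)
    # f is negative near 0 and positive near pi/2 for L, H > 0
    assert f(lo) < 0 < f(hi), "no sign change: adjust the bracket"
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    b = 0.5 * (lo + hi)
    return m * b

# Example: camera 0.5 m away, object extent 0.03 m in the image
a = solve_axis_angle(0.5, 0.03, m=0.8)
```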
In general, the virtual matrix is a one-dimensional matrix, for example multiple matrix points (acquisition positions) arranged along the horizontal direction. But when some target objects are large, a two-dimensional matrix is needed; then two adjacent positions in the vertical direction likewise satisfy the above condition on a.
In some cases, even with the above empirical formula, it is not easy to determine the matrix parameter (the value of a), and the matrix parameters then need to be adjusted experimentally. The experimental method is as follows: calculate a predicted matrix parameter a from the above formula, and control the camera to move to the corresponding matrix points according to the matrix parameter; for example, the camera takes picture P1 at position W1, then moves to position W2 and takes picture P2. Compare whether picture P1 and picture P2 contain a part representing the same region of the target object, i.e. whether P1 ∩ P2 is non-empty (for example, both contain a part of the eye corner, but photographed at different angles). If not, readjust the value of a, move to a new position W2', and repeat the comparison step. If P1 ∩ P2 is non-empty, continue to move the camera according to the (adjusted or unadjusted) value of a to position W3 and take picture P3, and again compare whether pictures P1, P2 and P3 contain a part representing the same region of the target object, i.e. whether P1 ∩ P2 ∩ P3 is non-empty (please refer to Fig. 2). Multiple pictures are then used to synthesize 3D, and the 3D synthesis effect is tested to confirm that it meets the requirements of 3D information acquisition and measurement. That is, the structure of the matrix is determined by the positions of the image acquisition device 201 when acquiring multiple images, and three adjacent positions satisfy the condition that the three images acquired at the corresponding positions at least contain a part representing the same region of the target object.
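The trial-and-adjust procedure above can be sketched in miniature. In this toy model (an assumption for illustration, not the utility model's implementation), each picture is reduced to the set of object-region labels it covers, so the tests "P1 ∩ P2 non-empty" and "P1 ∩ P2 ∩ P3 non-empty" become set intersections:

```python
def adjust_matrix_parameter(capture, a_pred, shrink=0.5, max_tries=10):
    """Toy version of the experimental adjustment described above.

    capture(a, k) returns the set of object-region labels visible in the
    picture taken at the k-th matrix point for parameter a (a hypothetical
    stand-in for moving the camera and shooting).
    """
    a = a_pred
    for _ in range(max_tries):
        p1, p2 = capture(a, 0), capture(a, 1)
        if not (p1 & p2):          # P1 ∩ P2 empty: readjust a, retry at W2'
            a *= shrink
            continue
        p3 = capture(a, 2)
        if p1 & p2 & p3:           # P1 ∩ P2 ∩ P3 non-empty: accept a
            return a
        a *= shrink
    raise RuntimeError("no overlapping parameter found")

# Toy capture: with step angle a, picture k sees regions [k*a, k*a + 10)
def capture(a, k):
    return set(range(int(k * a), int(k * a) + 10))

a = adjust_matrix_parameter(capture, a_pred=20)  # shrinks 20 -> 2.5
```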
After the virtual matrix obtains multiple images of the target object, the image processing apparatus processes these images and synthesizes 3D. A 3D point cloud or image may be synthesized from the multiple images taken by the camera at multiple angles, using a method of image stitching based on feature points of adjacent images; other methods may also be used.
The image stitching method includes:
(1) Processing the multiple images and extracting the respective feature points. The features of the respective feature points in the multiple images may be described using SIFT (Scale-Invariant Feature Transform) feature descriptors. A SIFT feature descriptor is a 128-dimensional description vector that can describe 128 aspects of any feature point in direction and scale, significantly improving the precision of the feature description, while the feature descriptor is spatially independent.
(2) Based on the extracted feature points of the multiple images, generating feature point cloud data of the facial features and feature point cloud data of the iris features respectively. This specifically includes:
(2-1) Matching the feature points of the multiple pictures according to the features of the respective feature points of each extracted image, establishing a matched facial feature point data set and, likewise, a matched iris feature point data set;
(2-2) Obtaining the different camera positions at which the multiple images were taken according to the optical information of the camera, calculating the relative spatial position of the camera at each position with respect to the feature points, and from these relative positions calculating the spatial depth information of the feature points in the multiple images. The calculation may use the bundle adjustment method.
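A full bundle adjustment is beyond a short example, but the core of step (2-2), recovering feature-point depth from known relative camera positions, can be illustrated with the simplest case: a rectified two-view pair, where depth follows directly from disparity. The helper below is an illustrative assumption, not the patent's method.

```python
def depth_from_disparity(focal_px, baseline, disparity_px):
    """Rectified two-view depth: Z = f * B / d.

    focal_px     : focal length in pixels
    baseline     : distance between the two camera positions
    disparity_px : horizontal shift of the matched feature point, in pixels
    """
    if disparity_px <= 0:
        raise ValueError("matched point must have positive disparity")
    return focal_px * baseline / disparity_px

# A feature matched with a 40 px disparity, 2000 px focal, 2 cm baseline
z = depth_from_disparity(2000, 0.02, 40)  # 1.0 (same units as baseline)
```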
The calculated spatial depth information of the feature points may include spatial position information and color information, i.e. the X-axis coordinate of the feature point's spatial position, the Y-axis coordinate, the Z-axis coordinate, the value of the R channel of the feature point's color information, the value of the G channel, the value of the B channel, the value of the Alpha channel, and so on. In this way, the generated feature point cloud data contains the spatial position information and color information of the feature points, and the format of the feature point cloud data may be as follows:
X1 Y1 Z1 R1 G1 B1 A1
X2 Y2 Z2 R2 G2 B2 A2
……
Xn Yn Zn Rn Gn Bn An
where Xn denotes the X-axis coordinate of the feature point's spatial position; Yn denotes the Y-axis coordinate; Zn denotes the Z-axis coordinate; Rn denotes the value of the R channel of the feature point's color information; Gn denotes the value of the G channel; Bn denotes the value of the B channel; and An denotes the value of the Alpha channel.
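As a small illustration (an assumed helper, not part of the utility model), one line of the feature point cloud format above can be parsed as follows:

```python
def parse_point_line(line):
    """Split one 'X Y Z R G B A' line into ((x, y, z), (r, g, b, a))."""
    x, y, z, r, g, b, a = line.split()
    return (float(x), float(y), float(z)), (int(r), int(g), int(b), int(a))

pos, color = parse_point_line("0.1 -0.2 0.55 128 64 200 255")
# pos   -> (0.1, -0.2, 0.55)
# color -> (128, 64, 200, 255)
```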
(2-3) Generating feature point cloud data of the target object's features from the matched feature point data sets of the multiple images and the spatial depth information of the feature points.
(2-4) Constructing a 3D model of the target object from the feature point cloud data, thereby realizing the acquisition of target object point cloud data.
(2-5) Attaching the acquired color and texture of the target object to the point cloud data to form a 3D image of the target object.
All images in a group may be used to synthesize the 3D image, or higher-quality images may be selected from the group for synthesis.
The above stitching method is only a limited example; the method is not limited to it, and any method that generates a three-dimensional image from multiple multi-angle two-dimensional images may be used.
Embodiment 3 (single-axis rotating iris acquisition)
A small-range, small-depth target object 3 has a lateral size that is small compared with the camera's acquisition range and a small size along the camera's depth-of-field direction, i.e. the target object 3 carries little information in the depth direction. In such applications, although a single-camera system that moves over a large range by means of a track, mechanical arm, or the like can likewise acquire multi-angle images of the target object 3 to synthesize a 3D point cloud or image, such equipment is relatively complex, which reduces reliability, and the large movements lengthen the acquisition time. Moreover, due to its large volume, it cannot be used in many situations (for example, access control systems).
A small-range, small-depth target object 3 also has its own peculiar characteristics: it requires the acquisition/measurement device to be small in volume, highly reliable, and fast in acquisition, and in particular it requires only a small acquisition range (a large-depth target object instead requires large-range acquisition, in particular requiring the camera to be at different positions to acquire all the information). The applicant is the first to propose this application and situation and, based on its characteristics, to realize 3D point cloud and image acquisition of the target object 3 with the most succinct rotating device, making full use of the small acquisition range required by the target object 3.
The 3D information acquisition system includes: an image acquisition device 201, for acquiring a group of images of the target object 3 through relative motion between the acquisition region of the image acquisition device 201 and the target object 3; and an acquisition-region moving device, for driving the acquisition region of the image acquisition device 201 to move relative to the target object 3. The acquisition-region moving device is a rotating device, so that the image acquisition device 201 rotates along a central axis.
Referring to Fig. 6 to Fig. 11, the image acquisition device 201 is a camera, fixedly mounted on a camera fixing frame on a rotating seat. A rotating shaft 202 is connected beneath the rotating seat, and its rotation is controlled by a shaft driving device 203. The shaft driving device 203 and the camera are both connected to a control terminal 4, which controls the shaft driving device 203 to drive the rotation and the camera to shoot. Alternatively, the rotating shaft 202 may be directly fixed to the image acquisition device 201 to drive the camera to rotate.
Unlike traditional 3D acquisition, the target object 3 of the application, an iris, belongs to small-scale 3D objects. There is therefore no need to reproduce the target on a large scale, but its main surface features must be acquired, measured, and compared with high precision, i.e. the measurement accuracy requirement is high. The camera rotation angle does not need to be large, but the rotation angle must be accurately controlled. The utility model arranges an angle acquisition device on the driving rotating shaft 202 and/or the rotating seat: the shaft driving device 203 drives the rotating shaft 202 so that the camera rotates by the set degree, and the angle acquisition device measures the degree of rotation and feeds the measurement back to the control terminal 4, where it is compared with the set degree to guarantee rotation precision. The shaft driving device 203 drives the rotating shaft 202 through two or more angles; driven by the rotating seat, the camera rotates circumferentially around the central axis and completes shooting at the different angles. The images shot at the different angles are sent to the control terminal 4, which processes the data to generate the final 3D image. They may also be sent to a processing unit that performs the 3D synthesis (for the specific synthesis method, see the image stitching method described in Embodiment 2). The processing unit may be a self-contained device, integrated with another device having a processing function, or remote equipment. The camera may also be connected to an image pre-processing unit that pre-processes the images. Referring to Fig. 1, the target object 3 is an iris; during camera rotation it must be ensured that the target object 3 stays within the acquisition region being shot.
The control terminal 4 may be a processor, a computer, a remote control center, etc.
The image acquisition device 201 may be replaced by a video camera, a CCD, an infrared camera, or other image acquisition equipment. Meanwhile, the image acquisition device 201 may be integrally mounted on a support, such as a tripod or a fixed platform.
The shaft driving device 203 may be a brushless motor, a high-precision stepper motor with an angular encoder, a rotating electric machine, etc.
Referring to Fig. 7, the rotary shaft 202 is located below the image collecting device 201 and is directly connected to it; in this case the central axis intersects the image collecting device 201. In Fig. 8, the central axis is located on the lens side of the camera of the image collecting device 201; the camera rotates about the central axis while shooting, and a rotating connecting arm is arranged between the rotary shaft 202 and the rotating seat. In Fig. 9, the central axis is located on the side opposite the lens of the camera of the image collecting device 201; here too the camera rotates about the central axis while shooting, a rotating connecting arm is arranged between the rotary shaft 202 and the rotating seat, and the connecting arm may, as needed, be given a structure curved upward or downward. In Fig. 10, the central axis is located on the side opposite the lens of the camera of the image collecting device 201 and is arranged horizontally; this arrangement allows the camera to change its angle in the vertical direction and to adapt to shooting target objects 3 with particular features in that direction, wherein the shaft driving device 203 drives the rotary shaft 202 to rotate, which in turn drives a swinging connecting arm to move up and down. In Fig. 11, the shaft driving device 203 further includes a lifting device 204 and a lifting drive device 205 for controlling the motion of the lifting device 204; the lifting drive device 205 is connected to the control terminal 4 and enlarges the shooting range of the 3D information acquisition system.
The 3D information acquisition system occupies little space, and its shooting efficiency is markedly higher than that of systems requiring large-scale camera movement; it is particularly suitable for application scenarios requiring high-precision 3D acquisition of small-range, small-depth targets.
Embodiment 4 (iris acquisition by light deflection)
Referring to Figs. 12 to 14, the iris 3D information acquisition system includes: an image collecting device 201 for acquiring a group of images of the target object 3 through relative motion between the pickup area of the image collecting device 201 and the target object 3; and a pickup-area moving device for driving the pickup area of the image collecting device 201 and the target object 3 to move relative to each other. The pickup-area moving device is an optical scanning device, so that relative motion between the pickup area of the image collecting device 201 and the target object 3 is produced without the image collecting device 201 itself moving or rotating.
Referring to Fig. 12, the pickup-area moving device further includes a light deflection unit 211, which is optionally driven by a light deflection driving unit 212. The image collecting device 201 is a camera that is fixedly mounted; its physical position does not change, and it neither moves nor rotates. The light deflection unit 211 causes the pickup area of the camera to change, so that the relationship between the target object 3 and the pickup area changes. During this process, the light deflection unit 211 may be driven by the light deflection driving unit 212 so that light from different directions enters the image collecting device 201. The light deflection driving unit 212 may be a driving device that moves the light deflection unit 211 linearly or rotates it. The light deflection driving unit 212 and the camera are both connected to the control terminal 4, which controls the driving and the camera shooting.
It will likewise be appreciated that, unlike traditional 3D acquisition techniques, the target object 3 of the present application is a small-range 3D object. There is therefore no need to reproduce the target on a large scale; rather, the main features of its surface must be acquired, measured and compared with high precision, i.e. the measurement accuracy requirement is high. Accordingly, the displacement or rotation of the light deflection unit 211 of the utility model need not be large, but precision must be guaranteed and the target object 3 must remain within the shooting range. In the utility model, an angle acquisition device and/or a displacement acquisition device is arranged on the light deflection unit 211; when the light deflection driving unit 212 drives the light deflection unit 211 to move, the angle acquisition device and/or displacement acquisition device measures the rotation and/or linear displacement and feeds the measurement back to the control terminal 4, where it is compared with preset parameters to guarantee precision. When the light deflection driving unit 212 drives the light deflection unit 211 to rotate and/or to be displaced, the camera completes two or more shots corresponding to the different position states of the light deflection unit 211; the images of the two or more shots are sent to the control terminal 4, which processes the data and generates the final 3D image. The camera may further be connected to an image pre-processing unit that pre-processes the images.
The control terminal 4 may be a processor, a computer, a remote control center, etc.
The image collecting device 201 may instead be a video camera, a CCD, or another image acquisition device such as an infrared camera. The image collecting device 201 is fixed on a mounting platform, and its position does not change.
The light deflection driving unit 212 may be a brushless motor, a high-precision stepper motor, a motor with an angular encoder, a rotating electric machine, etc.
Referring to Fig. 12, the light deflection unit 211 is a reflecting mirror. It will be understood that one or more reflecting mirrors may be provided as the measurement requires, one or more light deflection driving units 212 may be arranged correspondingly, and the mirror angle is controlled to change so that light from different directions enters the image collecting device 201. In Fig. 13, the light deflection unit 211 is a lens group; one or more lenses may be provided in the lens group, one or more light deflection driving units 212 may be arranged correspondingly, and the lens angle is controlled to change so that light from different directions enters the image collecting device 201. In Fig. 14, the light deflection unit 211 includes a multi-surface rotating mirror.
In addition, the light deflection unit 211 may be a DMD (digital micromirror device), i.e. the deflection direction of the DMD micromirrors may be controlled by electrical signals so that light from different directions enters the image collecting device 201. Since the DMD is very small, it can significantly reduce the size of the whole equipment, and since the DMD can rotate at high speed, it can greatly improve the measurement and acquisition speed. This is also one of the inventive points of the utility model.
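The geometry behind all of these deflection units is the law of reflection: tilting a mirror (or a DMD micromirror) changes which scene direction is folded into the fixed camera. A minimal sketch with made-up tilt values, to illustrate why the pickup area moves while the camera does not:

```python
import numpy as np

def reflect(d, n):
    """Reflect ray direction d off a mirror with (not necessarily unit) normal n."""
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

# A fixed camera looks along +x at the mirror. With the mirror face-on it sees
# straight back along -x; a small tilt of the normal steers the pickup area
# sideways even though the camera itself never moves or rotates.
cam_dir = np.array([1.0, 0.0, 0.0])
face_on = reflect(cam_dir, np.array([-1.0, 0.0, 0.0]))   # straight back
tilted = reflect(cam_dir, np.array([-1.0, 0.2, 0.0]))    # deflected pickup direction
```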
Although the above two embodiments are described separately, it will be appreciated that implementing camera rotation and light deflection simultaneously is also possible.
A 3D information measuring device includes the 3D information acquisition system. The 3D information acquisition system obtains the 3D information and sends it to the control terminal 4, which calculates and analyzes the acquired information to obtain the spatial coordinates of all feature points on the target object 3. The device includes a 3D information image stitching module, a 3D information pre-processing module, a 3D information algorithm selection module, a 3D information computing module, and a spatial-coordinate-point 3D information reconstruction module. These modules perform calculation processing on the data obtained by the 3D information acquisition system and generate measurement results, which may be a 3D point cloud image. The measurements include geometric parameters such as length, profile, area, and volume.
A 3D information comparison device includes the 3D information acquisition system. The 3D information acquisition system obtains the 3D information and sends it to the control terminal 4, which calculates and analyzes the acquired information to obtain the spatial coordinates of all feature points on the target object 3, compares them with preset values, and judges the state of the measured target. In addition to the modules of the aforementioned 3D information measuring device, the 3D information comparison device further includes a preset 3D information extraction module, an information comparison module, a comparison result output module, and a prompt module. The comparison device can compare the measurement results of the measured target object 3 with preset values, facilitating inspection and reworking of products. When the comparison result shows that the deviation between the measured target object 3 and the preset value exceeds a threshold, a warning prompt is issued.
A mating object generating device for the target object 3 can generate, from the 3D information of at least one region of the target object 3 acquired by the 3D information acquisition system, a mating object matched to the corresponding region of the target object 3. Specifically, when the utility model is applied to the production of sports equipment or medical auxiliary devices, individual differences exist in human anatomy, so a uniform mating object cannot satisfy everyone's needs. The 3D information acquisition system of the utility model obtains an image of a person's elbow and inputs its three-dimensional structure into the mating object generating device, which produces an elbow support tailored to aid the rehabilitation of that elbow. The mating object generating device may be an industrial molding machine, a 3D printer, or any other production equipment understood by those skilled in the art; configured with the 3D information acquisition system of the present application, it realizes rapid customized production.
Although the utility model describes the above applications (measurement, comparison, generation), it will be understood that the utility model can also be used independently as a 3D information collecting device.
A 3D information acquisition method includes:
S1. While the pickup area of the image collecting device 201 and the target object 3 move relative to each other, the image collecting device 201 acquires a group of images of the target object 3;
S2. A pickup-area moving device drives the pickup area of the image collecting device 201 and the target object 3 to move relative to each other by one of the following two schemes:
S21. the pickup-area moving device is a rotating device, so that the image collecting device 201 rotates about a central axis;
S22. the pickup-area moving device is an optical scanning device, so that relative motion between the pickup area of the image collecting device 201 and the target object 3 is produced without the image collecting device 201 moving or rotating.
The 3D point cloud or image is synthesized from the multiple images shot by the camera at multiple angles; the method of stitching images according to the feature points of adjacent images may be used, and other methods may also be used.
The image stitching method includes:
(1) Processing the multiple images and extracting the feature points of each. The features of the feature points in the multiple images may be described using SIFT (Scale-Invariant Feature Transform) feature descriptors. A SIFT feature descriptor is a 128-dimensional vector describing a feature point across direction and scale, which significantly improves the precision of the feature description; the descriptor is also spatially independent.
(2) Based on the feature points extracted from the multiple images, generating the feature point cloud data of the facial features and the feature point cloud data of the iris features, respectively. This specifically includes:
(2-1) Matching the feature points of the multiple images according to the features of the feature points extracted from each image, and establishing a matched facial-feature-point data set; matching the feature points of the multiple images according to the features of the feature points extracted from each image, and establishing a matched iris-feature-point data set;
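Step (2-1) can be illustrated with plain NumPy: match 128-dimensional SIFT-style descriptors by nearest-neighbour Euclidean distance, keeping only matches that pass Lowe's ratio test. The ratio test is a standard companion to SIFT assumed here for illustration; the patent does not prescribe a particular matching rule.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching of 128-D descriptors with Lowe's ratio test:
    keep a match only when the best distance is clearly smaller than the
    second best. Returns (index_in_a, index_in_b) pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # distance to every candidate
        j, k = np.argsort(dists)[:2]                 # best and second best
        if dists[j] < ratio * dists[k]:              # unambiguous match only
            matches.append((i, int(j)))
    return matches
```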
(2-2) According to the optical information of the camera, obtaining the different camera positions at which the multiple images were taken, calculating the relative spatial position of the camera at each position with respect to the feature points, and calculating the spatial depth information of the feature points in the multiple images from those relative positions. The calculation may use the bundle adjustment method.
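A minimal linear-triangulation (DLT) sketch of the depth computation in (2-2); the intrinsics, camera shift, and point below are fabricated for illustration, and a real pipeline would refine such estimates with bundle adjustment as noted above.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its pixel projections x1, x2 in two cameras
    with 3x4 projection matrices P1, P2 (linear DLT method)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)      # null vector of A is the homogeneous point
    X = vt[-1]
    return X[:3] / X[3]              # dehomogenize to (X, Y, Z)

# Two synthetic camera positions: identical intrinsics, second camera shifted
# 0.5 units along x -- a stand-in for "the camera at different locations".
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
```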
The calculated spatial depth information of the feature points may include spatial position information and color information, namely: the X-axis coordinate of the feature point's spatial position, the Y-axis coordinate, the Z-axis coordinate, the R-channel value of the feature point's color information, the G-channel value, the B-channel value, the Alpha-channel value, etc. In this way, the generated feature point cloud data contains both the spatial position information and the color information of the feature points; the format of the feature point cloud data may be as follows:
X1 Y1 Z1 R1 G1 B1 A1
X2 Y2 Z2 R2 G2 B2 A2
……
Xn Yn Zn Rn Gn Bn An
Wherein Xn denotes the X-axis coordinate of the feature point's spatial position; Yn denotes the Y-axis coordinate; Zn denotes the Z-axis coordinate; Rn denotes the R-channel value of the feature point's color information; Gn denotes the G-channel value; Bn denotes the B-channel value; and An denotes the Alpha-channel value.
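A sketch of writing and reading the XYZRGBA text layout above; the three-decimal coordinate formatting is an arbitrary choice for illustration, not specified by the patent.

```python
def format_point(x, y, z, r, g, b, a):
    """One record of the 'X Y Z R G B A' feature-point-cloud layout."""
    return f"{x:.3f} {y:.3f} {z:.3f} {r} {g} {b} {a}"

def parse_cloud(text):
    """Parse lines of 'X Y Z R G B A' back into ((x, y, z), (r, g, b, a)) tuples."""
    points = []
    for line in text.strip().splitlines():
        x, y, z, r, g, b, a = line.split()
        points.append(((float(x), float(y), float(z)),
                       (int(r), int(g), int(b), int(a))))
    return points
```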
(2-3) Generating the feature point cloud data of the features of the target object 3 from the matched feature-point data sets of the multiple images and the spatial depth information of the feature points.
(2-4) Constructing a 3D model of the target object from the feature point cloud data, thereby realizing the acquisition of the point cloud data of the target object 3.
(2-5) Attaching the collected color and texture of the target object 3 to the point cloud data to form a 3D image of the target object.
All of the images in the group may be used to synthesize the 3D image, or higher-quality images may be selected from the group for synthesis.
Embodiment 5
When forming the matrix, it is also necessary to ensure that the proportion of the target object in the picture shot by the camera at each matrix point is appropriate and that the shot is sharp. Therefore, during matrix formation, the camera needs to perform zooming and focusing at the matrix points.
(1) zoom
After the camera shoots the target object, the proportion of the target object in the camera's field of view is estimated and compared with a predetermined value. Zooming is required if the proportion is too large or too small. The zooming method may be: using an additional displacement device to move the image collecting device 201 radially, bringing it closer to or farther from the target object, so that at each matrix point the proportion of the target object in the picture remains substantially unchanged.
A distance measuring device is also included, which can measure the real-time distance (object distance) from the image collecting device 201 to the target object. The three-way relationship between the object distance, the proportion the target object occupies in the picture, and the focal length can be tabulated, so that the object distance can be determined by table lookup from the focal length and the measured proportion, thereby determining the matrix point.
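Under a pinhole-camera assumption, the tabulated relation is roughly proportion ≈ target size × focal length / (object distance × sensor size), so the table can be inverted to read the object distance off the focal length and the measured in-picture proportion. All constants below are made-up stand-ins for table entries:

```python
FOCAL_MM = 50.0     # assumed focal length
TARGET_MM = 12.0    # assumed target extent (illustrative, not from the patent)
SENSOR_MM = 24.0    # assumed sensor extent

def ratio_in_picture(object_distance_mm, focal_mm=FOCAL_MM):
    """Fraction of the frame the target occupies at a given object distance."""
    image_size = TARGET_MM * focal_mm / object_distance_mm
    return image_size / SENSOR_MM

def lookup_distance(measured_ratio, focal_mm=FOCAL_MM):
    """Invert the relation: estimate object distance from focal length and ratio."""
    return TARGET_MM * focal_mm / (measured_ratio * SENSOR_MM)
```

In practice the tabulated values would come from calibration rather than this closed-form model.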
In some cases, when the region of the target object or the target object itself changes relative to the camera at different matrix points, the focal length can also be adjusted so that the proportion of the target object in the picture remains constant.
(2) auto-focusing
During the formation of the virtual matrix, the distance measuring device measures the distance (object distance) h(x) from the camera to the target object in real time and sends the measurement to the image processing apparatus 100, which looks up the object-distance/focal-length table, finds the corresponding focal length value, and issues a focusing signal to the camera 201; the camera's ultrasonic motor then drives the lens to achieve rapid focusing. In this way, rapid focusing can be realized, and sharp shooting by the image collecting device 201 guaranteed, without adjusting the position of the image collecting device 201 or substantially adjusting its focal length. This is also one of the inventive points of the utility model. Of course, besides focusing by distance measurement, focusing may also be performed by comparing image contrast.
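The contrast-comparison alternative mentioned at the end can be sketched as follows: score each candidate focus position by the variance of the image Laplacian (a common focus measure, assumed here rather than prescribed by the patent) and keep the sharpest frame.

```python
import numpy as np

def sharpness(image):
    """Focus score: variance of the discrete Laplacian over the interior pixels."""
    lap = (image[:-2, 1:-1] + image[2:, 1:-1] +
           image[1:-1, :-2] + image[1:-1, 2:] - 4.0 * image[1:-1, 1:-1])
    return float(lap.var())

def best_focus(frames):
    """Index of the highest-contrast frame among candidate focus positions."""
    return int(np.argmax([sharpness(f) for f in frames]))
```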
In the utility model, the target object may be a single physical object or an assembly of multiple objects.
The 3D information of the target object includes its 3D image, 3D point cloud, 3D mesh, local 3D features, 3D dimensions, and all parameters bearing on the 3D features of the target object.
In the utility model, "3D" and "three-dimensional" refer to information in the three directions XYZ, in particular including depth information, which is essentially different from information with only a two-dimensional plane. It is also essentially different from definitions called 3D, panoramic, holographic, or stereoscopic that in fact include only two-dimensional information and, in particular, no depth information.
The pickup area described in the utility model refers to the range that the image collecting device (e.g. a camera) can shoot.
The image collecting device in the utility model may be a CCD, a CMOS sensor, a camera, a video camera, an industrial camera, a monitor, a webcam, a mobile phone, a tablet, a notebook, a mobile terminal, a wearable device, smart glasses, a smart watch, a smart bracelet, or any device with an image collecting function.
For example, in a specific embodiment, the reflection-free iris information acquisition system uses a commercially available industrial camera WP-UC2000, whose specific parameters are shown in the table below:
The processor or control terminal uses an off-the-shelf computer, such as a Dell Precision 3530, with the following parameters:
The mechanical moving device uses a customized moving guide rail system TM-01, with the following parameters:
Head: three-axis head, with a reserved camera mechanical interface and a computer control interface;
Guide rail: arc-shaped guide rail, mechanically connected to and cooperating with the head;
Servo motor: brand: vertical dimension, model: 130-06025, rated torque: 6 Nm, encoder type: 2500-line incremental, cable length: 300 cm, rated power: 1500 W, rated voltage: 220 V, rated current: 6 A, rated speed: 2500 rpm;
Control mode: controlled by a PC or by other means.
The 3D information of multiple regions of the target object obtained in the above embodiments can be used for comparison, for example for identity recognition. The 3D information of a person's face and iris is first obtained using the embodiments of the utility model and stored in a server as standard data. In use, for example when identity authentication is needed for operations such as payment or door unlocking, the 3D acquisition system can acquire the 3D information of the face and iris again and compare it with the standard data; if the comparison succeeds, the next action is permitted. It will be appreciated that such comparison can also be used for the identification of fixed objects such as antiques and artworks: the 3D information of multiple regions of the antique or artwork is first acquired as standard data, and when authentication is needed, the 3D information of the multiple regions is acquired again and compared with the standard data to distinguish the genuine from the fake.
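The compare-with-standard-data step can be illustrated as a toy acceptance test over point clouds: mean nearest-point distance against a threshold. Everything here is an assumed simplification; a real system would first align the clouds (e.g. with ICP) and use richer features.

```python
import numpy as np

def compare_3d(probe, standard, threshold=0.5):
    """Accept when the mean nearest-point distance from the freshly acquired
    cloud (probe) to the stored standard data is below the threshold."""
    dists = [np.min(np.linalg.norm(standard - p, axis=1)) for p in probe]
    return float(np.mean(dists)) <= threshold
```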
The 3D information of multiple regions of the target object obtained in the above embodiments can be used to design, produce, and manufacture mating objects for the target object. For example, by obtaining 3D data of a human head, a better-fitting hat can be designed and manufactured for that person; by obtaining head data and eye 3D data, suitable glasses can be designed and manufactured.
The 3D information of the target object obtained in the above embodiments can be used to measure the geometric dimensions and outline of the target object.
In the description provided here, numerous specific details are set forth. However, it is understood that embodiments of the utility model may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to simplify the disclosure and aid the understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments the features of the invention are sometimes grouped together into a single embodiment, figure, or description thereof. However, the disclosed method should not be interpreted as reflecting the intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus the claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will understand that the modules in the devices of an embodiment can be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units, or components of an embodiment can be combined into one module, unit, or component, and can furthermore be divided into multiple sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments herein include certain features included in other embodiments but not other features, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or digital signal processor (DSP) may be used in practice to realize some or all of the functions of some or all of the components according to embodiments of the invention. The invention may also be implemented as a device or apparatus program (e.g. a computer program and a computer program product) for executing some or all of the methods described herein. Such a program implementing the invention may be stored on a computer-readable medium or may take the form of one or more signals; such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the invention, and those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices can be embodied by one and the same item of hardware. The use of the words first, second, and third does not indicate any ordering; these words may be interpreted as names.
So far, those skilled in the art will appreciate that, although multiple exemplary embodiments of the invention have been shown and described in detail herein, many other variations or modifications consistent with the principles of the invention can still be directly determined or deduced from the disclosure without departing from the spirit and scope of the invention. Therefore, the scope of the invention should be understood and deemed to cover all such other variations or modifications.

Claims (11)

1. An iris dimension measuring system based on light control, characterized in that it includes:
a light source for providing illumination to a non-target iris;
an isolating device for weakening the light of the light source entering a target iris;
an image collecting device for acquiring image information of the target iris;
a dimension measuring device for measuring the iris dimensions;
wherein the non-target iris and the target iris belong to different eyes of the same person.
2. An iris information acquisition system based on light control, characterized in that it includes:
a light source for providing illumination to a non-target iris;
an isolating device for weakening the light of the light source entering a target iris;
an image collecting device for acquiring image information of the target iris;
wherein the non-target iris and the target iris belong to different eyes of the same person.
3. The iris information acquisition system based on light control according to claim 2, characterized in that the isolating device includes a baffle.
4. The iris information acquisition system based on light control according to claim 3, characterized in that the isolating device is opaque, semi-transparent, or transmits only light of a specific wavelength; or the isolating device is a polarizer.
5. The iris information acquisition system based on light control according to claim 2, characterized in that the isolating device includes a beam directing device.
6. The iris information acquisition system based on light control according to claim 5, characterized in that the directing device is a directing device outside the light source, a directing device inside the light source, or a light source rotating device.
7. The iris information acquisition system based on light control according to claim 2, characterized in that the image collecting device is a single camera.
8. The iris information acquisition system based on light control according to claim 7, characterized in that the image collecting device rotates about a central axis.
9. The iris information acquisition system based on light control according to claim 8, characterized in that the image collecting device obtains images of the target object from different directions via multiple pickup areas.
10. The iris information acquisition system based on light control according to claim 9, characterized in that, when acquiring the multiple images, the positions of the image collecting device at least satisfy, for two adjacent positions, the following conditions:
H*(1-cosb)=L*sin2b;
a=m*b;
0<m<0.8;
wherein L is the distance from the image collecting device to the target object, H is the actual size of the target object in the acquired image, a is the included angle between the optical axes of the image collecting device at the two adjacent positions, and m is a coefficient.
11. The iris information acquisition system based on light control according to claim 9, characterized in that, when acquiring the multiple images, three adjacent positions of the image collecting device satisfy the condition that the three images acquired at the corresponding positions each contain at least a part representing the same region of the target object.
CN201821687786.9U 2018-10-18 2018-10-18 A kind of iris dimensions measuring system and information acquisition system based on light control Active CN209203221U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201821687786.9U CN209203221U (en) 2018-10-18 2018-10-18 A kind of iris dimensions measuring system and information acquisition system based on light control


Publications (1)

Publication Number Publication Date
CN209203221U true CN209203221U (en) 2019-08-06

Family

ID=67456031

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201821687786.9U Active CN209203221U (en) 2018-10-18 2018-10-18 A kind of iris dimensions measuring system and information acquisition system based on light control

Country Status (1)

Country Link
CN (1) CN209203221U (en)

Similar Documents

Publication Publication Date Title
CN109443199B (en) 3D information measuring system based on intelligent light source
CN109394168B (en) A kind of iris information measuring system based on light control
CN109218702B (en) Camera rotation type 3D measurement and information acquisition device
CN109141240B (en) A kind of measurement of adaptive 3 D and information acquisition device
CN109285109B (en) A kind of multizone 3D measurement and information acquisition device
CN109146961A (en) A kind of 3D measurement and acquisition device based on virtual matrix
CN208653401U (en) Adapting to image acquires equipment, 3D information comparison device, mating object generating means
CN209279885U (en) Image capture device, 3D information comparison and mating object generating means
CN111060023A (en) High-precision 3D information acquisition equipment and method
CN109269405A (en) A kind of quick 3D measurement and comparison method
CN208795174U (en) Camera rotation type image capture device, comparison device, mating object generating means
US11509835B2 (en) Imaging system and method for producing images using means for adjusting optical focus
CN109146949B (en) A kind of 3D measurement and information acquisition device based on video data
CN111780682A (en) 3D image acquisition control method based on servo system
CN109394170B (en) A kind of iris information measuring system of no-reflection
CN211178345U (en) Three-dimensional acquisition equipment
CN109084679B (en) A kind of 3D measurement and acquisition device based on spatial light modulator
CN206378680U (en) 3D cameras based on 360 degree of spacescans of structure light multimode and positioning
CN208653473U (en) Image capture device, 3D information comparison device, mating object generating means
CN209103318U (en) A kind of iris shape measurement system based on illumination
US11017562B2 (en) Imaging system and method for producing images using means for adjusting optical focus
CN208795167U (en) Illumination system for 3D information acquisition system
CN209203221U (en) A kind of iris dimensions measuring system and information acquisition system based on light control
CN215300796U (en) Binocular stereo vision processing device and system
CN213072921U (en) Multi-region image acquisition equipment, 3D information comparison and matching object generation device

Legal Events

Date Code Title Description
GR01 Patent grant