CN109443199B - 3D information measuring system based on intelligent light source - Google Patents


Info

Publication number
CN109443199B
Authority
CN
China
Prior art keywords
image
light source
light
information
collecting device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811213081.8A
Other languages
Chinese (zh)
Other versions
CN109443199A (en)
Inventor
左忠斌
左达宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianmu Love Vision (beijing) Technology Co Ltd
Original Assignee
Tianmu Love Vision (beijing) Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianmu Love Vision (beijing) Technology Co Ltd
Priority to CN201910862132.8A
Priority to CN201811213081.8A
Publication of CN109443199A
Application granted
Publication of CN109443199B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/71 Circuitry for evaluating the brightness variation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/74 Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means

Abstract

The invention provides a 3D information measuring system, acquisition system and method based on an intelligent light source. The measuring system includes: a light source device for providing illumination to a target object; an image acquisition device that provides a collection area and captures images of the target object; a detection device for detecting the characteristics of the light reflected from multiple regions of the target object; a control device that changes the emission characteristics of the light source device according to the reflected-light characteristics sent by the detection device, so that the illuminance received by different regions of the target object is substantially equal; an image processing device that receives the target object images transmitted by the image acquisition device and obtains 3D information of the target object; and a measuring device that measures the dimensions of the target object from its 3D information. The measurement approach of "first performing 3D synthesis, then measuring with the 3D point cloud data" is proposed for the first time, together with the recognition that the main reason such measurement has been slow and imprecise is the effect of illumination during image acquisition.

Description

3D information measuring system based on intelligent light source
Technical field
The present invention relates to the field of measurement technology, and in particular to the measurement of the length, shape and dimensions of objects.
Background art
When measuring objects, mechanical methods (e.g. graduated scales), electromagnetic methods (e.g. electromagnetic encoders), optical methods (e.g. laser rangefinders) and image-based methods are usually used. At present, however, the approach of first synthesizing the 3D point cloud data of an object and then measuring its length and topography from that data is rarely used. Although this approach can measure any dimension of an object once its 3D information has been obtained, a technical prejudice exists in the measurement field that such a method is complicated, slow and imprecise, and that the main reason is that the synthesis algorithms have not been sufficiently optimized. Improving speed and accuracy by controlling the light during 3D acquisition, synthesis and measurement has never been mentioned.
Although light control as a whole is not a brand-new technique and has been mentioned for ordinary photography, in the prior art it has never been applied to 3D acquisition, synthesis or measurement, nor have the special requirements and specific conditions of light control in the 3D acquisition, synthesis and measurement process been considered; it therefore cannot simply be carried over and used.
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide a 3D information measuring system, acquisition system and method based on an intelligent light source that overcome the above problems or at least partially solve them.
The present invention provides a 3D information measuring system based on an intelligent light source, including:
a light source device for providing illumination to a target object;
an image acquisition device that provides a collection area and captures images of the target object;
a detection device for detecting the characteristics of the light reflected from multiple regions of the target object;
a control device that changes the emission characteristics of the light source device according to the reflected-light characteristics sent by the detection device, so that the illuminance received by different regions of the target object is substantially equal;
an image processing device that receives the target object images transmitted by the image acquisition device and obtains 3D information of the target object;
a measuring device that measures the dimensions of the target object according to its 3D information.
The present invention also provides a 3D information acquisition system based on an intelligent light source, including:
a light source device for providing illumination to a target object;
an image acquisition device that provides a collection area and captures images of the target object;
a detection device for detecting the characteristics of the light reflected from multiple regions of the target object;
a control device that changes the emission characteristics of the light source device according to the reflected-light characteristics sent by the detection device, so that the illuminance received by different regions of the target object is substantially equal;
an image processing device that receives the target object images transmitted by the image acquisition device and obtains 3D information of the target object.
The present invention also provides a 3D information collecting method based on an intelligent light source:
the light source device is turned on and provides illumination to the target object;
the detection device detects the characteristics of the light reflected from different regions of the target object and sends them to the control device;
the control device controls the emission characteristics of the light source device according to the reflected-light characteristics of the different regions, so that the illuminance received by the different regions of the target object is substantially equal;
the image acquisition device captures images of the target object from multiple directions and sends them to the image processing device;
the image processing device receives the multiple target object images sent by the image acquisition device and synthesizes the 3D information of the target object.
Optionally, the image processing device and the control device are the same component; and/or the image acquisition device and the detection device are the same component.
Optionally, the emission characteristics are: luminous intensity, illuminance, color temperature, wavelength, emission direction, emission position and/or any combination thereof.
Optionally, the characteristics of the reflected light are: reflected light intensity, reflected illuminance, reflected-light color temperature, reflected-light wavelength, reflected-light position, uniformity of the reflected light, sharpness of the reflected image, clarity of the reflected image, contrast of the reflected image and/or any combination thereof.
Optionally, the light source device includes multiple sub-light sources, or is an integrated light source that can illuminate different regions of the target object from different directions.
Optionally, the multiple sub-light sources of the light source device are located at different positions around the target object.
Optionally, the multiple sub-light sources or the integrated light source are configured so that the illuminance received by the multiple regions of the target object is substantially equal.
Optionally, the image acquisition device captures multiple images of the target object through relative motion between the collection area and the target object.
Optionally, when multiple images are acquired, any two adjacent positions of the image acquisition device at least satisfy the following conditions:
H*(1 - cos b) = L*sin 2b;
a = m*b;
0 < m < 0.8;
where L is the distance from the image acquisition device to the target object, H is the actual size of the target object in the acquired image, a is the included angle between the optical axes of the image acquisition device at the two adjacent positions, and m is a coefficient.
Optionally, when multiple images are acquired, any three adjacent positions of the image acquisition device satisfy the condition that the three images acquired at those positions all contain a part representing the same region of the target object.
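For illustration only (and not as part of the claimed subject matter), the following minimal Python sketch solves the condition H*(1 - cos b) = L*sin 2b numerically and derives the adjacent-position optical-axis included angle a = m*b; the numeric values of L, H and m in the example are hypothetical.

```python
import math

def adjacent_camera_angle(L, H, m=0.5):
    """Numerically solve H*(1 - cos b) = L*sin(2b) for b on (0, pi/2) by
    bisection, then return the adjacent-position optical-axis angle a = m*b.
    L: camera-to-object distance; H: actual size of the object in the image;
    m: empirical coefficient (the text recommends 0 < m < 0.8)."""
    f = lambda b: H * (1.0 - math.cos(b)) - L * math.sin(2.0 * b)
    lo, hi = 1e-6, math.pi / 2.0          # f(lo) < 0 and f(pi/2) = H > 0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    b = 0.5 * (lo + hi)
    return m * b                          # angle a, in radians

# Hypothetical example: L = 500 mm, H = 200 mm, m = 0.5
a = adjacent_camera_angle(500.0, 200.0, 0.5)
print(f"optical-axis included angle a is about {math.degrees(a):.2f} degrees")
```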
Inventive points and technical effects
1. The measurement approach of "first performing 3D synthesis, then measuring with the 3D point cloud data" is proposed for the first time, together with the recognition that the main reason such measurement has been slow and imprecise is the effect of illumination during image acquisition.
2. It is proposed for the first time that, in order to guarantee the quality and speed of 3D acquisition and synthesis, the uniformity of the light received by and reflected from the target object must be considered, and that the illumination is adjusted through the cooperation of the control device and the detection device, thereby improving the quality and speed of 3D acquisition and synthesis.
3. It is proposed for the first time that, in 3D acquisition and synthesis, multiple light sources or an integrated light source emitting at multiple angles are used to guarantee relatively uniform illuminance, thereby improving measurement accuracy and speed.
4. It is proposed for the first time that, in 3D acquisition and synthesis, the illuminance over multiple regions and from multiple angles of the target object is made substantially equal (uniform), thereby improving the quality and speed of 3D acquisition and synthesis.
5. An optimal camera position condition during 3D acquisition is proposed, which further improves measurement accuracy and speed.
Detailed description of the invention
Various other advantages and benefits will become clear to those of ordinary skill in the art from the following detailed description of the preferred embodiments. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered a limitation of the present invention. Throughout the drawings, the same reference numbers refer to the same parts. In the drawings:
Fig. 1 is a schematic diagram of one embodiment of the 3D information measuring/acquisition system based on an intelligent light source in Embodiment 1 of the present invention;
Fig. 2 is a schematic diagram of another embodiment of the 3D information measuring/acquisition system based on an intelligent light source in Embodiment 1 of the present invention;
Fig. 3 is a schematic diagram of the 3D information measuring/acquisition system in Embodiment 2 of the present invention;
Fig. 4 is a schematic diagram of the camera position requirement for follow-up shooting in Embodiment 2 of the present invention;
Fig. 5 is a schematic diagram of a first implementation of single-camera rotating acquisition in Embodiment 3 of the present invention;
Fig. 6 is a schematic diagram of a second implementation of single-camera rotating acquisition in Embodiment 3 of the present invention;
Fig. 7 is a schematic diagram of a third implementation of single-camera rotating acquisition in Embodiment 3 of the present invention;
Fig. 8 is a schematic diagram of a fourth implementation of single-camera rotating acquisition in Embodiment 3 of the present invention;
Fig. 9 is a schematic diagram of a fifth implementation of single-camera rotating acquisition in Embodiment 3 of the present invention;
Fig. 10 is a schematic diagram of a sixth implementation of single-camera rotating acquisition in Embodiment 3 of the present invention;
Fig. 11 is a schematic diagram of a first implementation of the iris 3D information acquisition device using light deflection in Embodiment 4 of the present invention;
Fig. 12 is a schematic diagram of a second implementation of the iris 3D information acquisition device using light deflection in Embodiment 4 of the present invention;
Fig. 13 is a schematic diagram of a third implementation of the iris 3D information acquisition device using light deflection in Embodiment 4 of the present invention.
Description of reference numbers:
201 image acquisition device, 300 target object, 500 control device, 600 light source, 400 processor, 700 detection device, 101 track, 100 image processing device, 102 mechanical moving device, 202 rotating shaft, 203 shaft driving device, 204 lifting device, 205 lifting driving device, 4 control terminal, 211 light deflection unit, 212 light deflection driving unit.
Specific embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. On the contrary, these embodiments are provided so that the disclosure will be understood more thoroughly and its scope will be fully conveyed to those skilled in the art.
Embodiment 1 (light source control)
The system includes an image acquisition device 201, a target object 300, a control device 500, a light source 600, a processor 400 and a detection device 700. Please refer to Fig. 1 and Fig. 2.
The target object 300 may be a human organ or region carrying a biometric feature, such as an iris, a face or a hand, or an entire human body; it may also be a whole animal or plant or a region thereof, or a non-living object with an appearance profile (such as a watch).
The image acquisition device 201 may be any device capable of image acquisition, such as a multi-camera matrix, a fixed single camera, a video camera or a rotating single camera. It is used to acquire images of the target object 300. Two-dimensional face measurement and recognition can no longer meet the current requirements for high-precision, high-accuracy acquisition, measurement and recognition, so the present invention also proposes realizing three-dimensional iris acquisition using a virtual camera matrix. The image acquisition device 201 sends the collected pictures to the processor 400, which performs image processing and synthesis (for the specific method see the following embodiments) to form a 3D image and point cloud data.
The light source 600 is used to provide illumination to the target object 300, so that the region to be collected is illuminated and the illuminance is roughly the same. The light source 600 may include multiple sub-light sources 601, or may be an integrated light source 602 that illuminates different regions of the target object from different directions. Because of the concave-convex profile of the target object, the light source 600 needs to provide illumination from different directions in order to achieve uniform illuminance over the different regions of the target object 300. Depending on the region of the target object 300 to be collected, the light source 600 can be given different shapes. For example, to acquire 3D information of a hand, the sub-light sources 601 of the light source 600 should surround the hand and form a fully enclosing structure; to acquire 3D information of a face, the integrated light source 602 should form a half-enclosing structure around the face. It will be appreciated that neither the sub-light sources 601 nor the integrated light source 602 need exist only within a single cross-section, and the two can also be used in combination with each other. For example, when acquiring a face in 3D, if only a half-ring light source is used, the chin region of the face will be in shadow and the illumination will differ. In that case an integrated light source or sub-light sources need to be added below the existing half-ring light source 602 to illuminate the chin region.
Preferably, each sub-light source 601 should itself also meet a certain uniformity requirement. However, demanding too much uniformity from the sub-light sources 601 greatly increases cost. According to many experiments, each sub-light source preferably has uniform illuminance within half of its emission radius.
The detection device 700 is used to detect the illuminance reflected from different regions of the target object 300. For example, in face acquisition, the two sides of the nose receive less light because they are shielded by the nose, and the illuminance there is relatively low. The detection device 700 receives the light reflected from the two sides of the nose, measures its illuminance or reflected light intensity and sends it to the controller 500; at the same time it also sends the illuminance or reflected light intensity of the other parts of the face to the controller 500. The controller 500 compares the illuminance or light intensity of the multiple regions, identifies the regions where the illuminance/intensity is non-uniform (for example the two sides of the nose), and according to this information controls the corresponding sub-light sources 601 to increase their luminous intensity, for example the sub-light sources 601 that mainly illuminate the two sides of the nose. Preferably, the sub-light sources 601 include a moving device, and the controller 500 can increase or decrease the light intensity or illuminance of the corresponding region by controlling the position and angle of the sub-light sources. The detection device 700 detects the light intensity/illuminance reflected by the target object 300, which is used to approximate the light intensity/illuminance of the light source received by the target object 300. When the material of the target object is approximately uniform over its whole body, extensive experiments have verified that this approximation is acceptable (the error is within 10%), and it makes the control much simpler, preventing the control system from becoming overly complex. For example, when acquiring 3D information of a face, the light-reflecting property of the skin is relatively uniform, so the light intensity received by the face and the reflected light intensity have a relatively fixed relationship. It is therefore appropriate to use the detection device 700 to detect the light intensity/illuminance reflected by the face. This is also one of the inventive points of the present invention.
It will be appreciated that the reflected light intensity, reflected illuminance, reflected-light color temperature, reflected-light wavelength, reflected-light position, uniformity of the reflected light, sharpness of the reflected image, clarity of the reflected image, contrast of the reflected image and/or any combination thereof detected by the detection device 700 for the target object 300 can be used to control the luminous intensity, illuminance, color temperature, wavelength, emission direction, emission position and/or any combination thereof of the light source 600.
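A minimal sketch of this feedback, assuming the detection device reports one averaged illuminance value per region and each region is mainly lit by one sub-light source whose drive level can be set between 0 and 1; the function names, the proportional adjustment rule and the numeric defaults are illustrative assumptions rather than the claimed control algorithm:

```python
def balance_illumination(read_region_illuminance, set_sublight_level, levels,
                         tolerance=0.10, gain=0.5, max_rounds=20):
    """Iteratively trim sub-light-source drive levels until every region's
    reflected illuminance is within `tolerance` of the mean.
    `levels` maps region -> current drive level (0..1); the two callables are
    hypothetical hooks for the detection device 700 and the sub-light sources 601."""
    for _ in range(max_rounds):
        readings = {r: read_region_illuminance(r) for r in levels}
        mean = sum(readings.values()) / len(readings)
        worst = max(abs(v - mean) / mean for v in readings.values())
        if worst <= tolerance:
            break                                   # illuminance roughly equal
        for region, value in readings.items():
            # raise the sub-light source of dim regions, lower it for bright ones
            error = (mean - value) / mean
            levels[region] = min(1.0, max(0.0, levels[region] * (1.0 + gain * error)))
            set_sublight_level(region, levels[region])
    return levels
```

The 10% tolerance mirrors the error level described above as acceptable.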
The detection device 700 may therefore be a device dedicated to measuring the above parameters, or an image capture device such as a CCD, a CMOS sensor, a camera or a video camera. Preferably, the detection device 700 and the image acquisition device 201 may be the same component, i.e. the image acquisition device 201 also performs the function of the detection device 700 and detects the optical characteristics of the target object 300. Before image acquisition of the target object 300, the image acquisition device 201 is first used to detect whether the illumination condition of the target object 300 meets the requirements; a suitable illumination condition is then achieved by controlling the light source, after which the image acquisition device 201 starts acquiring pictures for 3D synthesis.
The processor 400 is used to synthesize the 3D information of the target object 300 from the multiple pictures acquired by the image acquisition device 201. The 3D information here includes a 3D image, a 3D point cloud, a 3D mesh, local 3D features, 3D dimensions and all parameters carrying 3D features of the target object. It will be appreciated that the controller 500 and the processor 400 may be the same device implementing both functions, or different devices implementing the control and the image processing respectively, depending on the function and performance of the actual chips.
In the prior art it has generally been considered that the main reason why 3D acquisition, synthesis and measurement are slow and imprecise is that the synthesis algorithms are not sufficiently optimized, and improving speed and accuracy through light control in 3D acquisition, synthesis and measurement has never been mentioned. In fact, optimizing the algorithms can indeed improve the speed and accuracy of synthesis, but the effect is still not ideal, and the speed and quality of synthesis differ greatly in different applications. Further optimizing the algorithms would require different optimizations for different occasions, which is difficult. Through many experiments the applicant found that optimizing the illumination conditions can greatly improve the synthesis speed and quality. This is very different from 2D information acquisition, where the illumination conditions only affect picture quality, do not affect acquisition speed, and the pictures can also be retouched afterwards. The applicant found through experiments that in 3D information acquisition, optimizing the illumination conditions significantly increases the synthesis speed. For details, see the following table.
Embodiment 2
In order to solve the above technical problems, one embodiment of the present invention provides a 3D information measuring/acquisition system. As shown in Fig. 3, it specifically includes: a track 101, an image acquisition device 201, an image processing device 100 and a mechanical moving device 102. The image acquisition device 201 is mounted on the mechanical moving device 102, and the mechanical moving device 102 can move along the track 101, so that the collection area of the image acquisition device 201 changes continuously. Over a period of time, multiple collection areas at different spatial positions are formed, constituting an acquisition matrix; however, at any given moment there is only one collection area, so the acquisition matrix is "virtual". Since the image acquisition device 201 usually consists of a camera, it is also called a virtual camera matrix. The image acquisition device 201 may also be a video camera, a CCD, a CMOS sensor, a camera, a mobile phone with an image acquisition function, a tablet or other electronic equipment.
The matrix points of the above virtual matrix are determined by the positions of the image acquisition device 201 when the target object images are acquired; two adjacent positions at least satisfy the following conditions:
H*(1 - cos b) = L*sin 2b;
a = m*b;
0 < m < 1.5;
where L is the distance from the image acquisition device 201 to the target object, usually the distance from the image acquisition device 201 at the first position to the collected region on the surface of the target object, and m is a coefficient.
H is the actual size of the target object in the acquired image. The image is usually the picture taken by the image acquisition device 201 at the first position, and the target object in that picture has a true geometric size (not the size in the picture); when this size is measured, it is measured along the direction from the first position to the second position. For example, if the first position and the second position are related by a horizontal movement, this size is measured along the horizontal transverse direction of the target object. For example, if the left end of the target object shown in the picture is A and the right end is B, then the straight-line distance from A to B on the target object is measured, and that is H. The measurement can be calculated from the distance between A and B in the picture combined with the focal length of the camera lens, or A and B can be marked on the target object and the straight-line distance AB measured directly by other measuring means.
a is the included angle between the optical axes of the image acquisition device at the two adjacent positions.
m is a coefficient.
Since the size and the concave-convex condition of objects differ, the value of a cannot be limited by a strict formula and needs to be determined empirically. According to many experiments, the value of m is preferably within 1.5, and more preferably within 0.8. Specific experimental data are given in the following table:
After the target object and the image acquisition device 201 are determined, the value of a can be calculated from the above empirical formula, and from the value of a the parameters of the virtual matrix, i.e. the positional relationship between the matrix points, can be determined.
In general, the virtual matrix is a one-dimensional matrix, for example multiple matrix points (acquisition positions) arranged along the horizontal direction. When some target objects are large, however, a two-dimensional matrix is needed; in that case two positions adjacent in the vertical direction likewise satisfy the above condition on a.
In some cases, even with the above empirical formula, it is not easy to determine the matrix parameter (the value of a). The matrix parameter then needs to be adjusted experimentally, as follows: a predicted matrix parameter a is calculated according to the above formula, and the camera is moved to the corresponding matrix points according to this parameter; for example, the camera takes picture P1 at position W1, moves to position W2 and takes picture P2. It is then compared whether picture P1 and picture P2 contain parts representing the same region of the target object, i.e. whether P1 ∩ P2 is non-empty (for example both contain the corner of a human eye, but photographed from different angles). If not, the value of a is readjusted, the camera is moved to a new position W2', and the above comparison step is repeated. If P1 ∩ P2 is non-empty, the camera continues to move to position W3 according to the value of a (adjusted or not) and takes picture P3, and it is again compared whether picture P1, picture P2 and picture P3 contain parts representing the same region of the target object, i.e. whether P1 ∩ P2 ∩ P3 is non-empty; please refer to Fig. 4. The multiple pictures are then used to synthesize 3D, and the 3D synthesis result is tested to verify that it meets the 3D information acquisition and measurement requirements. That is, the structure of the matrix is determined by the positions of the image acquisition device 201 when the multiple images are acquired, and any three adjacent positions satisfy the condition that the three images acquired at those positions all contain a part representing the same region of the target object.
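The experimental adjustment just described could be sketched as follows (Python with OpenCV); the capture and camera-motion helpers are hypothetical hardware hooks, and SIFT matching is used only as one possible way to test whether two pictures contain a part of the same region (P1 ∩ P2 non-empty):

```python
import cv2

def shares_region(img1, img2, min_matches=20):
    """Rough test of 'P1 and P2 show the same region': count good SIFT matches."""
    sift = cv2.SIFT_create()
    _, d1 = sift.detectAndCompute(img1, None)
    _, d2 = sift.detectAndCompute(img2, None)
    if d1 is None or d2 is None:
        return False
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]  # Lowe ratio
    return len(good) >= min_matches

def find_matrix_angle(capture, move_by_angle, a_pred, shrink=0.8, tries=5):
    """Start from the predicted angle a_pred; if two neighbouring shots do not
    overlap, reduce the step and retry (capture/move_by_angle are hardware hooks)."""
    a = a_pred
    p1 = capture()                # picture P1 at position W1
    for _ in range(tries):
        move_by_angle(a)          # move to the next candidate matrix point
        p2 = capture()            # picture P2 at position W2 (or W2')
        if shares_region(p1, p2):
            return a              # keep this matrix-point spacing
        a *= shrink               # readjust the value of a and re-move the camera
    raise RuntimeError("could not find an overlapping camera spacing")
```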
After the virtual matrix has obtained multiple target object images, the image processing device processes these images and synthesizes 3D. The 3D point cloud or image can be synthesized from the multiple images taken from multiple camera angles using a method of image stitching based on the feature points of adjacent images; other methods may also be used.
The image stitching method includes:
(1) Processing the multiple images and extracting the respective feature points. The features of the respective feature points in the multiple images can be described using SIFT (Scale-Invariant Feature Transform) feature descriptors. A SIFT feature descriptor has 128 feature description vectors, which can describe 128 aspects of any feature point over direction and scale, significantly improving the precision of the feature description, while the descriptor is spatially independent.
(2) Based on the extracted feature points of the multiple images, generating the feature point cloud data of the facial features and the feature point cloud data of the iris features respectively. This specifically includes:
(2-1) Matching the feature points of the multiple pictures according to the features of the respective feature points of each of the extracted images, and establishing a matched facial feature point data set; matching the feature points of the multiple pictures according to the features of the respective feature points of each of the extracted images, and establishing a matched iris feature point data set.
(2-2) According to the optical information of the camera and the different camera positions at which the multiple images were obtained, calculating the relative position of the camera at each position with respect to the feature points in space, and from these relative positions calculating the spatial depth information of the feature points in the multiple images. Similarly, the spatial depth information of the feature points in the multiple images can be calculated. The calculation may use the bundle adjustment method.
The spatial depth information calculated for a feature point may include spatial position information and color information, namely the X-axis coordinate of the feature point in its spatial position, the Y-axis coordinate, the Z-axis coordinate, the value of the R channel of the feature point's color information, the value of the G channel, the value of the B channel, the value of the Alpha channel, and so on. In this way the generated feature point cloud data contains the spatial position information and the color information of the feature points, and the format of the feature point cloud data may be as follows:
X1 Y1 Z1 R1 G1 B1 A1
X2 Y2 Z2 R2 G2 B2 A2
……
Xn Yn Zn Rn Gn Bn An
where Xn denotes the X-axis coordinate of the feature point in its spatial position; Yn denotes the Y-axis coordinate; Zn denotes the Z-axis coordinate; Rn denotes the value of the R channel of the feature point's color information; Gn denotes the value of the G channel; Bn denotes the value of the B channel; and An denotes the value of the Alpha channel of the feature point's color information.
(2-3) Generating the feature point cloud data of the target object's features from the matched feature point data sets of the multiple images and the spatial depth information of the feature points.
(2-4) Constructing a 3D model of the target object from the feature point cloud data, so as to realize the acquisition of the target object's point cloud data.
(2-5) Attaching the collected color and texture of the target object to the point cloud data to form a 3D image of the target object.
All images in a group of images may be used to synthesize the 3D image, or higher-quality images may be selected from them for the synthesis.
The above stitching method is only an illustrative example; it is not limited to this, and any method that generates a three-dimensional image from multiple multi-angle two-dimensional images may be used.
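As one concrete, much simplified reading of steps (1) to (2-5), the sketch below (Python with OpenCV and NumPy) extracts SIFT features from two neighbouring pictures, recovers the relative camera pose from the essential matrix, triangulates the matched points and attaches colors, producing rows in the X Y Z R G B format described above. A real pipeline would match all views and refine them with bundle adjustment, which is omitted here; the camera intrinsic matrix K is an assumed input.

```python
import cv2
import numpy as np

def two_view_point_cloud(img1, img2, K):
    """Sparse colored 3D points from two overlapping views; a simplified
    stand-in for the full multi-image matching + bundle-adjustment pipeline."""
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(g1, None)
    k2, d2 = sift.detectAndCompute(g2, None)
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]

    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])

    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at the origin
    P2 = K @ np.hstack([R, t])                           # relative pose of the second
    inl = mask.ravel() > 0
    X = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
    X = (X[:3] / X[3]).T                                 # homogeneous -> 3D points

    # attach color sampled from the first image (OpenCV stores pixels as B, G, R)
    colors = [img1[int(p[1]), int(p[0])][::-1] for p in pts1[inl]]
    return [(x, y, z, int(r), int(g), int(b))
            for (x, y, z), (r, g, b) in zip(X, colors)]
```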
Embodiment 3 (single-axis rotating iris acquisition)
A small-range, small-depth target object 3 is one whose lateral size is small compared with the camera's acquisition range and whose size along the camera's depth-of-field direction is small, i.e. the target object 3 carries little information in the depth direction. In such applications, although a single-camera system that moves over a large range by means of a track, a mechanical arm or the like can likewise acquire multi-angle images of the target object 3 to synthesize a 3D point cloud or image, such equipment is relatively complex, which reduces reliability; the large movements lengthen the acquisition time; and the large volume makes it unsuitable for many occasions (such as access control systems).
Small-range, small-depth target objects 3 have their own peculiar characteristics: they require the acquisition/measuring device to be small, highly reliable and fast, and in particular the required acquisition range is small (a large-depth target object 3 requires a large acquisition range, in particular requiring the camera to be at different positions in order to acquire all the information). The applicant is the first to propose this application and occasion and, in view of these characteristics, to realize the 3D point cloud and image acquisition of the target object 3 with the most concise rotating device, making full use of the fact that the target object 3 only requires a small acquisition range.
The 3D information acquisition system includes: an image acquisition device 201, for acquiring a group of images of the target object 3 through relative motion between the collection area of the image acquisition device 201 and the target object 3; and a collection area moving device, for driving the collection area of the image acquisition device 201 and the target object 3 to move relative to each other. The collection area moving device is a rotating device, so that the image acquisition device 201 rotates about a central axis.
Referring to Fig. 5 to Fig. 10, the image acquisition device 201 is a camera. The camera is fixedly mounted on a rotating seat through a camera fixing frame; a rotating shaft 202 is connected under the rotating seat, and the rotating shaft 202 is driven to rotate by a shaft driving device 203. The shaft driving device 203 and the camera are both connected to a control terminal 4, which controls the shaft driving device 203 to perform the driving and the camera to shoot. In addition, the rotating shaft 202 can also be directly and fixedly connected to the image acquisition device 201 to drive the camera to rotate.
Unlike traditional 3D acquisition, the target object 3 of the present application is a small-range 3D object. There is therefore no need to reproduce the target on a large scale, but its main surface features must be acquired, measured and compared with high precision, i.e. the measurement accuracy requirement is high. The camera rotation angle does not need to be large, but the rotation angle must be accurately controlled. The invention provides an angle acquisition device on the rotating shaft 202 and/or the rotating seat; the shaft driving device 203 drives the rotating shaft 202 and the camera to rotate by a set number of degrees, and the angle acquisition device measures the degree of rotation and feeds the measured value back to the control terminal 4, where it is compared with the set value to guarantee the rotation accuracy. The shaft driving device 203 drives the rotating shaft 202 through two or more angles; driven by the rotating seat, the camera rotates circumferentially about the central axis and completes shooting from different angles, and the images taken at the different angles are sent to the control terminal 4, which processes the data and generates the final three-dimensional image. The images may also be sent to a processing unit to realize the 3D synthesis (for the specific synthesis method see the image stitching method below); the processing unit may be a stand-alone device, another device with a processing function, or remote equipment. The camera may also be connected to an image pre-processing unit that pre-processes the images. The target object 3 is a face, and it is guaranteed that the target object 3 remains within the shooting collection area during the camera rotation.
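The rotate, verify and shoot cycle described above can be sketched as follows; rotate_to, read_angle and capture are hypothetical driver hooks for the shaft driving device 203, the angle acquisition device and the camera, and the angle tolerance is an assumed value:

```python
def rotate_and_capture(rotate_to, read_angle, capture, angles_deg, tol_deg=0.1):
    """Drive the rotating shaft to each set angle, verify it against the angle
    acquisition device's feedback, and take one picture per verified position."""
    images = []
    for target in angles_deg:
        rotate_to(target)                       # shaft driving device 203
        actual = read_angle()                   # angle acquisition device feedback
        if abs(actual - target) > tol_deg:
            rotate_to(target)                   # one corrective re-drive
            actual = read_angle()
        images.append((actual, capture()))      # shoot at the verified angle
    return images                               # handed on for 3D synthesis
```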
The control terminal 4 may be a processor, a computer, a remote control center, etc.
The image acquisition device 201 may be replaced by a video camera, a CCD, an infrared camera or another image acquisition device. Meanwhile, the image acquisition device 201 can also be integrally mounted on a support, such as a tripod or a fixed platform.
The shaft driving device 203 may be a brushless motor, a high-precision stepper motor, an angular encoder, a rotating electric machine, etc.
Referring to Fig. 6, the rotating shaft 202 is located below the image acquisition device 201 and is directly connected to it; in this case the central axis intersects the image acquisition device 201. In Fig. 7 the central axis is located on the lens side of the camera of the image acquisition device 201; the camera rotates about the central axis and shoots, and a rotating connecting arm is provided between the rotating shaft 202 and the rotating seat. In Fig. 8 the central axis is located on the side opposite to the lens of the camera of the image acquisition device 201; the camera rotates about the central axis and shoots, a rotating connecting arm is provided between the rotating shaft 202 and the rotating seat, and the connecting arm can be given an upward or downward bent structure as required. In Fig. 9 the central axis is located on the side opposite to the lens of the camera of the image acquisition device 201 and is arranged horizontally; this structure allows the camera to change its angle in the vertical direction and is suitable for shooting target objects 3 with special features in the vertical direction, the shaft driving device 203 driving the rotating shaft 202 to rotate and the swinging connecting arm to move up and down. In Fig. 10 the shaft driving device 203 further includes a lifting device 204 and a lifting driving device 205 for controlling the movement of the lifting device 204; the lifting driving device 205 is connected to the control terminal 4, which increases the shooting area range of the 3D information acquisition device.
This 3D information acquisition device occupies little space, and its shooting efficiency is obviously higher than that of systems that need to move the camera over a large range; it is especially suitable for application scenarios requiring high-precision 3D information acquisition of small-range, small-depth targets.
Embodiment 4 (light-deflection iris acquisition)
Referring to Fig. 11 to Fig. 13, the 3D information acquisition system includes: an image acquisition device 201, for acquiring a group of images of the target object 3 through relative motion between the collection area of the image acquisition device 201 and the target object 3; and a collection area moving device, for driving the collection area of the image acquisition device 201 and the target object 3 to move relative to each other. The collection area moving device is an optical scanning device, so that the collection area of the image acquisition device 201 and the target object 3 move relative to each other without the image acquisition device 201 being moved or rotated.
Referring to Fig. 11, the collection area moving device further includes a light deflection unit 211; optionally, the light deflection unit 211 is driven by a light deflection driving unit 212. The image acquisition device 201 is a camera, which is fixedly mounted; its physical position does not change, and it neither moves nor rotates. The light deflection unit 211 causes the collection area of the camera to change, so that the target object 3 and the collection area change relative to each other. During this process, the light deflection unit 211 can be driven by the light deflection driving unit 212 so that light from different directions enters the image acquisition device 201. The light deflection driving unit 212 may be a driving device that controls the linear motion or the rotation of the light deflection unit 211. The light deflection driving unit 212 and the camera are both connected to the control terminal 4, which controls the driving and the camera shooting.
It can also be understood that, unlike traditional 3D acquisition techniques, the target object 3 of the present application is a small-range 3D object. There is therefore no need to reproduce the target on a large scale, but its main surface features must be acquired, measured and compared with high precision, i.e. the measurement accuracy requirement is high. The displacement or rotation of the light deflection unit 211 of the present invention therefore does not need to be large, but the accuracy requirement must be guaranteed and the target object 3 must remain within the shooting range. The invention provides an angle acquisition device and/or a displacement acquisition device on the light deflection unit 211; when the light deflection driving unit 212 drives the light deflection unit 211 to move, the angle acquisition device and/or displacement acquisition device measure the degree of rotation and/or the linear displacement and feed the measurement back to the control terminal 4, where it is compared with the preset parameter to guarantee accuracy. When the light deflection driving unit 212 drives the light deflection unit 211 to rotate and/or translate, the camera completes two or more shots corresponding to the different position states of the light deflection unit 211; the two or more shot images are sent to the control terminal 4, which processes the data and generates the final three-dimensional image. The camera may also be connected to an image pre-processing unit that pre-processes the images.
The control terminal 4 may be a processor, a computer, a remote control center, etc.
The image acquisition device 201 may be replaced by a video camera, a CCD, an infrared camera or another image acquisition device. Meanwhile, the image acquisition device 201 is fixed on a mounting platform and its position does not change.
The light deflection driving unit 212 may be a brushless motor, a high-precision stepper motor, an angular encoder, a rotating electric machine, etc.
Referring to Fig. 11, the light deflection unit 211 is a reflecting mirror. It will be understood that one or more reflecting mirrors can be provided according to the measurement needs, one or more light deflection driving units 212 can be provided correspondingly, and the angle of the plane mirror is controlled to change so that light from different directions enters the image acquisition device 201. In Fig. 12 the light deflection unit 211 is a lens group; the group may contain one or more lenses, one or more light deflection driving units 212 can be provided correspondingly, and the lens angle is controlled to change so that light from different directions enters the image acquisition device 201. In Fig. 13 the light deflection unit 211 includes a multi-surface rotating mirror.
In addition, the light deflection unit 211 may be a DMD, i.e. the deflection direction of the DMD mirrors can be controlled with an electrical signal so that light from different directions enters the image acquisition device 201. Since the DMD is very small, it can significantly reduce the size of the whole equipment, and since the DMD can rotate at high speed, it can greatly increase the measurement and acquisition speed. This is also one of the inventive points of the present invention.
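A corresponding sketch for the light-deflection variant, assuming the deflection unit (mirror, lens group or DMD) exposes a settable deflection value and readable feedback; all hooks and the tolerance are illustrative assumptions, and the loop mirrors the rotation cycle of Embodiment 3:

```python
def deflection_scan(set_deflection, read_deflection, capture, positions, tol=0.05):
    """Step the light deflection unit 211 through a list of deflection settings,
    confirm each one via the angle/displacement feedback, and capture an image
    while the camera itself stays fixed."""
    shots = []
    for p in positions:
        set_deflection(p)                       # light deflection driving unit 212
        if abs(read_deflection() - p) > tol:
            set_deflection(p)                   # simple corrective retry
        shots.append((p, capture()))
    return shots                                # handed on for 3D synthesis
```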
It will be appreciated that, although the above two embodiments are written separately, realizing camera rotation and light deflection at the same time is also possible.
A 3D information measuring device includes the 3D information acquisition device. The 3D information acquisition device obtains the 3D information and sends it to the control terminal 4, and the control terminal 4 calculates and analyzes the acquired information to obtain the spatial coordinates of all the feature points on the target object 3. It includes a 3D information image stitching module, a 3D information pre-processing module, a 3D information algorithm selection module, a 3D information calculation module and a spatial coordinate point 3D information reconstruction module. These modules are used to process the data obtained by the 3D information acquisition device and to generate the measurement result, which may be a 3D point cloud image. The measurement includes geometric parameters such as length, profile, area and volume.
A 3D information comparison device includes the 3D information acquisition device. The 3D information acquisition device obtains the 3D information and sends it to the control terminal 4, and the control terminal 4 calculates and analyzes the acquired information to obtain the spatial coordinates of all the feature points on the target object 3, compares them with preset values, and judges the state of the measured target. In addition to the modules of the aforementioned 3D information measuring device, the 3D information comparison device further includes a preset 3D information extraction module, an information comparison module, a comparison result output module and a prompt module. The comparison device can compare the measurement result of the measured target object 3 with the preset values, so as to facilitate result inspection and reprocessing in production. When the comparison result finds that the deviation between the measured target object 3 and the preset value is significantly greater than the threshold, a warning prompt is issued.
A matching-object generating device for the target object 3 can generate, from the 3D information of at least one region of the target object 3 acquired by the 3D information acquisition device, a matching object that fits the corresponding region of the target object 3. Specifically, the present invention is applied to the production of sports equipment or medical auxiliary equipment. Human body structures have individual differences, so a uniform matching object cannot satisfy everyone's needs. The 3D information acquisition device of the present invention obtains an image of a person's elbow, and its three-dimensional structure is input into the matching-object generating device to produce an elbow support sleeve that helps the elbow recover. The matching-object generating device may be an industrial molding machine, a 3D printer or any other production equipment known to those skilled in the art; configured with the 3D information acquisition device of the present application, it realizes rapid customized production.
Although the present invention gives the above various applications (measurement, comparison, generation), it should be understood that the present invention can also be used independently as a 3D information collecting device.
A 3D information collecting method, comprising:
S1. The image acquisition device 201 acquires a group of images of the target object 3 while the collection area of the image acquisition device 201 and the target object 3 move relative to each other;
S2. The collection area moving device drives the collection area of the image acquisition device 201 and the target object 3 to move relative to each other by one of the following two schemes:
S21. the collection area moving device is a rotating device, so that the image acquisition device 201 rotates about a central axis;
S22. the collection area moving device is an optical scanning device, so that the collection area of the image acquisition device 201 and the target object 3 move relative to each other without the image acquisition device 201 moving or rotating.
The 3D point cloud or image is synthesized from the multiple images taken from multiple camera angles; a method of image stitching based on the feature points of adjacent images can be used, and other methods may also be used.
The image stitching method includes:
(1) Processing the multiple images and extracting the respective feature points. The features of the respective feature points in the multiple images can be described using SIFT (Scale-Invariant Feature Transform) feature descriptors. A SIFT feature descriptor has 128 feature description vectors, which can describe 128 aspects of any feature point over direction and scale, significantly improving the precision of the feature description, while the descriptor is spatially independent.
(2) Based on the extracted feature points of the multiple images, generating the feature point cloud data of the facial features and the feature point cloud data of the iris features respectively. This specifically includes:
(2-1) Matching the feature points of the multiple images according to the features of the respective feature points of each of the extracted images, and establishing a matched facial feature point data set; matching the feature points of the multiple images according to the features of the respective feature points of each of the extracted images, and establishing a matched iris feature point data set.
(2-2) According to the optical information of the camera and the different camera positions at which the multiple images were obtained, calculating the relative position of the camera at each position with respect to the feature points in space, and from these relative positions calculating the spatial depth information of the feature points in the multiple images. Similarly, the spatial depth information of the feature points in the multiple images can be calculated. The calculation may use the bundle adjustment method.
The spatial depth information calculated for a feature point may include spatial position information and color information, namely the X-axis coordinate of the feature point in its spatial position, the Y-axis coordinate, the Z-axis coordinate, the value of the R channel of the feature point's color information, the value of the G channel, the value of the B channel, the value of the Alpha channel, and so on. In this way the generated feature point cloud data contains the spatial position information and the color information of the feature points, and the format of the feature point cloud data may be as follows:
X1 Y1 Z1 R1 G1 B1 A1
X2 Y2 Z2 R2 G2 B2 A2
……
Xn Yn Zn Rn Gn Bn An
where Xn denotes the X-axis coordinate of the feature point in its spatial position; Yn denotes the Y-axis coordinate; Zn denotes the Z-axis coordinate; Rn denotes the value of the R channel of the feature point's color information; Gn denotes the value of the G channel; Bn denotes the value of the B channel; and An denotes the value of the Alpha channel of the feature point's color information.
(2-3) Generating the feature point cloud data of the features of the target object 3 from the matched feature point data sets of the multiple images and the spatial depth information of the feature points.
(2-4) Constructing a 3D model of the target object from the feature point cloud data, so as to realize the acquisition of the point cloud data of the target object 3.
(2-5) Attaching the collected color and texture of the target object 3 to the point cloud data to form a 3D image of the target object.
All images in a group of images may be used to synthesize the 3D image, or higher-quality images may be selected from them for the synthesis.
Embodiment 5
When the matrix is formed, it is also necessary to ensure that the proportion of the target object in the picture taken by the camera at each matrix point is appropriate and that the shot is sharp. Therefore, during the formation of the matrix, the camera needs to zoom and focus at the matrix points.
(1) Zooming
After the camera photographs the target object, the proportion of the target object in the camera's field of view is estimated and compared with a predetermined value. If the proportion is too large or too small, zooming is required. The zooming method may be: an additional displacement device moves the image acquisition device 201 in its radial direction, bringing the image acquisition device 201 closer to or farther from the target object, so as to ensure that at each matrix point the proportion of the target object in the picture remains basically unchanged.
A distance measuring device is also included, which can measure in real time the distance (object distance) from the image acquisition device 201 to the target object. The relationship between the object distance, the proportion of the target object in the picture and the focal length can be tabulated; the object distance can then be determined by table lookup from the focal length and the proportion of the target object in the picture, thereby determining the matrix points.
In some cases, where the region of the target object or the target object itself changes relative to the camera at different matrix points, the proportion of the target object in the picture can also be kept constant by adjusting the focal length.
(2) Auto-focusing
During the formation of the virtual matrix, the distance measuring device measures in real time the distance (object distance) h(x) from the camera to the target object and sends the measurement result to the image processing device 100. The image processing device 100 looks up the object-distance/focal-length table, finds the corresponding focal length value and sends a focusing signal to the camera 201, and the camera's ultrasonic motor drives the lens to perform rapid focusing. In this way, rapid focusing can be realized without adjusting the position of the image acquisition device 201 and without significantly adjusting its lens focal length, ensuring that the image acquisition device 201 shoots sharply. This is also one of the inventive points of the present invention. Of course, in addition to the distance-measuring method, focusing can also be performed by comparing image contrast.
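The object-distance to focal-length lookup described above might be sketched as follows; the table values and the driver hooks (rangefinder read-out and lens focusing command) are illustrative assumptions, not calibration data from the patent:

```python
import bisect

# Hypothetical calibration table: (object distance in mm, focus setting)
FOCUS_TABLE = [(300, 1.20), (500, 2.10), (800, 3.40), (1200, 4.90)]

def focus_for_distance(object_distance_mm):
    """Linear interpolation in the object-distance / focal-length table."""
    dists = [d for d, _ in FOCUS_TABLE]
    i = bisect.bisect_left(dists, object_distance_mm)
    if i <= 0:
        return FOCUS_TABLE[0][1]
    if i >= len(FOCUS_TABLE):
        return FOCUS_TABLE[-1][1]
    (d0, f0), (d1, f1) = FOCUS_TABLE[i - 1], FOCUS_TABLE[i]
    w = (object_distance_mm - d0) / (d1 - d0)
    return f0 + w * (f1 - f0)

def refocus(read_distance, set_focus):
    """At each matrix point: read the rangefinder, look up, drive the lens motor."""
    set_focus(focus_for_distance(read_distance()))
```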
The target object described in the present invention may be a single physical object or a combination of multiple objects.
The 3D information of the target object includes a 3D image, a 3D point cloud, a 3D mesh, local 3D features, 3D dimensions and all parameters carrying 3D features of the target object.
In the present invention, 3D or three-dimensional refers to information in the three directions XYZ, in particular including depth information; this is essentially different from having only two-dimensional plane information, and also essentially different from definitions that are called 3D, panoramic, holographic or stereoscopic but actually include only two-dimensional information and in particular no depth information.
The collection area described in the present invention refers to the range that the image acquisition device (e.g. a camera) can shoot.
The image acquisition device in the present invention may be a CCD, a CMOS sensor, a camera, a video camera, an industrial camera, a monitor, a webcam, a mobile phone, a tablet, a notebook, a mobile terminal, a wearable device, smart glasses, a smart watch, a smart bracelet or any other device with an image acquisition function.
For example, in one specific embodiment, the reflection-free iris information acquisition system uses a commercially available WP-UC2000 industrial camera, with the specific parameters shown in the following table:
The processor or control terminal uses an off-the-shelf computer, such as a Dell Precision 3530, with the following specific parameters:
The mechanical moving device uses a customized moving guide rail system TM-01, with the following specific parameters:
Pan-tilt head: a three-axis pan-tilt head with a reserved camera mechanical interface and a computer control interface;
Guide rail: an arc-shaped guide rail, mechanically connected to and cooperating with the pan-tilt head;
Servo motor: model: 130-06025, rated torque: 6 Nm, encoder type: 2500-line incremental, cable length: 300 cm, rated power: 1500 W, rated voltage: 220 V, rated current: 6 A, rated speed: 2500 rpm;
Control mode: controlled by an upper computer (PC) or by other means.
The 3D information of multiple regions of the target object obtained in the above embodiments can be used for comparison, for example for identity recognition. The 3D information of a person's face and iris is first acquired with the solution of the present invention and stored on a server as standard data. When identity verification is needed, for example to authorize a payment or open a door, the 3D acquisition device is used to acquire the 3D information of the face and iris again and compare it with the standard data; if the comparison succeeds, the next action is allowed to proceed. It will be appreciated that such comparison can also be used for authenticating fixed objects such as antiques and artworks: the 3D information of multiple regions of the antique or artwork is first acquired as standard data, and whenever authentication is required the 3D information of those regions is acquired again and compared with the standard data, so as to distinguish the genuine from the fake.
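A minimal sketch of such a comparison, assuming the newly acquired data and the stored standard data are both available as N x 3 NumPy point-cloud arrays already registered into a common coordinate frame (registration is omitted here), and with a hypothetical distance threshold:

import numpy as np

def clouds_match(acquired, standard, threshold=1.5):
    """Compare an acquired 3D point cloud with stored standard data: for every acquired
    point take the nearest standard point, and declare a match when the mean
    nearest-neighbour distance falls below the (hypothetical) threshold.
    Suitable for small clouds; the full distance matrix is built in memory."""
    dists = np.linalg.norm(acquired[:, None, :] - standard[None, :, :], axis=2)
    return dists.min(axis=1).mean() < threshold

# Usage: verified = clouds_match(face_cloud_now, face_cloud_on_server)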
The 3D information of multiple regions of the target object obtained in the above embodiments can be used to design, produce and manufacture objects matching the target. For example, with 3D data of a person's head, a better-fitting hat can be designed and manufactured for that person; with the head data and 3D data of the eyes, well-fitting glasses can be designed and manufactured for that person.
The 3D information of the target object obtained in the above embodiments can be used to measure the geometric dimensions and the outer contour of the target object.
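For illustration only, one simple way to derive overall dimensions from a 3D point cloud is an axis-aligned bounding box; the disclosure does not prescribe a particular measurement algorithm, and the array name below is hypothetical:

import numpy as np

def bounding_box_dimensions(point_cloud):
    """Return the (x, y, z) extents of the axis-aligned bounding box of an
    N x 3 point cloud, in the same units as the cloud itself."""
    extents = point_cloud.max(axis=0) - point_cloud.min(axis=0)
    return tuple(extents)

# Usage: length, width, height = bounding_box_dimensions(object_cloud)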
Numerous specific details are set forth in the description provided herein. It will be appreciated, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be appreciated that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the invention various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the devices of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units or components in an embodiment may be combined into one module, unit or component, and they may furthermore be divided into a plurality of sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include certain features included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components according to embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-described embodiments illustrate rather than limit the invention, and that those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second and third does not indicate any ordering; these words may be interpreted as names.
So far, those skilled in the art will appreciate that, although a number of exemplary embodiments of the invention have been shown and described in detail herein, many other variations or modifications consistent with the principles of the invention can still be directly determined or derived from the present disclosure without departing from the spirit and scope of the invention. Therefore, the scope of the present invention should be understood and deemed to cover all such other variations or modifications.

Claims (24)

1. A 3D information measuring system based on an intelligent light source, characterized by comprising:
a light source device, configured to provide illumination to a target object;
an image acquisition device, configured to provide a pickup area and acquire images of the target object;
a detection device, configured to detect characteristics of the light reflected from multiple regions of the target object;
a control device, configured to change the light-emitting characteristics of the light source device according to the characteristics of the reflected light of the target object sent by the detection device, so that the illuminance received by different regions of the target object is substantially equal;
an image processing device, configured to receive the target object images sent by the image acquisition device and obtain 3D information of the target object; and
a measuring device, configured to measure the size of the target object according to its 3D information;
wherein the image acquisition device acquires multiple images of the target object through relative motion between the pickup area and the target object, and when the multiple images are acquired the positions of the image acquisition device are such that at least two adjacent positions satisfy the following conditions:
H*(1-cosb) = L*sin2b;
a = m*b;
0 < m < 0.8;
where L is the distance from the image acquisition device to the target object, H is the actual size of the target object in the acquired image, a is the included angle between the optical axes of the image acquisition device at the two adjacent positions, and m is a coefficient.
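As an illustration only (not part of the claim language), the condition above can be checked numerically; the sketch below assumes that sin2b denotes sin(2b), that the angles a and b are expressed in radians, and that b is taken as the non-trivial root of H*(1-cosb) = L*sin2b:

import math

# Illustrative sketch only; reading sin2b as sin(2b) is an assumption.
def solve_b(L, H, lo=1e-6, hi=math.pi / 2, iters=100):
    """Solve H*(1 - cos(b)) = L*sin(2*b) for b in (0, pi/2) by bisection."""
    f = lambda b: H * (1.0 - math.cos(b)) - L * math.sin(2.0 * b)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def adjacent_positions_ok(a, L, H):
    """Return True if the included angle a (radians) between the optical axes at two
    adjacent positions satisfies a = m*b with 0 < m < 0.8."""
    m = a / solve_b(L, H)
    return 0.0 < m < 0.8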
2. The 3D information measuring system based on an intelligent light source according to claim 1, characterized in that the image processing device and the control device are one and the same component; and/or the image acquisition device and the detection device are one and the same component.
3. The 3D information measuring system based on an intelligent light source according to claim 1, characterized in that the light-emitting characteristics are: luminous intensity, luminous illuminance, luminous color temperature, luminous wavelength, luminous direction, luminous position, and/or any combination thereof.
4. The 3D information measuring system based on an intelligent light source according to claim 1, characterized in that the characteristics of the reflected light are: reflected light intensity, reflected illuminance, reflected light color temperature, reflected light wavelength, reflected light position, reflected light uniformity, sharpness of the reflected image, clarity of the reflected image, contrast of the reflected image, and/or any combination thereof.
5. The 3D information measuring system based on an intelligent light source according to claim 1, characterized in that the light source device comprises a plurality of sub-light sources, or is an integrated light source capable of providing illumination to different regions of the target object from different directions.
6. The 3D information measuring system based on an intelligent light source according to claim 5, characterized in that the plurality of sub-light sources of the light source device are located at different positions around the target object.
7. The 3D information measuring system based on an intelligent light source according to claim 5, characterized in that the plurality of sub-light sources or the integrated light source are configured so that the illuminance of the illumination received by the multiple regions of the target object is substantially equal.
8. The 3D information measuring system based on an intelligent light source according to claim 1, characterized in that, when the multiple images are acquired, three adjacent positions of the image acquisition device are such that the three images acquired at the corresponding positions each at least contain a part representing the same region of the target object.
9. A 3D information acquisition system based on an intelligent light source, characterized by comprising:
a light source device, configured to provide illumination to a target object;
an image acquisition device, configured to provide a pickup area and acquire images of the target object;
a detection device, configured to detect characteristics of the light reflected from multiple regions of the target object;
a control device, configured to change the light-emitting characteristics of the light source device according to the characteristics of the reflected light of the target object sent by the detection device, so that the illuminance received by different regions of the target object is substantially equal; and
an image processing device, configured to receive the target object images sent by the image acquisition device and obtain 3D information of the target object; wherein the image acquisition device acquires multiple images of the target object through relative motion between the pickup area and the target object;
and when the multiple images are acquired the positions of the image acquisition device are such that at least two adjacent positions satisfy the following conditions:
H*(1-cosb) = L*sin2b;
a = m*b;
0 < m < 0.8;
where L is the distance from the image acquisition device to the target object, H is the actual size of the target object in the acquired image, a is the included angle between the optical axes of the image acquisition device at the two adjacent positions, and m is a coefficient.
10. The 3D information acquisition system based on an intelligent light source according to claim 9, characterized in that the image processing device and the control device are one and the same component; and/or the image acquisition device and the detection device are one and the same component.
11. The 3D information acquisition system based on an intelligent light source according to claim 9, characterized in that the light-emitting characteristics are: luminous intensity, luminous illuminance, luminous color temperature, luminous wavelength, luminous direction, luminous position, and/or any combination thereof.
12. The 3D information acquisition system based on an intelligent light source according to claim 9, characterized in that the characteristics of the reflected light are: reflected light intensity, reflected illuminance, reflected light color temperature, reflected light wavelength, reflected light position, reflected light uniformity, sharpness of the reflected image, clarity of the reflected image, contrast of the reflected image, and/or any combination thereof.
13. The 3D information acquisition system based on an intelligent light source according to claim 9, characterized in that the light source device comprises a plurality of sub-light sources, or is an integrated light source capable of providing illumination to different regions of the target object from different directions.
14. The 3D information acquisition system based on an intelligent light source according to claim 13, characterized in that the plurality of sub-light sources of the light source device are located at different positions around the target object.
15. The 3D information acquisition system based on an intelligent light source according to claim 13, characterized in that the plurality of sub-light sources or the integrated light source are configured so that the illuminance of the illumination received by the multiple regions of the target object is substantially equal.
16. The 3D information acquisition system based on an intelligent light source according to claim 9, characterized in that, when the multiple images are acquired, three adjacent positions of the image acquisition device are such that the three images acquired at the corresponding positions each at least contain a part representing the same region of the target object.
17. A 3D information acquisition method based on an intelligent light source, characterized by comprising:
turning on a light source device to provide illumination to a target object;
detecting, by a detection device, the characteristics of the light reflected from different regions of the target object, and sending them to a control device;
controlling, by the control device, the light-emitting characteristics of the light source device according to the characteristics of the reflected light of the different regions of the target object, so that the illuminance received by the different regions of the target object is substantially equal;
acquiring, by an image acquisition device, images of the target object from multiple directions, and sending them to an image processing device;
receiving, by the image processing device, the multiple images of the target object sent by the image acquisition device and synthesizing the 3D information of the target object; wherein the image acquisition device acquires the multiple images of the target object through relative motion between the pickup area and the target object, and when the multiple images are acquired the positions of the image acquisition device are such that at least two adjacent positions satisfy the following conditions:
H*(1-cosb) = L*sin2b;
a = m*b;
0 < m < 0.8;
where L is the distance from the image acquisition device to the target object, H is the actual size of the target object in the acquired image, a is the included angle between the optical axes of the image acquisition device at the two adjacent positions, and m is a coefficient.
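Purely as an illustration of the control step recited above (not part of the claim language), the following sketch iteratively adjusts per-region sub-light-source intensities until the reflected illuminance measured for each region is roughly equal; the 'sources' and 'detector' interfaces and all numeric constants are hypothetical:

# Illustrative sketch only; 'sources', 'detector' and the constants below are hypothetical.
def balance_illumination(sources, detector, gain=0.2, tolerance=0.05, max_iters=50):
    """Adjust each sub-light source until the illuminance reflected from every
    region is within 'tolerance' of the mean over all regions."""
    for _ in range(max_iters):
        readings = [detector.reflected_illuminance(region)
                    for region in range(len(sources))]
        mean = sum(readings) / len(readings)
        if max(abs(r - mean) for r in readings) <= tolerance * mean:
            break  # regions are substantially equal
        for src, reading in zip(sources, readings):
            # brighten sources lighting under-lit regions, dim those lighting over-lit ones
            src.intensity *= 1.0 + gain * (mean - reading) / mean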
18. The 3D information acquisition method based on an intelligent light source according to claim 17, characterized in that the image processing device and the control device are one and the same component; and/or the image acquisition device and the detection device are one and the same component.
19. The 3D information acquisition method based on an intelligent light source according to claim 17, characterized in that the light-emitting characteristics are: luminous intensity, luminous illuminance, luminous color temperature, luminous wavelength, luminous direction, luminous position, and/or any combination thereof.
20. The 3D information acquisition method based on an intelligent light source according to claim 17, characterized in that the characteristics of the reflected light are: reflected light intensity, reflected illuminance, reflected light color temperature, reflected light wavelength, reflected light position, reflected light uniformity, sharpness of the reflected image, clarity of the reflected image, contrast of the reflected image, and/or any combination thereof.
21. The 3D information acquisition method based on an intelligent light source according to claim 17, characterized in that the light source device comprises a plurality of sub-light sources, or is an integrated light source capable of providing illumination to different regions of the target object from different directions.
22. The 3D information acquisition method based on an intelligent light source according to claim 21, characterized in that the plurality of sub-light sources of the light source device are located at different positions around the target object.
23. The 3D information acquisition method based on an intelligent light source according to claim 21, characterized in that the plurality of sub-light sources or the integrated light source are configured so that the illuminance of the illumination received by the multiple regions of the target object is substantially equal.
24. The 3D information acquisition method based on an intelligent light source according to claim 17, characterized in that, when the multiple images are acquired, three adjacent positions of the image acquisition device are such that the three images acquired at the corresponding positions each at least contain a part representing the same region of the target object.
CN201811213081.8A 2018-10-18 2018-10-18 3D information measuring system based on intelligent light source Active CN109443199B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910862132.8A CN110567371B (en) 2018-10-18 2018-10-18 Illumination control system for 3D information acquisition
CN201811213081.8A CN109443199B (en) 2018-10-18 2018-10-18 3D information measuring system based on intelligent light source

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811213081.8A CN109443199B (en) 2018-10-18 2018-10-18 3D information measuring system based on intelligent light source

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201910862132.8A Division CN110567371B (en) 2018-10-18 2018-10-18 Illumination control system for 3D information acquisition

Publications (2)

Publication Number Publication Date
CN109443199A (en) 2019-03-08
CN109443199B (en) 2019-10-22

Family

ID=65547620

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201910862132.8A Active CN110567371B (en) 2018-10-18 2018-10-18 Illumination control system for 3D information acquisition
CN201811213081.8A Active CN109443199B (en) 2018-10-18 2018-10-18 3D information measuring system based on intelligent light source

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201910862132.8A Active CN110567371B (en) 2018-10-18 2018-10-18 Illumination control system for 3D information acquisition

Country Status (1)

Country Link
CN (2) CN110567371B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110063712B (en) * 2019-04-01 2022-01-18 深圳市明瞳视光科技有限公司 Lens displacement optometry system based on simulated light field by adopting cloud technology
CN113065502A (en) * 2019-12-12 2021-07-02 天目爱视(北京)科技有限公司 3D information acquisition system based on standardized setting
CN112304222B (en) * 2019-12-12 2022-04-08 天目爱视(北京)科技有限公司 Background board synchronous revolution's 3D information acquisition equipment
CN111780682A (en) * 2019-12-12 2020-10-16 天目爱视(北京)科技有限公司 3D image acquisition control method based on servo system
CN111207690B (en) * 2020-02-17 2021-03-12 天目爱视(北京)科技有限公司 Adjustable iris 3D information acquisition measuring equipment
CN111770264B (en) * 2020-06-04 2022-04-08 深圳明心科技有限公司 Method and device for improving imaging effect of camera module and camera module
CN112257537B (en) * 2020-10-15 2022-02-15 天目爱视(北京)科技有限公司 Intelligent multi-point three-dimensional information acquisition equipment
CN113405950B (en) * 2021-07-22 2022-07-05 福建恒安集团有限公司 Method for measuring diffusion degree of disposable sanitary product
CN113779668B (en) * 2021-08-23 2023-05-23 浙江工业大学 Foundation pit support structure displacement monitoring system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104040287A (en) * 2012-01-05 2014-09-10 合欧米成像公司 Arrangement for optical measurements and related method
CN108492358A (en) * 2018-02-14 2018-09-04 天目爱视(北京)科技有限公司 A kind of 3D 4 D datas acquisition method and device based on grating
CN108492357A (en) * 2018-02-14 2018-09-04 天目爱视(北京)科技有限公司 A kind of 3D 4 D datas acquisition method and device based on laser
CN108491760A (en) * 2018-02-14 2018-09-04 天目爱视(北京)科技有限公司 3D four-dimension iris data acquisition methods based on light-field camera and system

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0629707B2 (en) * 1986-10-17 1994-04-20 株式会社日立製作所 Optical cutting line measuring device
US5418546A (en) * 1991-08-20 1995-05-23 Mitsubishi Denki Kabushiki Kaisha Visual display system and exposure control apparatus
JP3878023B2 (en) * 2002-02-01 2007-02-07 シーケーディ株式会社 3D measuring device
CN1296747C (en) * 2002-12-03 2007-01-24 中国科学院长春光学精密机械与物理研究所 Scanning method of forming planar light source, planar light source and laser projection television
CN101149254B (en) * 2007-11-12 2012-06-27 北京航空航天大学 High accuracy vision detection system
CN101557472B (en) * 2009-04-24 2011-08-31 华商世纪(北京)科贸发展股份有限公司 Automatic image data collecting system
CN102080776B (en) * 2010-11-25 2012-11-28 天津大学 Uniform illuminating source and design method based on multiband LED (light emitting diode) array and diffuse reflection surface
CN103268499B (en) * 2013-01-23 2016-06-29 北京交通大学 Human body skin detection method based on multispectral imaging
CN104634277B (en) * 2015-02-12 2018-05-15 上海图漾信息科技有限公司 Capture apparatus and method, three-dimension measuring system, depth computing method and equipment
JP6624911B2 (en) * 2015-12-03 2019-12-25 キヤノン株式会社 Measuring device, measuring method and article manufacturing method
CN105608734B (en) * 2015-12-23 2018-12-14 王娟 A kind of image rebuilding method using three-dimensional image information acquisition device
CN106813595B (en) * 2017-03-20 2018-08-31 北京清影机器视觉技术有限公司 Three-phase unit characteristic point matching method, measurement method and three-dimensional detection device
CN207037685U (en) * 2017-07-11 2018-02-23 北京中科虹霸科技有限公司 One kind illuminates adjustable iris collection device
CN107389694B (en) * 2017-08-28 2023-04-25 宁夏大学 Multi-CCD camera synchronous signal acquisition device and method
CN107959802A (en) * 2018-01-10 2018-04-24 南京火眼猴信息科技有限公司 Illumination light filling unit and light compensating apparatus for Tunnel testing image capture apparatus

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104040287A (en) * 2012-01-05 2014-09-10 合欧米成像公司 Arrangement for optical measurements and related method
CN108492358A (en) * 2018-02-14 2018-09-04 天目爱视(北京)科技有限公司 A kind of 3D 4 D datas acquisition method and device based on grating
CN108492357A (en) * 2018-02-14 2018-09-04 天目爱视(北京)科技有限公司 A kind of 3D 4 D datas acquisition method and device based on laser
CN108491760A (en) * 2018-02-14 2018-09-04 天目爱视(北京)科技有限公司 3D four-dimension iris data acquisition methods based on light-field camera and system

Also Published As

Publication number Publication date
CN109443199A (en) 2019-03-08
CN110567371B (en) 2021-11-16
CN110567371A (en) 2019-12-13

Similar Documents

Publication Publication Date Title
CN109443199B (en) 3D information measuring system based on intelligent light source
CN109394168B (en) A kind of iris information measuring system based on light control
CN111060024B (en) 3D measuring and acquiring device with rotation center shaft intersected with image acquisition device
CN109269405B (en) A kind of quick 3D measurement and comparison method
CN109285109B (en) A kind of multizone 3D measurement and information acquisition device
CN109141240B (en) A kind of measurement of adaptive 3 D and information acquisition device
CN208795174U (en) Camera rotation type image capture device, comparison device, mating object generating means
CN109146961A (en) A kind of 3D measurement and acquisition device based on virtual matrix
CN111060023A (en) High-precision 3D information acquisition equipment and method
CN208653401U (en) Adapting to image acquires equipment, 3D information comparison device, mating object generating means
CN108604291A (en) Expression identification system, expression discrimination method and expression identification program
CN209279885U (en) Image capture device, 3D information comparison and mating object generating means
CN109146949B (en) A kind of 3D measurement and information acquisition device based on video data
CN108492357A (en) A kind of 3D 4 D datas acquisition method and device based on laser
CN211178345U (en) Three-dimensional acquisition equipment
CN111780682A (en) 3D image acquisition control method based on servo system
CN109819235A (en) A kind of axial distributed awareness integrated imaging method having following function
CN109394170B (en) A kind of iris information measuring system of no-reflection
US11882354B2 (en) System for acquisiting iris image for enlarging iris acquisition range
CN109084679B (en) A kind of 3D measurement and acquisition device based on spatial light modulator
CN208653473U (en) Image capture device, 3D information comparison device, mating object generating means
CN208795167U (en) Illumination system for 3D information acquisition system
CN209103318U (en) A kind of iris shape measurement system based on illumination
CN209203221U (en) A kind of iris dimensions measuring system and information acquisition system based on light control
CN111207690B (en) Adjustable iris 3D information acquisition measuring equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant