CN109269405B - A kind of quick 3D measurement and comparison method - Google Patents
- Publication number
- CN109269405B (application CN201811032876.9A)
- Authority
- CN
- China
- Prior art keywords
- cloud
- point
- model
- sample
- sparse
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/245—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures using a plurality of fixed, simultaneously operating transducers
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The present invention provides a 3D dimension measurement method, comprising: acquiring multiple images of a first object from different angles; constructing a first point cloud 3D model of the first object from the multiple images; comparing the first point cloud 3D model one by one against point cloud 3D model samples pre-stored in a first sample database and bound to identity information; finding the point cloud 3D model sample that matches the first point cloud 3D model; and measuring the geometric gap between the first point cloud 3D model and that point cloud 3D model sample. Among the multiple images, at least three images contain parts representing the same region of the object. The point cloud is a sparse point cloud or a dense point cloud. Measuring/comparing the object's sparse or dense point cloud against samples raises measurement/comparison speed. It is also creatively proposed that image quality affects the accuracy and speed of 3D measurement/comparison, and the relationship between adjacent shooting positions is optimized to guarantee that the acquired images synthesize with high precision.
Description
Technical field
The present invention relates to the field of measurement technology, and in particular to a quick 3D measurement and comparison method and system based on 3D data.
Background art
Currently, 3D measurement of an object usually computes the object's dimensions after acquiring its point cloud data. In some cases, however, not only the 3D size of the object is needed, but also the size disparity between each region of the object and a standard target object. This differs from common 3D dimensional measurement and is generally used where measurement accuracy requirements are high; yet if measurement is performed only after the object's 3D model has been fully synthesized, measurement speed suffers greatly.

Some existing methods improve the speed of measuring the geometric gap between an object and a standard target object, including methods using two-dimensional images; however, their measurement accuracy falls short of practical requirements.

Moreover, to guarantee quick, high-quality 3D measurement and synthesis, those skilled in the art usually seek solutions in the measurement process and in composition-algorithm optimization, without recognizing that the quality of the acquired images also affects the quality of the object's 3D synthesis, and hence the accuracy and speed of 3D measurement and comparison; still less has any solution been proposed.

In particular, in some cases the object is not a single region but multiple regions; images shot with the prior art (e.g. a single camera rotating around the target object) then struggle to meet the requirement of fast and accurate 3D measurement/synthesis.

The prior art also photographs with multiple cameras simultaneously, strictly calibrating each camera's position and optical parameters before shooting to improve image quality; but this method has a long preparation time and a bulky apparatus, and is unsuitable for systems such as access control.

However, measuring the distance difference from a reference object is the basis of 3D comparison; only when that measurement is accurate can one judge accurately whether two 3D models represent the same object. The prior art cannot complete measurement rapidly while guaranteeing precision, and therefore cannot complete comparison rapidly and accurately either.
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide a 3D measurement and comparison method that overcomes the above problems or at least partially solves them.

The present invention provides a 3D dimension measurement method, comprising:

acquiring multiple images of a first object from different angles; constructing a first point cloud 3D model of the first object from the multiple images;

comparing the first point cloud 3D model one by one against the point cloud 3D model samples that are pre-stored in a first sample database and bound to identity information, finding the point cloud 3D model sample that matches the first point cloud 3D model, and measuring the geometric gap between the first point cloud 3D model and that point cloud 3D model sample;

among the multiple images, at least three images contain parts representing the same region of the object;

the point cloud is a sparse point cloud or a dense point cloud.
The present invention also provides a rapid comparison method, comprising:

acquiring multiple images of a first object from different angles, at least three of which contain parts representing the same region of the object;

constructing a first sparse point cloud 3D model of the first object from the multiple images;

comparing the first sparse point cloud 3D model one by one against the sparse point cloud 3D model samples that are pre-stored in the first sample database and bound to identity information, and finding the sparse point cloud 3D model sample that matches the first sparse point cloud 3D model, thereby completing the comparison.

Optionally, two adjacent shooting positions among the multiple images taken from different angles satisfy the following conditions:

H * (1 - cos b) = L * sin 2b;

a = m * b;

0 < m < 0.8

where L is the distance from the image acquisition device to the object, H is the actual size of the object in the acquired image, a is the included angle between the optical axes of the image acquisition device at two adjacent positions, and m is a coefficient.
Optionally, the method comprises: outputting, as the comparison result, the identity information corresponding to the sparse point cloud 3D model sample that matches the first sparse point cloud 3D model.

Optionally:

constructing a first dense point cloud 3D model of the biological features of the first object;

if the comparison result contains multiple identity informations:

comparing the first dense point cloud 3D model one by one against the dense point cloud 3D model samples pre-stored in a second sample database that correspond to the comparison result, and finding the dense point cloud 3D model sample that matches the first dense point cloud 3D model, thereby completing the depth comparison;

outputting, as the final result, the identity information corresponding to the dense point cloud 3D model sample that matches the first dense point cloud 3D model.

Optionally, the sparse point cloud 3D model samples in the first sample database are obtained by the following steps:

acquiring multiple images of a target sample from different angles;

constructing a sparse point cloud 3D model sample of the biological features of the target sample from the multiple images;

binding the identity information of the target sample, as a distinguishing mark, to the sparse point cloud 3D model sample, and storing them to form the first sample database.

Optionally, the dense point cloud 3D model samples in the second sample database are obtained by the following steps:

constructing, from the sparse point cloud 3D model sample, a dense point cloud 3D model sample of the biological features of the target sample;

binding the identity information of the target sample, as a distinguishing mark, to the dense point cloud 3D model sample, and storing them to form the second sample database.

Optionally, the comparison includes comparing the gray values at the three-dimensional coordinates of each point of the 3D model, or the relationship of a point with its neighboring points.

Optionally, the preliminary comparison or the depth comparison is carried out using the temmoku point cloud matching recognition method, which includes:

feature point fitting;

whole-surface best fit;

similarity calculation.

Optionally, the temmoku point cloud matching recognition method comprises the following specific steps:

carrying out feature point fitting using a method based on direct matching in the spatial domain: in the corresponding rigid regions of the two point clouds, choosing three or more feature points as fitting key points, and carrying out feature point correspondence matching directly through coordinate transformation;

after feature point correspondence matching, aligning the point cloud data by whole-surface best fit;

carrying out similarity calculation using the least squares method.

Optionally, the multiple images from different angles are obtained in one of the following ways:

an image acquisition device rotates around a central axis;

or one or more image acquisition devices each move relative to one of multiple regions of the object;

or the image acquisition device performs auto-focusing or zooming while moving relative to the object;

or the image acquisition device translates along its optical axis while the object rotates.
Inventive points and technical effects of the invention

1. Measuring/comparing the object's sparse or dense point cloud against samples improves measurement/comparison speed. At the same time, it is creatively proposed that image quality affects the accuracy and speed of 3D measurement/comparison, and the relationship between adjacent shooting positions is optimized to guarantee that the acquired pictures synthesize with high precision.

2. A stepwise measurement/comparison is proposed: the object's sparse point cloud is measured/compared against samples first, and its dense point cloud afterwards. In some cases the sparse point cloud comparison alone achieves accurate measurement/comparison, without entering dense point cloud comparison at all, further improving measurement/comparison accuracy and speed.

3. The camera positions used when acquiring pictures are constrained (an empirical formula optimizing camera position), ensuring picture parameters that favor measuring/comparing the sparse or dense point cloud against samples, and improving precision.

4. Adaptive automatic focusing is carried out while the camera shoots the object (adaptively moving the camera's distance to the target object, or ranging plus high-speed auto-focusing), improving picture quality and thereby the speed and precision of 3D measurement and comparison. In the prior art, focusing happens only once before shooting starts, so the defocus subsequently caused by camera or object motion, or by the differing relief of different object regions, cannot be corrected; refocusing manually makes the shooting time too long, affecting 3D measurement/comparison.

5. Single-axis rotation of the camera is proposed, reducing the volume increase and reliability decrease brought by complicated tracks or moving mechanisms and suiting more applications.

6. The prior art improves the synthesis effect mainly through hardware upgrades and strict calibration; nothing in it suggests guaranteeing the effect and stability of 3D synthesis by changing the angular positions at which the camera takes pictures, still less gives specific optimization conditions. The present invention first proposes optimizing the angular positions at which the camera takes pictures to guarantee the effect and stability of 3D synthesis, and through repeated experiments gives the best practical condition that the camera positions need to meet, substantially improving the effect of 3D synthesis and the stability of the composite image.
Detailed description of the invention
By reading the following detailed description of the preferred embodiments, various other advantages and benefits will become clear to those of ordinary skill in the art. The drawings serve only to illustrate the preferred embodiments and are not to be considered a limitation of the present invention. Throughout the drawings, the same reference numerals denote the same parts. In the drawings:

Fig. 1 is a flow diagram of a quick identity recognition method according to an embodiment of the present invention;

Fig. 2 is a flow diagram of a preferred quick identity recognition method according to an embodiment of the present invention;

Fig. 3 is a schematic diagram of acquiring multiple images according to the acquisition-position requirement in an embodiment of the present invention;

Fig. 4 is a schematic diagram of a quick recognition system according to another embodiment of the present invention;

Fig. 5 is a schematic diagram of a preferred quick recognition system according to another embodiment of the present invention.
Reference numerals:

1 image acquisition device,

2 sparse point cloud 3D model construction device,

3 preliminary recognition device,

4 preliminary result output device,

5 dense point cloud 3D model construction device,

6 depth recognition device,

7 depth result output device.
Specific embodiment
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be realized in various forms and should not be limited by the embodiments set forth here. On the contrary, these embodiments are provided to facilitate a more thorough understanding of the disclosure and to fully convey its scope to those skilled in the art.
An embodiment of the present invention provides a quick identity recognition method, comprising the following steps, as shown in Fig. 1:

Step S101: arrange a camera group composed of multiple cameras according to preset rules, and acquire multiple images of a first object from different angles;

Specifically, the camera group composed of multiple cameras is arranged according to preset rules: cameras in different positions and quantities are arranged according to the acquisition target. The first object may be one of a person's face, head, ear, hand, fingers or iris, or a combination of several of them chosen according to the specific recognition requirement of the concrete scene.

For example, when the first object is a person's face, an arc-shaped bearing structure may be used to carry the cameras: several cameras are installed on the arc structure at positions facing the face at a preset distance, and each camera's installation position is set according to the required face-image angle, so that the images finally acquired by all the cameras can be synthesized into face 3D data.

Step S102: construct a first sparse point cloud 3D model of the biological features of the first object from the multiple images;

Specifically, constructing the first sparse point cloud 3D model may use the following steps, as shown in Fig. 2:

Step S1021: process the multiple images and extract the respective feature points in them;

Step S1022: based on the extracted feature points of the multiple images, generate feature point cloud data of the biological features;

Step S1023: construct the first sparse point cloud 3D model of the first object from the feature point cloud data.
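For illustration only, a minimal Python sketch of the feature extraction and pairwise matching that seed steps S1021 and S1022, under the assumption that SIFT descriptors (described in step S201 below) are used; the function name, the adjacent-pair matching scheme and the 0.75 ratio threshold are illustrative, not prescribed by the embodiment:

```python
# Sketch of steps S1021-S1022: extract SIFT feature points from each
# image and match features between adjacent images. The matched keypoints
# are what a reconstruction step would triangulate into the sparse cloud;
# triangulation (which needs the camera poses recovered by bundle
# adjustment, see below) is left abstract here.
import cv2

def extract_and_match(image_paths):
    sift = cv2.SIFT_create()
    bf = cv2.BFMatcher(cv2.NORM_L2)
    keypoints, descriptors = [], []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        kp, des = sift.detectAndCompute(img, None)
        keypoints.append(kp)
        descriptors.append(des)
    pair_matches = []
    for i in range(len(image_paths) - 1):
        raw = bf.knnMatch(descriptors[i], descriptors[i + 1], k=2)
        # Lowe's ratio test filters ambiguous correspondences.
        good = [p[0] for p in raw
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        pair_matches.append(good)
    return keypoints, pair_matches
```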
Step S103: compare the first sparse point cloud 3D model one by one against the sparse point cloud 3D model samples pre-stored in the first sample database and bound to identity information, and find the sparse point cloud 3D model sample that matches the first sparse point cloud 3D model, thereby completing the preliminary comparison;

Specifically, each sparse point cloud 3D model contains about 5000 feature points, which satisfies applications with ordinary recognition-precision requirements.

The first sample database pre-stores sparse point cloud 3D model sample data bound to the identity information of different people. All sample data may be acquired at registration or first enrollment; for example, a bank or other institution may acquire and store it in advance using the equipment, or population-management authorities such as a local police station or the Ministry of Public Security may register, acquire, process and store it when issuing identity cards.

The sparse point cloud 3D model sample data bound to different people's identity information that is pre-stored in the first sample database may be acquired with the same equipment as the recognition equipment, or with different equipment.

Step S104: output, as the comparison result, the identity information corresponding to the sparse point cloud 3D model sample that matches the first sparse point cloud 3D model.

Specifically, the identity information corresponding to the matched 3D model sample can be output directly, such as a person's name, age, native place and criminal-record information, satisfying applications with higher recognition-precision requirements.

Through the above quick identity recognition method, the identity information of the first object can be recognized from the biological feature information acquired at that moment, without requiring any identity document. This method of judging the target's identity automatically from point cloud 3D model data avoids the error brought by human judgment; no certificate needs to be handled, so forged certificates cannot occur, the target's identity can be recognized quickly and accurately, and the effect of proving identity with disclosed information is achieved.
Generating the feature point cloud data of the biological features from the extracted feature points of the multiple images in step S1022 may specifically include the following steps S201 to S203.

Step S201: match the feature points according to the features of the respective feature points extracted from the multiple images, and establish a matched feature point data set.

Step S202: according to the optical information of the multiple cameras, calculate the relative position of each camera in space with respect to the features of the first object, and from these relative positions calculate the spatial depth information of the feature points in the multiple images.

Step S203: generate the feature point cloud data of the first object's features from the matched feature point data set and the spatial depth information of the feature points.

In step S201, the features of the respective feature points in the multiple images can be described with SIFT (Scale-Invariant Feature Transform) feature descriptors. A SIFT descriptor has 128 feature description vectors, describing 128 aspects of any feature point in direction and scale; this markedly improves the precision of the feature description, and the descriptor is spatially independent.

In step S202, the relative position of each camera in space with respect to the first object's features is calculated from the optical information of the multiple cameras; specifically, this can be done using the bundle adjustment method.

In the definition of bundle adjustment, suppose there is a point in 3D space seen by multiple cameras located at different positions; bundle adjustment is then the process of extracting the coordinates of the 3D point, and the relative positions and optical information of the cameras, from this multi-view information.
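As a toy illustration of this definition (not the embodiment's own algorithm), the following sketch refines one camera pose by minimizing the reprojection error with a generic least-squares solver; the pinhole model, the focal length f and the synthetic data are assumptions made for the example:

```python
# Bundle-adjustment idea in miniature: choose the camera pose whose
# reprojections of known 3D points best match the observed 2D positions.
# A full bundle adjustment refines all cameras and the 3D points jointly.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(pose, pts3d, pts2d, f):
    rvec, t = pose[:3], pose[3:]
    cam = Rotation.from_rotvec(rvec).apply(pts3d) + t   # world -> camera
    proj = f * cam[:, :2] / cam[:, 2:3]                 # pinhole projection
    return (proj - pts2d).ravel()

rng = np.random.default_rng(0)
pts3d = rng.uniform(-1.0, 1.0, (8, 3)) + [0.0, 0.0, 5.0]
f = 800.0
observed = f * pts3d[:, :2] / pts3d[:, 2:3]             # camera at origin
init = np.array([0.1, -0.1, 0.05, 0.2, 0.1, -0.3])      # perturbed guess
fit = least_squares(reprojection_residuals, init, args=(pts3d, observed, f))
# fit.x converges toward the zero pose that generated the observations.
```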
Further, the spatial depth information of the feature points in the multiple images referred to in step S202 may include spatial position information and color information, namely: the X-axis coordinate of the feature point's spatial position, the Y-axis coordinate, the Z-axis coordinate, the value of the R channel of the feature point's color information, the value of the G channel, the value of the B channel, and the value of the Alpha channel. The generated feature point cloud data thus contains the spatial position information and color information of the feature points, and its format can be as follows:
X1 Y1 Z1 R1 G1 B1 A1
X2 Y2 Z2 R2 G2 B2 A2
……
Xn Yn Zn Rn Gn Bn An
where Xn is the X-axis coordinate of the feature point's spatial position; Yn the Y-axis coordinate; Zn the Z-axis coordinate; Rn the value of the R channel of the feature point's color information; Gn the value of the G channel; Bn the value of the B channel; and An the value of the Alpha channel.
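For illustration, one possible in-memory representation of this record format — a NumPy structured array with one record per feature point; the field names are illustrative:

```python
import numpy as np

# One record per point: spatial position (X, Y, Z) and color (R, G, B, A).
point_dtype = np.dtype([
    ("x", np.float32), ("y", np.float32), ("z", np.float32),
    ("r", np.uint8), ("g", np.uint8), ("b", np.uint8), ("a", np.uint8),
])

cloud = np.zeros(3, dtype=point_dtype)            # three placeholder points
cloud[0] = (0.12, -0.03, 1.57, 200, 180, 160, 255)
xyz = np.stack([cloud["x"], cloud["y"], cloud["z"]], axis=1)  # N x 3 array
```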
The feature point cloud data of the object's features is generated from the matched feature point data set of the multiple images and the spatial depth information of the feature points.

The object's 3D model is constructed from the feature point cloud data, realizing the acquisition of the object's point cloud data.

The acquired object color and texture are attached to the point cloud data, forming a 3D image of the object.

All images in a group may be used to synthesize the 3D image, or images of higher quality may be selected from the group for synthesis.

The above stitching method is a limited example; the method is not restricted to it, and any method that generates a three-dimensional image from multiple multi-angle two-dimensional images may be used.

In the embodiment of the present invention, sparse point cloud 3D model sample data bound to identity information is pre-stored in the sample database; the current sparse point cloud 3D model of the first object to be recognized is acquired and constructed; the sample data in the sample database is called; the acquired current sparse point cloud 3D model data is compared one by one with the sample data; the matching sample data is identified; and the identity information corresponding to that sample data — that is, the identity information of the current first object — is thereby recognized.

In addition, it should be noted that the sparse point cloud 3D model data of the current first object obtained through steps S101 to S104, the acquisition time, and the preliminary comparison result can be stored, forming the user's identity recognition history for subsequent big-data analysis or use by the relevant authorities.
Optionally:

Step S301: construct a first dense point cloud 3D model of the biological features of the first object from the first sparse point cloud 3D model;

Step S302: if the comparison result contains multiple identity informations, compare the first dense point cloud 3D model one by one against the dense point cloud 3D model samples pre-stored in the second sample database that correspond to the comparison result, and find the dense point cloud 3D model sample that matches the first dense point cloud 3D model, thereby completing the depth comparison;

Step S303: output, as the final result, the identity information corresponding to the dense point cloud 3D model sample that matches the first dense point cloud 3D model.

That is, a matching rule can be set; the preliminary comparison of steps S101 to S104 screens out the several identity informations that meet the matching rule, completing the primary recognition based on massive sparse point cloud 3D model data and narrowing the comparison range to several more similar identity informations; accurate comparison is then done by depth comparison. The depth comparison is based on dense point cloud 3D models each containing 2,000,000 or more feature points, so very high recognition precision can be reached.

By first making the preliminary comparison on sparse point cloud 3D model data to screen out several more similar model samples, then retrieving the corresponding dense point cloud 3D model data for depth comparison, the dense point cloud 3D model data with the highest matching degree is finally locked; its corresponding identity information is the identity information of the current first object, completing the recognition of the target person of unknown identity. In this way, recognition speed is improved on the one hand, and recognition precision on the other.
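A minimal sketch of this two-stage flow, with the similarity functions, threshold and sample databases as placeholders (the embodiment does not prescribe a particular metric here; see the temmoku matching method below):

```python
# Sketch of the coarse-to-fine identification flow. sparse_sim / dense_sim
# and the databases are hypothetical callables and dicts.
def identify(sparse_model, build_dense_model, sparse_db, dense_db,
             sparse_sim, dense_sim, threshold):
    # Stage 1: fast preliminary screening on sparse clouds (~5000 points).
    candidates = [ident for ident, sample in sparse_db.items()
                  if sparse_sim(sparse_model, sample) >= threshold]
    if not candidates:
        return None                      # no identity meets the rule
    if len(candidates) == 1:
        return candidates[0]             # sparse comparison is decisive
    # Stage 2: depth comparison on dense clouds (2,000,000+ points),
    # restricted to the few candidates that passed stage 1.
    dense_model = build_dense_model(sparse_model)
    return max(candidates,
               key=lambda ident: dense_sim(dense_model, dense_db[ident]))
```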
Preliminary matching recognition and depth matching recognition can be designed for different security levels.

Specifically, the second sample database pre-stores dense point cloud 3D model sample data bound to the identity information of different people. As before, all sample data may be acquired at registration or first enrollment; for example, a bank or other institution may acquire and store it in advance using the equipment, or population-management authorities such as a local police station or the Ministry of Public Security may register, acquire, process and store it when issuing identity cards.

The dense point cloud 3D model sample data bound to different people's identity information that is pre-stored in the second sample database may be acquired with the same equipment as the recognition equipment, or with different equipment.

The first sample database and the second sample database may be stored in the local device, or on a cloud server.
Optionally, the sparse point cloud 3D model samples in the first sample database are obtained by the following steps:

arranging a camera group composed of multiple cameras according to preset rules, and acquiring multiple images of a target sample from different angles;

constructing a sparse point cloud 3D model sample of the target sample's biological features from the multiple images;

binding the identity information of the target sample, as a distinguishing mark, to the sparse point cloud 3D model sample, and storing them to form the first sample database.

Optionally, the dense point cloud 3D model samples in the second sample database are obtained by the following steps:

constructing a dense point cloud 3D model sample of the target sample's biological features from the first sparse point cloud 3D model;

binding the identity information of the target sample, as a distinguishing mark, to the dense point cloud 3D model sample, and storing them to form the second sample database.

Specifically, the first sample database and the second sample database may be stored in the same storage device or in different storage devices.

In addition, it should be noted that the dense point cloud 3D model data of the current first object constructed through the above steps, the construction time, and the depth comparison result can also be stored, likewise forming the user's identity recognition history for subsequent big-data analysis or use by the relevant authorities.
When the same equipment as the recognition equipment is used to acquire the first sample database data, optionally, a select command may first be obtained before step S101: if a command to enter the registration or enrollment channel is received, the identity registration steps are executed; if a command to enter the verification channel is received, the identity verification steps are executed.

Specifically, select command buttons can be provided in the user interface of the equipment. The first command button controls the equipment to enter the registration or enrollment channel and execute the acquisition, processing and storage of registration; the second command button controls the equipment to enter the identity recognition channel and execute the acquisition, processing and comparison of recognition.

Optionally, constructing the first sparse point cloud 3D model of the biological features of the first object from the multiple images specifically includes:

using the bundle adjustment method, obtaining the most characteristic feature points from each image among the multiple images, and synthesizing the first sparse point cloud 3D model.

Optionally, constructing the first dense point cloud 3D model of the biological features of the first object from the first sparse point cloud 3D model specifically includes:

synthesizing the first dense point cloud 3D model from the first sparse point cloud 3D model by the CMPS algorithm.
Although the above embodiments are illustrated with a person's identity as an example, it can be understood that identity is a broad concept: tangible things such as objects, animals and plants also have identities. An identity may be a name, a type, an artificially assigned number, or any parameter representing the thing's features.

The present invention first proposes that when carrying out 3D measurement/comparison/recognition, the process and quality of the acquired original images strongly affect the speed and precision of the whole measurement/comparison/recognition process. Therefore, the present invention gives the following set of preferred image acquisition methods.
(1) Optimization of camera position

Since target objects differ and their surface relief differs, the camera-position optimization needed to reach a good synthesis effect is difficult to express in a standardized way, and at present there is no technique for optimizing camera position. In order to form a reliable and stable camera matrix, or a virtual matrix formed by camera motion, the structure of the matrix was optimized through repeated experiments and accumulated experience, giving the empirical condition that the positions at which the cameras acquire images need to meet:
When acquiring images of the target object, two adjacent positions of the image acquisition device at least meet the following conditions:

H * (1 - cos b) = L * sin 2b;

a = m * b;

0 < m < 1.5

where L is the distance from the image acquisition device to the object, usually the distance from the image acquisition device at the first position to the facing region of the object being acquired.

H is the actual size of the object in the acquired image. The image is usually the picture shot by the image acquisition device at the first position; the object in that picture has a true geometric dimension (not the size within the picture), measured along the direction from the first position to the second position. For example, if the first and second positions are related by horizontal movement, the dimension is measured along the horizontal transverse direction of the object: if the left end of the object shown in the picture is A and the right end is B, then the straight-line distance from A to B on the object is measured, and that is H. The measurement can convert the A-B distance in the picture to an actual distance using the camera's lens focal length, or A and B can be marked on the object and the A-B straight-line distance measured directly by other measuring means.

a is the included angle between the optical axes of the image acquisition device at two adjacent positions.

m is a coefficient.

Since object sizes and relief differ, the value of a cannot be limited by a strict formula and needs to be limited empirically. According to many experiments, the value of m is preferably within 1.5, and more preferably within 0.8. See the following table for specific experimental data:
Object | m value | Synthesis effect | Synthesis rate
Human body head | 0.1, 0.2, 0.3, 0.4 | Very good | > 90%
Human body head | 0.4, 0.5, 0.6 | Good | > 85%
Human body head | 0.7, 0.8 | Relatively good | > 80%
Human body head | 0.9, 1.0 | Average | > 70%
Human body head | 1.0, 1.1, 1.2 | Average | > 60%
Human body head | 1.2, 1.3, 1.4, 1.5 | Barely synthesizable | > 50%
Human body head | 1.6, 1.7 | Difficult to synthesize | < 40%
After the object and the image acquisition device are determined, the value of a can be calculated from the above empirical formula, and from the value of a the parameters of the virtual matrix, i.e. the positional relationship between matrix points, can be determined.
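For illustration, the condition can be solved numerically as below; reading "sin2b" as sin(2b) is an assumption about the notation, and the example values of L, H and m are hypothetical:

```python
# Solve H*(1 - cos b) = L*sin(2b) for b by bisection, then a = m*b.
import math

def optical_axis_angle(L, H, m=0.5):
    f = lambda b: H * (1.0 - math.cos(b)) - L * math.sin(2.0 * b)
    lo, hi = 1e-6, math.pi / 2.0      # f(lo) < 0 and f(hi) = H > 0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    b = 0.5 * (lo + hi)
    return m * b                      # included angle a, in radians

# e.g. camera 500 mm from a head roughly 200 mm across, with m = 0.5:
a = optical_axis_angle(L=500.0, H=200.0, m=0.5)
```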
In general, the virtual matrix is a one-dimensional matrix, for example multiple matrix points (acquisition positions) arranged along the horizontal direction. But when some target objects are large, a two-dimensional matrix is needed; two vertically adjacent positions then likewise meet the above condition on the value of a.
In some cases, even with the above empirical formula, the matrix parameter (the value of a) is not easy to determine on certain occasions, and the matrix parameters then need to be adjusted experimentally. The experimental method is as follows: calculate a predicted matrix parameter a from the above formula and, controlled by it, move the camera to the corresponding matrix points; for example, the camera shoots picture P1 at position W1, then moves to position W2 and shoots picture P2. At this point, compare whether pictures P1 and P2 contain parts representing the same region of the object, i.e. whether P1 ∩ P2 is non-empty (e.g. both include part of an eye corner, shot from different angles). If not, readjust the value of a, re-move to a position W2', and repeat the comparison step. If P1 ∩ P2 is non-empty, continue moving the camera according to the (adjusted or unadjusted) value of a to position W3 and shoot picture P3, and again compare whether pictures P1, P2 and P3 contain parts representing the same region of the object, i.e. whether P1 ∩ P2 ∩ P3 is non-empty. Then synthesize 3D from the multiple pictures and test that the 3D synthesis effect meets the 3D information acquisition and measurement requirements. That is, as shown in Fig. 3, the structure of the matrix is decided by the positions of the image acquisition device when acquiring the multiple images, and adjacent three positions meet the condition that the three images acquired at the corresponding positions at least contain parts representing the same region of the object.
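For illustration, one way to implement the overlap test P1 ∩ P2 ≠ ∅ is to count matched local features between the two pictures; the ORB detector and the threshold of 30 matches are assumed stand-ins, not prescribed by the embodiment:

```python
import cv2

def images_overlap(img_a, img_b, min_matches=30):
    # Two pictures are taken to share a region of the object if enough
    # local features match between them.
    orb = cv2.ORB_create(nfeatures=2000)
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return len(matcher.match(des_a, des_b)) >= min_matches
```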
(2) Optimization of the camera shooting process

It is noticed for the first time that for an object with an irregular contour, taking pictures with only a single focal setting while the camera moves relative to the object will affect the 3D synthesis effect and the accuracy of measurement and comparison. To overcome the inaccurate focusing caused by the irregular variation of the camera-to-object distance due to the object's irregular contour, the solutions of moving the camera, re-zooming and auto-focusing are proposed, suggested for the first time in the field of 3D acquisition and measurement. Real-time focusing during camera movement is also proposed for the first time, overcoming the poor 3D synthesis effect caused by one-time focusing in the prior art. At the same time, to cooperate with real-time focusing, the camera rotation mode is optimized: the camera stops at an angle suitable for shooting, waits for focusing, and rotates again after shooting. The optimized focusing strategy guarantees focusing speed and prevents real-time focusing from reducing the acquisition speed and lengthening the measurement time. All this differs from existing focusing strategies, which place low requirements on focusing real-time performance.

Clear camera shooting requires accurate focusing on the object, but in traditional technology focusing is carried out only when rotation begins. Suppose that at the position where focusing starts, the distance from the camera to the region of the object directly facing it is H, and that during rotation the distance from the camera to the region directly facing it is h(x), where x is the camera position. Since the object's contour is not circular, or since the camera's rotation center is difficult to align exactly with the object's center, h(x) is hard to keep equal to H. This makes accurate focusing difficult during rotation, so the 3D image cannot be synthesized, or is synthesized with large error, making the 3D measurement inaccurate.
Therefore, a displacement device can move the image acquisition equipment along its radial direction, so that the image acquisition equipment can approach or move away from the target object, guaranteeing accurate focusing throughout the rotation: the displacement device drives the image acquisition device so that the distance between the image acquisition device and the object remains constant during the relative movement. In this way, even for image acquisition equipment whose lens is a fixed-focus lens, accurate focusing can be guaranteed throughout the rotation.

A ranging device is also included, which can measure the real-time distance from the image acquisition equipment to the object. After the first focusing is completed, the ranging device measures the distance H from the image acquisition equipment to the target object; after rotation begins, the ranging device measures in real time the distance h(x) from the image acquisition equipment to the object and passes the data H and h(x) to the processing unit. If the processing unit judges h(x) > H, it commands the displacement device to move the distance h(x) - H radially toward the object; if it judges h(x) < H, it controls the displacement device to move the distance H - h(x) radially away from the object; if it judges h(x) = H, the displacement device does not actuate.
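A minimal sketch of this control logic; the displacement-device interface (move_toward / move_away) is hypothetical:

```python
def keep_focus_distance(H, h_x, displacement):
    # Compare the real-time object distance h(x) with the distance H
    # fixed at first focusing and restore it by a radial move.
    if h_x > H:
        displacement.move_toward(h_x - H)   # close in by h(x) - H
    elif h_x < H:
        displacement.move_away(H - h_x)     # back off by H - h(x)
    # h(x) == H: distance already correct, no actuation
```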
The ranging device can be of various types such as a laser rangefinder or an image rangefinder. It can be a separate module or part of the image acquisition equipment.

The image acquisition equipment can be a camera, video camera, CCD or CMOS, and various lenses can be fitted as needed, such as infrared lenses, visible-light lenses, long-range zoom lenses, wide-angle lenses and macro lenses.
Preferably, although the distance between the image acquisition equipment and the object can be kept constant by moving the image acquisition equipment, the stepper motor moves in minimum steps, which limits the movement resolution of the image acquisition equipment, so that the distance cannot be kept strictly constant; movement inaccuracy caused by equipment aging also leads to this problem. Therefore, to avoid the distance-keeping problem caused by the mechanical structure, the image acquisition equipment can stop rotating at each position where it shoots and then re-perform auto-focusing.

However, since the camera's mass is large, moving it takes a long time, making the adaptive adjustment slow and unfavorable for quick acquisition and measurement. Instead, during rotation the ranging device measures in real time the camera-to-object distance (object distance) h(x) and sends the measurement to the processing unit; the processing unit looks up the object-distance/focal-length table, finds the corresponding focal-length value, and issues a focusing signal to the camera, controlling the camera's ultrasonic motor to drive the lens for rapid focusing. In this way, rapid focusing is realized without adjusting the position of the image acquisition equipment and without adjusting its lens focal length significantly, guaranteeing that the photos it shoots are clear. This is also one of the inventive points of the invention.
In summary, during the relative movement between the acquisition region of the image acquisition device and the object, the adaptive unit adjusts the distance between the image acquisition device and the object according to the image acquisition device's situation, so that a clear image of the object is obtained. The adjustment may drive the image acquisition device so that its distance to the object stays constant during the relative movement; or it may be auto-zooming that changes the focal length in real time, or auto-focusing, during the relative movement.
(3) Optimization for certain occasions

In some cases, it is necessary to acquire 3D information of different regions of the object, for example to acquire a person's face and iris information at the same time. The prior art includes schemes that first acquire face 3D information with 3D acquisition equipment and then acquire iris 3D information. However, different regions of the object place different requirements on the 3D acquisition equipment. For example, face 3D acquisition needs information within a 180° range around the head axis, while iris 3D acquisition only needs information over a very small angle; face 3D acquisition usually uses a visible-light camera, while iris 3D acquisition needs an infrared camera; and the lens depth-of-field and lens-type requirements of face 3D acquisition and iris 3D acquisition also differ. That is, because the features of different regions of the object differ, mixing a single 3D acquisition device for them causes poor acquisition effect, and the 3D image may even fail to synthesize.

The acquisition region of a 1st image acquisition device moves relative to the 1st region of the object, acquiring the 1st group of images of the 1st region of the object; and so on, the acquisition region of an m-th image acquisition device moves relative to the n-th region of the object, acquiring the n-th group of images of the n-th region, where m ≥ 1, n ≥ 2; the 3D information of the corresponding region of the object is obtained from the multiple images in each group.
Taking the human face and iris as an example, the processor controls the corresponding servo motors to drive the face image acquisition unit and the iris image acquisition unit to move on rails along their respective tracks, so that one camera can rotate 180° around the human head to shoot multiple images of the head, and another camera can rotate 90° around the human eye to shoot multiple images of the iris. According to practical 3D acquisition needs, the camera can also rotate around the human head at any angle, such as 45°, 90°, 270° or 360°. Likewise, according to the needs of iris acquisition, the iris information of one eye or of both eyes can be acquired; if only one eye is acquired, a rotation of only about 20° is needed. It should also be understood that the angle a camera needs to rotate is related to the distance from the camera to the target region, the camera's focal length, and so on. These limiting parameters can be entered in advance, and the processor controls the corresponding camera's rotation angle after calculation. Alternatively, according to the characteristics of the acquisition region, a starting point and an ending point can be recognized and the camera controlled to take pictures between them: for example, the eye-corner position can be recognized, shooting starting when the camera's field of view moves to one eye corner and stopping when it leaves the other. Besides this, the shooting timing may not be controlled at all: shooting starts at the track's starting point and stops at the track's end point.

The processor receives the group of images transmitted by each camera and filters multiple images out of each group; it then synthesizes the facial 3D image from one set of images and the iris 3D image from the other. The synthesis method may stitch images according to feature points of adjacent images, or other methods may be used.
In some cases, such as in an access control system, the installation space itself is limited, so the volume requirement on the acquisition/measurement/comparison device is higher. Monocular-camera 3D image acquisition schemes suffer from over-complicated structure, large occupied space and low efficiency; especially for 3D image acquisition of small-range, small-depth target objects, existing products have no efficient acquisition and measurement equipment. For example, the prior art has devices that carry out 3D shooting with a single camera, but they simultaneously require rotating devices and tracks (translation devices) — carrying platforms, tracks and other mechanical structures for linear (circular-curve) motion that occupy considerable space — and the camera must be placed at two widely separated positions in space to realize image acquisition and measurement, making the whole device's structure complicated. Some carry the camera with a robotic arm, realizing shooting at any angle and any position in space; although such equipment can acquire and measure over a wide range, the robotic structure is complicated and hard to control, and the complexity of structure and method means that reliability is reduced to some extent. Moreover, because of the linear (curved) motion device or mechanical arm, acquisition and measurement inaccuracy brought by the device's control and movement is an inherent problem. Objects of small size and small depth (such as the iris) usually demand a small acquisition/measurement device of high reliability and fast acquisition speed, with a particularly small acquisition range. At present no prior art recognizes the special requirements of acquiring this type of object — there is no motivation even to pose the problem, still less any specific acquisition/measurement device and method for small-range, small-depth 3D point clouds and images.
The image acquisition device acquires a group of images of the object through relative motion between its acquisition region and the object; an acquisition-region moving device drives the acquisition region of the image acquisition device and the object to move relative to each other. The acquisition-region moving device is a rotating device, so that the image acquisition device rotates around a central axis. The image acquisition device is a camera, fixed by a camera mount on a rotating seat; a rotating shaft connected under the rotating seat is driven to rotate by a shaft driving device; the shaft driving device and the camera are both connected to a control terminal, which controls the shaft driving device's driving and the camera's shooting. In addition, the rotating shaft can also be fixedly connected directly to the image acquisition device to drive the camera's rotation.

The rotating shaft can be located below the image acquisition device, with the rotating shaft directly connected to the image acquisition device; the central axis then intersects the image acquisition device. The central axis can be located on the lens side of the camera of the image acquisition device; the camera then rotates around the central axis and shoots, and a rotating link arm is provided between the rotating shaft and the rotating seat. The central axis can be located on the side opposite the lens of the camera of the image acquisition device; the camera then rotates around the central axis and shoots, a rotating link arm is provided between the rotating shaft and the rotating seat, and the link arm can be set, as needed, to a structure curved upward or downward. The central axis can also be located on the side opposite the lens of the camera with the central axis set horizontally; this setting lets the camera transform its angle in the vertical direction, suiting objects with special features in the vertical direction, the shaft driving device driving the rotating shaft to rotate and the swing link arm to move up and down. The shaft driving device may further include a lifting device and a lifting drive device for controlling the lifting device's motion, the lifting drive device being connected to the control terminal; this increases the shooting region range of the 3D information acquisition device.
In addition to the above methods, the acquisition-region moving device may be an optical scanning device, so that without the image acquisition device moving or rotating, the acquisition region of the image acquisition device and the object move relative to each other. The acquisition-region moving device then further includes a light deflection unit, optionally driven by a light-deflection driving unit; the image acquisition device is a camera that is fixedly mounted, its physical position unchanged — neither moving nor rotating — while the light deflection unit makes the camera's acquisition region vary, thereby realizing the change of acquisition region over the object. In this process, the light deflection unit can be driven by the light-deflection driving unit so that light from different directions enters the image acquisition device. The light-deflection driving unit can be a driving device that controls the light deflection unit to move linearly or to rotate. The light-deflection driving unit and the camera are both connected to the control terminal, which controls the driving and the camera's shooting.
Optionally, the preliminary comparison or the depth comparison is carried out using the temmoku point cloud matching recognition method, which includes:

S301. feature point fitting;

S302. whole-surface best fit;

S303. similarity calculation.

Optionally, the temmoku point cloud matching recognition method comprises the following specific steps:

carrying out feature point fitting using a method based on direct matching in the spatial domain: in the corresponding rigid regions of the two point clouds, choosing three or more feature points as fitting key points, and carrying out feature point correspondence matching directly through coordinate transformation;

after feature point correspondence matching, aligning the point cloud data by whole-surface best fit;

carrying out similarity calculation using the least squares method.
The recognition process and working principle of the temmoku point cloud matching recognition method (Yare Eyes point cloud match recognition method) are as follows. First, a point cloud at a given moment is the basic element forming a four-dimensional model; it contains spatial coordinate information (XYZ) and color information (RGB). The attributes of a point cloud include spatial resolution, positional accuracy, surface normals, and so on. Its features are not influenced by external conditions and do not change under translation or rotation. Reverse-engineering software can edit and process point clouds, for example: Imageware, Geomagic, CATIA, CopyCAD and RapidForm.

The spatial-domain direct matching method peculiar to the temmoku point cloud matching recognition method includes the iterative closest point method, ICP (Iterative Closest Point). The ICP method is generally divided into two steps: first, feature point fitting; second, whole-surface best fit. Fitting and aligning the feature points first serves to find and align the two point clouds to be compared in the shortest time, but the method is not limited to this. For example, it may be:

First step: in the corresponding rigid regions of the two point clouds, choose three or more feature points as fitting key points, and carry out feature point correspondence matching directly through coordinate transformation.

ICP is a very effective tool in the 3D data reconstruction process for registering curves or curved-surface segments. Given a rough initial alignment of two 3D models at a given moment, ICP iteratively seeks the rigid transformation between the two that minimizes the alignment error, realizing the registration of their spatial geometric relationship.
Given point sets P1 = {p_i | i = 1, ..., N} and P2 = {q_j | j = 1, ..., M}, whose elements represent the coordinate points of the two model surfaces, the ICP registration technique iteratively solves for the nearest corresponding points, establishes a transformation matrix, and applies the transformation to one of the sets, until some convergence condition is reached and the iteration stops. Its pseudocode is as follows:

1.1 ICP algorithm

Input: P1, P2.

Output: P2 after transformation

P2(0) = P2, l = 0;

Do

For each point in P2(l)

find the nearest point y_i in P1;

End For

Compute Y(l) = {y_i} and the registration error E;

If E is greater than a certain threshold

compute the transformation matrix T(l) between P2(l) and Y(l);

P2(l+1) = T(l) P2(l), l = l + 1;

Else

stop;

End If

While ||P2(l+1) - P2(l)|| > threshold;

where the registration error E = (1/N) Σ_i ||y_i - p_i(l)||², i.e. the mean squared distance between the points of P2(l) and their nearest corresponding points Y(l) in P1.
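For illustration, a compact runnable counterpart of the above pseudocode, using nearest-neighbor search and an SVD-based (Kabsch) estimate of the rigid transformation; this is a generic ICP sketch, not the patented matching method itself:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(P1, P2, max_iter=50, tol=1e-6):
    """Align point set P2 (M x 3) to P1 (N x 3); returns (P2', error)."""
    tree = cKDTree(P1)
    prev_error, error = np.inf, np.inf
    for _ in range(max_iter):
        _, idx = tree.query(P2)              # nearest point y_i in P1
        Y = P1[idx]
        # Rigid transform minimizing sum ||y_i - (R p_i + t)||^2 (Kabsch).
        mu_p, mu_y = P2.mean(axis=0), Y.mean(axis=0)
        U, _, Vt = np.linalg.svd((P2 - mu_p).T @ (Y - mu_y))
        if np.linalg.det((U @ Vt).T) < 0.0:  # avoid a reflection
            Vt[-1] *= -1.0
        R = (U @ Vt).T
        t = mu_y - R @ mu_p
        P2 = P2 @ R.T + t
        error = np.mean(np.sum((Y - P2) ** 2, axis=1))
        if abs(prev_error - error) < tol:    # convergence condition
            break
        prev_error = error
    return P2, error
```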
1.2 matchings based on local feature region:
By taking the identification of human face's information as an example, faceform is broadly divided into rigid model part and plasticity model part, plasticity
Deformation influences the accuracy of alignment, and then influences similarity.Second of acquisition data has local difference to plasticity model for the first time,
A kind of solution route be only in rigid region selected characteristic point, characteristic point be extracted from an object, under certain condition
Constant attribute is stablized in holding, is fitted alignment to characteristic point using iteration closest approach method ICP.
Requirements on feature points:
1) Completeness: contain as much information about the object as possible, so as to distinguish it from objects of other categories;
2) Compactness: the amount of data needed for the representation should be as small as possible;
3) The features should preferably remain unchanged under rotation, translation and mirror transformation of the model.
In 3D biometric recognition, the two 3D biometric model point clouds are aligned and the similarity of the input model is calculated, with the registration error serving as the difference measure.
Step 2: after the feature points are best-fitted, the point-cloud data are aligned by a best fit of the whole surface.
Step 3: similarity calculation.
The least-squares method (also known as the method of least squares) is a mathematical optimization technique. It finds the best function match for the data by minimizing the sum of the squared errors. Unknown data can be obtained conveniently with the least-squares method, such that the sum of the squared errors between the computed data and the real data is minimal. The least-squares method can also be used for curve fitting, and some other optimization problems can likewise be expressed with it by minimizing an energy or maximizing an entropy. It is commonly used to solve curve-fitting problems and, in turn, complete surface fitting; data convergence can be accelerated by an iterative algorithm so that the optimal solution is obtained quickly.
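As a minimal illustration of least-squares fitting (the data and names below are illustrative, not from the patent):

```python
# Least-squares curve fit: minimize the sum of squared residuals ||A c - y||^2.
import numpy as np

x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.01 * np.random.randn(x.size)  # noisy samples

A = np.vander(x, 3)                        # design matrix for a quadratic curve
coeffs, residuals, *_ = np.linalg.lstsq(A, y, rcond=None)
fit = A @ coeffs                           # best-fit curve values

print("coefficients (highest degree first):", coeffs)
print("sum of squared errors:", np.sum((fit - y) ** 2))
```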
If the 3D data model at a given moment is input in STL file format, its deviation is determined by calculating the distance from the point cloud to the triangular facets. This method therefore requires establishing a plane equation for each triangular facet, the deviation being the distance from a point to that plane. If the 3D data model at a given moment is an IGES or STEP model, the free-form surfaces are expressed as NURBS faces, so calculating the point-to-surface distance requires numerical optimization: the deviation is expressed by iteratively calculating the minimum distance from each point of the point cloud to the NURBS surface; or the NURBS surface is discretized at a specified scale so that the deviation is approximated by point-to-point distances; or the model is converted to STL format for the deviation calculation. Different coordinate-alignment and deviation-calculation methods yield different test results, and the size of the alignment error directly affects the detection accuracy and the credibility of the assessment report.
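A minimal sketch of the STL case (illustrative names; it assumes each facet's supporting plane n·x + d = 0, as described above):

```python
# Point-to-facet-plane deviation for an STL-style triangle (v0, v1, v2).
import numpy as np

def facet_plane(v0, v1, v2):
    """Plane equation n·x + d = 0 of the triangle's supporting plane."""
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)
    return n, -np.dot(n, v0)

def point_plane_deviation(p, n, d):
    """Signed distance from point p to the plane n·x + d = 0."""
    return np.dot(n, p) + d

v0, v1, v2 = np.array([0., 0, 0]), np.array([1., 0, 0]), np.array([0., 1, 0])
n, d = facet_plane(v0, v1, v2)
print(point_plane_deviation(np.array([0.2, 0.2, 0.5]), n, d))  # -> 0.5
```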
Best-fit alignment spreads the detection error over the whole model: the iterative computation is terminated on the condition that the overall deviation is minimal, 3D analysis is performed on the registration result, and the result object is output in the form of the root mean square of the error between the two figures. The larger the root mean square, the greater the difference between the two models at that location, and vice versa. Whether the object is the one being compared against is judged from the comparison registration ratio.
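A minimal sketch of this output step (the tolerance and match-ratio threshold are illustrative assumptions, not values from the patent):

```python
# Registration-result analysis: RMS of per-point error and match ratio.
import numpy as np

def rms_and_match_ratio(deviations, tol):
    """deviations: per-point distances after best-fit alignment."""
    rms = np.sqrt(np.mean(deviations ** 2))
    ratio = np.mean(deviations <= tol)   # fraction of points within tolerance
    return rms, ratio

dev = np.abs(np.random.randn(1000)) * 0.1           # placeholder deviations
rms, ratio = rms_and_match_ratio(dev, tol=0.2)
is_match = ratio >= 0.95                            # illustrative threshold
print(f"RMS={rms:.3f}, ratio={ratio:.2%}, match={is_match}")
```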
Optionally, the first object includes at least one of the head and face, an ear, a hand, and an iris.
Specifically, one of the above parts may serve as the acquisition and recognition object, or a combination of two or more of them may be acquired and recognized. By comparison, acquiring and recognizing a combination of two or more parts can reach higher recognition accuracy.
In addition, the head and face, ear and hand all contain both rigid regions and flexible regions. For example, the eyes and mouth of a face deform easily and are flexible regions, while the ear and the iris are not easily deformed and are rigid regions.
Matching rules can be preset and matching thresholds defined. During acquisition and recognition, a rigid region alone may be selected for acquisition and recognition, or a flexible region alone. When the degree of match between the currently acquired 3D model data of the selected region and the 3D model sample data reaches the threshold, the identities are considered matched, and the identity information of the person corresponding to the currently acquired 3D model data is recognized.
In the same way, the extents, recognition weights and thresholds of the rigid region and the flexible region may each be defined. During acquisition and recognition, the rigid and flexible regions can be acquired simultaneously according to the defined extents and weights, and recognized according to the defined thresholds; when the degree of match between the currently acquired 3D model data of the selected rigid and flexible regions and the 3D model sample data reaches the threshold, the identities are considered matched, and the identity information of the person corresponding to the currently acquired 3D model data is recognized.
Reaching the threshold, as described here, may be set to mean that the rigid region and the flexible region each reach their thresholds simultaneously, or that either the rigid region or the flexible region reaches its threshold. As long as the purpose of comparison and recognition according to the preset matching rules is achieved, no specific restriction is imposed here.
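A minimal sketch of such a preset matching rule (the region weights and thresholds are illustrative assumptions, not values from the patent):

```python
# Preset matching rule over rigid and flexible regions (illustrative values).
RULE = {
    "rigid":    {"weight": 0.7, "threshold": 0.90},
    "flexible": {"weight": 0.3, "threshold": 0.75},
}
COMBINED_THRESHOLD = 0.85

def identities_match(scores: dict, require_both: bool = True) -> bool:
    """scores: per-region match degree in [0, 1], e.g. {'rigid': 0.93, ...}."""
    if require_both:
        # Each selected region must reach its own threshold.
        return all(scores[r] >= RULE[r]["threshold"] for r in RULE)
    # Alternative rule: a weighted score must reach a combined threshold.
    weighted = sum(scores[r] * RULE[r]["weight"] for r in RULE)
    return weighted >= COMBINED_THRESHOLD

print(identities_match({"rigid": 0.93, "flexible": 0.80}))
print(identities_match({"rigid": 0.93, "flexible": 0.60}, require_both=False))
```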
Another embodiment of the present invention provides a quick identification system, as shown in Figure 4, comprising:
an image acquiring device, which arranges a camera group composed of multiple cameras according to preset rules and obtains multiple images of the first object from different angles;
a sparse point-cloud 3D model construction device, for constructing the first sparse point-cloud 3D model of the biological features of the first object from the multiple images;
a preliminary identification device, for comparing the first sparse point-cloud 3D model one by one with the sparse point-cloud 3D model samples, each bound to identity information, pre-stored in the first sample database, and finding the sparse point-cloud 3D model sample that matches the first sparse point-cloud 3D model, so as to complete the preliminary comparison;
a preliminary-result output device, for outputting as the comparison result the identity information corresponding to the sparse point-cloud 3D model sample that matches the first sparse point-cloud 3D model.
Specifically, the camera group composed of multiple cameras is arranged according to preset rules: cameras are arranged at different positions and in different numbers depending on the acquisition target. The first object may be one of a person's face, head, ear, hand, finger or iris, or a combination of several of them chosen according to the specific recognition requirements of the particular scene.
For example, when the first object is a person's face, an arc-shaped bearing structure may be used to carry the cameras: several cameras are installed on the arc structure at positions facing the face at a preset distance, with each camera's installation position set according to the required face-image angle, so that the images finally acquired by all the cameras can be synthesized into the 3D data of the face.
The cameras may be fixed-focus or zoom cameras, selected according to the specific application.
Specifically, the sparse point-cloud 3D model construction device may be a data processing center comprising a processing unit with a graphics processing unit GPU and a central processing unit CPU. The image information of the several object feature images is assigned to the blocks of the GPU for computation and, combined with the centralized scheduling and distribution function of the CPU, the feature points of the several biometric images are calculated. It can be seen that this embodiment of the present invention acquires target biometric information using multi-camera control technology, which can significantly improve the efficiency of feature-information acquisition. Moreover, this embodiment computes in parallel on the central processing unit and the graphics processing unit, so the feature information can be processed efficiently.
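A minimal CPU-only sketch of distributing feature-point extraction over several images (OpenCV's ORB detector stands in for the unspecified feature extractor, the process pool stands in for the CPU-scheduled GPU blocks, and the filenames are hypothetical):

```python
# Parallel feature-point extraction over a batch of images (illustrative).
import cv2
from concurrent.futures import ProcessPoolExecutor

def extract_features(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:                       # unreadable or missing image
        return path, 0, None
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(img, None)
    return path, len(keypoints), descriptors

if __name__ == "__main__":
    paths = [f"view_{i:02d}.jpg" for i in range(12)]   # hypothetical filenames
    with ProcessPoolExecutor() as pool:                # CPU-side scheduling
        for path, n, _ in pool.map(extract_features, paths):
            print(f"{path}: {n} feature points")
```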
The preliminary identification device may use the temmoku point-cloud matching identification method to carry out the preliminary comparison.
The preliminary-result output device may specifically use a display for visual presentation, a voice output device for audible result prompts, or text output of the preliminary result, so as to output the identity information of the preliminary result obtained by the comparison.
From the above, the quick identification system obtains multiple images of the first object from different angles through the image acquiring device; constructs the first sparse point-cloud 3D model of the biological features of the first object from the multiple images through the sparse point-cloud 3D model construction device; compares, through the preliminary identification device, the first sparse point-cloud 3D model one by one with the sparse point-cloud 3D model samples bound to identity information pre-stored in the first sample database, finding the sparse point-cloud 3D model sample that matches the first sparse point-cloud 3D model, so as to complete the preliminary comparison; and finally outputs, through the preliminary-result output device, the identity information corresponding to the matching sparse point-cloud 3D model sample as the comparison result. Without requiring any identity document, the identity information of the first object can be recognized purely from its appearance at that moment. This method of automatically judging the target's identity from sparse point-cloud 3D model data avoids errors caused by manual judgment, rules out forged certificates, and can identify the target quickly and accurately without any certificate having to be processed.
This system can be applied to immigration security-inspection equipment, banking-service terminals, and security-inspection equipment at airports, railway stations, subways and the like.
Taking a banking self-service terminal as an example:
When handling business for the first time, the user registers: the image acquiring device acquires multiple images of the user from different angles and sends them to the sparse point-cloud 3D model construction device for processing; the sparse point-cloud 3D model sample of the user is constructed, associated with the user's identity information, and stored in the first sample database.
When the user comes to the bank again, identity recognition is carried out: the camera group acquires multiple current images of the user from different angles; the sparse point-cloud 3D model construction device constructs the user's first sparse point-cloud 3D model through data processing and sends it to the preliminary identification device, which retrieves all the sparse point-cloud 3D model samples stored in the first sample database for comparison, finds the sample matching the first sparse point-cloud 3D model, and thereby finds the associated identity information. The preliminary result is sent to the display, showing the user's identity. If the preliminary result shows that the current user is a registered user of the bank, the user's banking permissions are invoked and the service menu is entered for the corresponding business operations.
Optionally, as shown in Figure 5, the system further includes:
a dense point-cloud 3D model construction device, for constructing the first dense point-cloud 3D model of the biological features of the first object from the first sparse point-cloud 3D model;
a depth recognition device, for use when the comparison result contains multiple identity informations: it compares the first dense point-cloud 3D model one by one with the dense point-cloud 3D model samples corresponding to the comparison result pre-stored in the second sample database, and finds the dense point-cloud 3D model sample that matches the first dense point-cloud 3D model, so as to complete the depth comparison;
a depth-result output device, for outputting as the final result the identity information corresponding to the dense point-cloud 3D model sample that matches the first dense point-cloud 3D model.
That is, the preliminary identification device and the depth recognition device may preset matching rules. The preliminary comparison in the preliminary identification device can filter out the several identity informations that satisfy the matching rules, completing a primary identification over massive sparse point-cloud 3D model data and narrowing the comparison range to a few more similar identity informations; the depth comparison in the depth recognition device then performs an accurate comparison, and since each dense point-cloud 3D model on which the depth comparison is based contains 2,000,000 or more feature points, very high recognition accuracy can be reached.
By first performing the preliminary comparison on sparse point-cloud 3D model data to filter out several similar model samples, then retrieving the corresponding dense point-cloud 3D model data for the depth comparison, and finally locking onto the dense point-cloud 3D model data with the highest degree of match, whose corresponding identity information is exactly the identity information of the current first object, the identification of a person of unknown identity can be completed. In this way, recognition speed is improved on the one hand, and recognition accuracy on the other.
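A minimal sketch of this coarse-to-fine pipeline (the database layout, error measure and shortlist size are illustrative assumptions, not the patent's implementation):

```python
# Two-stage identification: sparse shortlist, then dense depth comparison.
import numpy as np
from scipy.spatial import cKDTree

def cloud_error(query, sample):
    """Registration error: RMS nearest-point distance from query to sample."""
    dist, _ = cKDTree(sample).query(query)
    return np.sqrt(np.mean(dist ** 2))

def identify(sparse_query, dense_query, db, shortlist_size=5):
    """db: list of records {'id', 'sparse_cloud', 'dense_cloud'} (assumed layout)."""
    # Stage 1: fast preliminary comparison over all sparse point clouds.
    shortlist = sorted(db, key=lambda r: cloud_error(sparse_query, r["sparse_cloud"]))
    shortlist = shortlist[:shortlist_size]
    # Stage 2: accurate depth comparison on the shortlisted dense clouds only.
    best = min(shortlist, key=lambda r: cloud_error(dense_query, r["dense_cloud"]))
    return best["id"]
```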
The preliminary identification device and the depth recognition device can be configured and used according to the design of application systems with different security levels.
The preliminary-result output device and the depth-result output device may output through a single device or through two devices; for example, one display may be used, or two.
Optionally, the system further includes:
a storage device, for storing the sparse point-cloud 3D model sample data in the first sample database and the dense point-cloud 3D model sample data in the second sample database.
Specifically, the configuration of the storage device can be chosen: for a closed system with a higher security level, the storage device can be configured locally, to guarantee network security and operating speed; for an openly managed system of ordinary security level, the storage device can be configured as a cloud server, which expands the range of application.
The target described in the present invention may be a physical object or a person, or a combination of multiple objects.
The 3D information of the object includes 3D images, 3D point clouds, 3D meshes, local 3D features, 3D dimensions, and all parameters carrying 3D features of the object.
So-called 3D, or three-dimensional, in the present invention refers to information in the three directions XYZ, in particular including depth information, and is essentially different from information with only a two-dimensional plane. It is also essentially different from definitions that are called 3D, panoramic, holographic or stereoscopic but actually contain only two-dimensional information and, in particular, no depth information.
The acquisition area described in the present invention refers to the range that an image acquisition/collection device (such as a camera) can capture.
The image acquisition/collection device in the present invention may be a CCD, CMOS, camera, video camera, industrial camera, monitor, webcam, mobile phone, tablet, notebook, mobile terminal, wearable device, smart glasses, smart watch, smart bracelet, or any device with an image acquisition function.
In the specification provided herein, numerous specific details are set forth. It will be appreciated, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure an understanding of this specification.
Similarly, it should be appreciated that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof in the foregoing description of exemplary embodiments of the invention. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into that detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules or units or components of an embodiment may be combined into one module or unit or component, and they may furthermore be divided into a plurality of sub-modules or sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include certain features included in other embodiments but not other features, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the visible-light-camera-based biometric four-dimensional data acquisition device according to embodiments of the invention. The invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for carrying out part or all of the method described herein. Such a program implementing the invention may be stored on a computer-readable medium, or may take the form of one or more signals; such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-described embodiments illustrate rather than limit the invention, and that those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices can be embodied by one and the same item of hardware. The use of the words first, second and third does not indicate any ordering; these words may be interpreted as names.
Thus far, those skilled in the art will appreciate that, although multiple exemplary embodiments of the invention have been shown and described in detail herein, many other variations or modifications consistent with the principles of the invention can still be directly determined or derived from the disclosure of the present invention without departing from its spirit and scope. Therefore, the scope of the present invention should be understood and deemed to cover all such other variations or modifications.
Claims (18)
1. A 3D dimension measurement method, characterized by comprising:
obtaining multiple images of a first object from different angles; constructing a first point-cloud 3D model of the first object from the multiple images;
comparing the first point-cloud 3D model one by one with the point-cloud 3D model samples, each bound to identity information, pre-stored in a first sample database; finding the point-cloud 3D model sample that matches the first point-cloud 3D model; and measuring the geometric gap between the first point-cloud 3D model and the point-cloud 3D model sample;
wherein at least three of the multiple images contain parts representing the same region of the object;
wherein the point clouds are sparse point clouds or dense point clouds;
and wherein the shooting positions of two adjacent images among the multiple images from different angles satisfy the following conditions:
H*(1-cos b) = L*sin2b;
a = m*b;
0 < m < 0.8;
where L is the distance from the image acquisition device to the object, H is the actual size of the object in the acquired image, a is the angle between the optical axes of the image acquisition device at the two adjacent positions, and m is a coefficient.
2. The method according to claim 1, characterized by comprising: outputting as a comparison result the identity information corresponding to the sparse point-cloud 3D model sample that matches the first sparse point-cloud 3D model.
3. The method according to claim 2, characterized in that:
a first dense point-cloud 3D model of the biological features of the first object is constructed;
if the comparison result contains multiple identity informations:
the first dense point-cloud 3D model is compared one by one with the dense point-cloud 3D model samples corresponding to the comparison result pre-stored in a second sample database, and the dense point-cloud 3D model sample that matches the first dense point-cloud 3D model is found, so as to complete a depth comparison;
the identity information corresponding to the dense point-cloud 3D model sample that matches the first dense point-cloud 3D model is output as a final result.
4. The method according to claim 3, characterized in that the dense point-cloud 3D model samples in the second sample database are obtained by the following steps:
constructing, from the sparse point-cloud 3D model sample, the dense point-cloud 3D model sample of the biological features of the target sample;
binding the identity information of the target sample to the dense point-cloud 3D model sample as a distinguishing mark, and storing it to form the second sample database.
5. The method according to claim 1, characterized in that the sparse point-cloud 3D model samples in the first sample database are obtained by the following steps:
obtaining multiple images of a target sample from different angles;
constructing the sparse point-cloud 3D model sample of the biological features of the target sample from the multiple images;
binding the identity information of the target sample to the sparse point-cloud 3D model sample as a distinguishing mark, and storing it to form the first sample database.
6. The method according to claim 1, characterized in that the multiple images from different angles are obtained in one of the following ways:
rotating an image acquisition device around a certain central axis;
or moving one or more image acquisition devices relative to multiple regions of the object respectively;
or autofocusing or zooming the image acquisition device while it moves relative to the object;
or translating the image acquisition device along the optical-axis direction while the object rotates.
7. The method according to any one of claims 2-4, characterized in that the comparison includes comparing the three-dimensional coordinates or gray values of each point of the 3D models, or the relationship of a point with its neighboring points.
8. The method according to any one of claims 1-6, characterized in that the preliminary comparison or the depth comparison is carried out using a temmoku point-cloud matching identification method, the temmoku point-cloud matching identification method comprising:
feature-point fitting;
overall best fit of the surface;
similarity calculation.
9. The method according to claim 8, characterized in that the temmoku point-cloud matching identification method comprises the following specific steps:
carrying out feature-point fitting using a spatial-domain direct matching method: in the corresponding rigid regions of the two point clouds, choosing three or more feature points as fitting key points, and carrying out feature-point correspondence matching directly through a coordinate transformation;
after the feature-point correspondence matching, aligning the point-cloud data by a best fit of the whole surface;
carrying out the similarity calculation using the least-squares method.
10. A rapid comparison method, characterized by comprising:
obtaining multiple images of a first object from different angles, at least three of the multiple images containing parts representing the same region of the object;
constructing a first sparse point-cloud 3D model of the first object from the multiple images;
comparing the first sparse point-cloud 3D model one by one with the sparse point-cloud 3D model samples, each bound to identity information, pre-stored in a first sample database, and finding the sparse point-cloud 3D model sample that matches the first sparse point-cloud 3D model, so as to complete the comparison;
wherein the shooting positions of two adjacent images among the multiple images from different angles satisfy the following conditions:
H*(1-cos b) = L*sin2b;
a = m*b;
0 < m < 0.8;
where L is the distance from the image acquisition device to the object, H is the actual size of the object in the acquired image, a is the angle between the optical axes of the image acquisition device at the two adjacent positions, and m is a coefficient.
11. The method according to claim 10, characterized by comprising: outputting as a comparison result the identity information corresponding to the sparse point-cloud 3D model sample that matches the first sparse point-cloud 3D model.
12. The method according to claim 11, characterized in that:
a first dense point-cloud 3D model of the biological features of the first object is constructed;
if the comparison result contains multiple identity informations:
the first dense point-cloud 3D model is compared one by one with the dense point-cloud 3D model samples corresponding to the comparison result pre-stored in a second sample database, and the dense point-cloud 3D model sample that matches the first dense point-cloud 3D model is found, so as to complete a depth comparison;
the identity information corresponding to the dense point-cloud 3D model sample that matches the first dense point-cloud 3D model is output as a final result.
13. The method according to claim 12, characterized in that the dense point-cloud 3D model samples in the second sample database are obtained by the following steps:
constructing, from the sparse point-cloud 3D model sample, the dense point-cloud 3D model sample of the biological features of the target sample;
binding the identity information of the target sample to the dense point-cloud 3D model sample as a distinguishing mark, and storing it to form the second sample database.
14. The method according to claim 10, characterized in that the sparse point-cloud 3D model samples in the first sample database are obtained by the following steps:
obtaining multiple images of a target sample from different angles;
constructing the sparse point-cloud 3D model sample of the biological features of the target sample from the multiple images;
binding the identity information of the target sample to the sparse point-cloud 3D model sample as a distinguishing mark, and storing it to form the first sample database.
15. The method according to claim 10, characterized in that the multiple images from different angles are obtained in one of the following ways:
rotating an image acquisition device around a certain central axis;
or moving one or more image acquisition devices relative to multiple regions of the object respectively;
or autofocusing or zooming the image acquisition device while it moves relative to the object;
or translating the image acquisition device along the optical-axis direction while the object rotates.
16. The method according to any one of claims 10-15, characterized in that the comparison includes comparing the three-dimensional coordinates or gray values of each point of the 3D models, or the relationship of a point with its neighboring points.
17. The method according to any one of claims 10-15, characterized in that the preliminary comparison or the depth comparison is carried out using a temmoku point-cloud matching identification method, the temmoku point-cloud matching identification method comprising:
feature-point fitting;
overall best fit of the surface;
similarity calculation.
18. The method according to claim 17, characterized in that the temmoku point-cloud matching identification method comprises the following specific steps:
carrying out feature-point fitting using a spatial-domain direct matching method: in the corresponding rigid regions of the two point clouds, choosing three or more feature points as fitting key points, and carrying out feature-point correspondence matching directly through a coordinate transformation;
after the feature-point correspondence matching, aligning the point-cloud data by a best fit of the whole surface;
carrying out the similarity calculation using the least-squares method.
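For illustration, the camera-position condition recited in claims 1 and 10 can be evaluated numerically. The sketch below reads "sin2b" as sin(2b), which is an interpretation of the claim text, and the numeric values are illustrative only:

```python
# Solve H*(1 - cos b) = L*sin(2b) for b by bisection, then a = m*b.
import math

def solve_b(L, H, lo=1e-6, hi=math.pi / 2, iters=60):
    f = lambda b: H * (1.0 - math.cos(b)) - L * math.sin(2.0 * b)
    for _ in range(iters):               # bisection: f(lo) < 0 < f(hi) here
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

L, H, m = 500.0, 200.0, 0.5              # same length units; 0 < m < 0.8
b = solve_b(L, H)
print(f"b = {math.degrees(b):.2f} deg, a = m*b = {math.degrees(m*b):.2f} deg")
```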
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910862183.0A CN110543871B (en) | 2018-09-05 | 2018-09-05 | Point cloud-based 3D comparison measurement method |
CN201811032876.9A CN109269405B (en) | 2018-09-05 | 2018-09-05 | A kind of quick 3D measurement and comparison method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811032876.9A CN109269405B (en) | 2018-09-05 | 2018-09-05 | A kind of quick 3D measurement and comparison method |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910862183.0A Division CN110543871B (en) | 2018-09-05 | 2018-09-05 | Point cloud-based 3D comparison measurement method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109269405A CN109269405A (en) | 2019-01-25 |
CN109269405B true CN109269405B (en) | 2019-10-22 |
Family
ID=65187253
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910862183.0A Active CN110543871B (en) | 2018-09-05 | 2018-09-05 | Point cloud-based 3D comparison measurement method |
CN201811032876.9A Active CN109269405B (en) | 2018-09-05 | 2018-09-05 | A kind of quick 3D measurement and comparison method |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910862183.0A Active CN110543871B (en) | 2018-09-05 | 2018-09-05 | Point cloud-based 3D comparison measurement method |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN110543871B (en) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109990703A (en) * | 2019-03-18 | 2019-07-09 | 桂林电子科技大学 | A kind of size detecting method and system of prefabricated components |
CN110188616B (en) * | 2019-05-05 | 2023-02-28 | 上海盎维信息技术有限公司 | Space modeling method and device based on 2D and 3D images |
EP3970121A4 (en) * | 2019-05-14 | 2023-01-18 | INTEL Corporation | Automatic point cloud validation for immersive media |
CN110189347B (en) * | 2019-05-15 | 2021-09-24 | 深圳市优博讯科技股份有限公司 | Method and terminal for measuring volume of object |
CN110213566B (en) * | 2019-05-20 | 2021-06-01 | 歌尔光学科技有限公司 | Image matching method, device, equipment and computer readable storage medium |
CN111028341B (en) * | 2019-12-12 | 2020-08-04 | 天目爱视(北京)科技有限公司 | Three-dimensional model generation method |
CN111060023B (en) * | 2019-12-12 | 2020-11-17 | 天目爱视(北京)科技有限公司 | High-precision 3D information acquisition equipment and method |
CN111325780B (en) * | 2020-02-17 | 2021-07-27 | 天目爱视(北京)科技有限公司 | 3D model rapid construction method based on image screening |
CN111208138B (en) * | 2020-02-28 | 2021-03-12 | 天目爱视(北京)科技有限公司 | Intelligent wood recognition device |
CN113066132B (en) * | 2020-03-16 | 2024-06-25 | 天目爱视(北京)科技有限公司 | 3D modeling calibration method based on multi-equipment acquisition |
WO2021195854A1 (en) | 2020-03-30 | 2021-10-07 | Shanghaitech University | Multi-view neural human rendering |
CN113532268B (en) * | 2020-04-20 | 2024-04-16 | 成都鼎桥通信技术有限公司 | Object measurement method, shooting terminal and storage medium |
CN111797268B (en) * | 2020-07-17 | 2023-12-26 | 中国海洋大学 | RGB-D image retrieval method |
US11703457B2 (en) * | 2020-12-29 | 2023-07-18 | Industrial Technology Research Institute | Structure diagnosis system and structure diagnosis method |
CN113251926B (en) * | 2021-06-04 | 2021-09-24 | 山东捷瑞数字科技股份有限公司 | Method and device for measuring size of irregular object |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11160034A (en) * | 1997-10-27 | 1999-06-18 | Je Baku Hii | Noncontact method for measuring three-dimensional micro structure using optical window |
CN107578434A (en) * | 2017-08-25 | 2018-01-12 | 上海嘉奥信息科技发展有限公司 | VR rendering intents and system based on 3D point cloud rapid registering |
CN107592449A (en) * | 2017-08-09 | 2018-01-16 | 广东欧珀移动通信有限公司 | Three-dimension modeling method, apparatus and mobile terminal |
CN107702662A (en) * | 2017-09-27 | 2018-02-16 | 深圳拎得清软件有限公司 | Reverse monitoring method and its system based on laser scanner and BIM |
CN108340405A (en) * | 2017-11-10 | 2018-07-31 | 广东康云多维视觉智能科技有限公司 | A kind of robot three-dimensional scanning system and method |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103927747B (en) * | 2014-04-03 | 2017-01-11 | 北京航空航天大学 | Face matching space registration method based on human face biological characteristics |
CN105184856A (en) * | 2015-09-02 | 2015-12-23 | 泰山学院 | Two-phase human skin three-dimensional reconstruction method based on density matching |
CN105931177B (en) * | 2016-04-14 | 2020-06-02 | 付常青 | Image acquisition processing device and method under specific environment |
CN107590827A (en) * | 2017-09-15 | 2018-01-16 | 重庆邮电大学 | A kind of indoor mobile robot vision SLAM methods based on Kinect |
CN107977997B (en) * | 2017-11-29 | 2020-01-17 | 北京航空航天大学 | Camera self-calibration method combined with laser radar three-dimensional point cloud data |
CN108446596A (en) * | 2018-02-14 | 2018-08-24 | 天目爱视(北京)科技有限公司 | Iris 3D 4 D datas acquisition system based on Visible Light Camera matrix and method |
CN108334873A (en) * | 2018-04-04 | 2018-07-27 | 天目爱视(北京)科技有限公司 | A kind of 3D four-dimension hand data discrimination apparatus |
Also Published As
Publication number | Publication date |
---|---|
CN109269405A (en) | 2019-01-25 |
CN110543871A (en) | 2019-12-06 |
CN110543871B (en) | 2022-01-04 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |