CN108520055A - A product detection and identification method based on temmoku point cloud comparison - Google Patents
- Publication number: CN108520055A (application CN201810302367.7A)
- Authority: CN (China)
- Prior art keywords: product, feature, product feature, data, information
- Prior art date
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K17/00—Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/018—Certifying business or products
- G06Q30/0185—Product, service or business identity fraud
Abstract
The present invention provides a product detection and identification method based on temmoku point cloud comparison, comprising: product feature information acquisition — several feature images of the product are captured by cameras from different viewing angles, and a four-dimensional model of the product feature is built from these images, thereby acquiring four-dimensional product-feature data; storage of product-feature four-dimensional data — the acquired data is stored with the product's unique information as an identifying key, forming a database of four-dimensional data for multiple product features; product detection and identification — the stored four-dimensional data is retrieved using the target product's unique information, and the product is detected and identified with the temmoku point cloud matching method. The temmoku point cloud matching method comprises the following steps: feature point fitting; overall surface best fit; similarity calculation. The invention improves the efficiency of product feature recognition, completely restores the spatial features of a product, and provides unlimited possibilities for applications such as product detection and identification.
Description
Technical field
The present invention relates to the technical field of product feature recognition, and in particular to a product detection and identification method based on temmoku point cloud comparison.
Background technology
Product features are the intrinsic characteristics of a product, such as its form and its material. Product features, texture in particular, possess a certain uniqueness and stability: the textural features of any two different products differ greatly, while a product's texture generally does not change much over time. This makes textural features very well suited to the field of detection and identification, especially the authentication of artworks such as antiques, calligraphy and paintings.
Current product detection uses 2D images, or 2D images aided by simple 3D spatial information, so identification carries a certain probability of error. This has given some criminals an opportunity to fool detection and identification systems, creating great risks to property. There is therefore an urgent need for multidimensional identification of product texture features to improve detection and identification accuracy.
Invention content
In view of the above problems, the present invention is proposed in order to provide a product detection and identification method based on temmoku point cloud comparison that overcomes the above problems or at least partly solves them.
A product detection and identification method based on temmoku point cloud comparison comprises the following steps:
S01. Product feature information acquisition:
Several feature images of the product are captured by cameras from different viewing angles, and a four-dimensional model of the product feature is built from these images, thereby acquiring four-dimensional product-feature data.
S02. Storage of product-feature four-dimensional data:
The product's unique information (I1, I2 ... In) is scanned or entered, and the acquired four-dimensional feature data is stored in association with that unique information as an identifying key, forming a database of four-dimensional data (D1, D2 ... Dn) for multiple product features.
S03. Product detection and identification:
The four-dimensional feature data (T1, T2 ... Tn) of the target product is acquired, the target product's unique information (I1, I2 ... In) is scanned or entered to look up the product-feature four-dimensional data (D1, D2 ... Dn) stored in the database, and the target's data (T1, T2 ... Tn) is compared with the corresponding stored data (D1, D2 ... Dn) to detect and identify the product.
Further, step S01 also includes:
capturing several feature images of the product from different viewing angles with cameras;
processing these feature images and extracting the feature points in each of them;
generating the feature point cloud data of the product feature from the extracted feature points;
building the four-dimensional model of the product feature from the feature point cloud data, thereby acquiring the four-dimensional product-feature data.
Further, the step of extracting the feature points in the several product feature images further comprises:
transmitting the several feature images to a processing unit equipped with a graphics processor (GPU) and a central processor (CPU); the image data of the several feature images is assigned to blocks of the GPU for computation and, combined with the CPU's centralized scheduling and dispatching functions, the feature points of each image are calculated.
Further, the step of generating the product-feature point cloud data from the extracted feature points further comprises:
matching the feature points according to their descriptors and establishing a matched feature point data set;
calculating, from the cameras' optical information, the relative spatial position of each camera with respect to the product feature at each viewing angle, and from these relative positions calculating the spatial depth information of the feature points in the several images at a given viewing angle;
generating the feature point cloud data of the product feature from the matched feature point data set and the spatial depth information of the feature points at different times.
The feature points in the several feature images are described with Scale-Invariant Feature Transform (SIFT) descriptors. From the cameras' optical information at different viewing angles, the relative spatial position of a camera with respect to the product feature at a given viewing angle is calculated by bundle adjustment. The spatial depth information of a feature point includes its spatial position information and its colour information.
Further, the step of building the four-dimensional model of the product feature from the feature point cloud data further comprises:
setting the reference scale of the four-dimensional model to be built;
determining, from the reference scale and the spatial position information in the feature point cloud data, the characteristic form of each feature point, and so building the four-dimensional model of the product feature. The four-dimensional model includes at least one of the following kinds of four-dimensional data:
spatial shape data describing the model at different viewing angles;
surface texture data describing the model at different viewing angles;
surface material and illumination data describing the model at different viewing angles.
Further, the product feature information is acquired with a camera matrix composed of multiple cameras.
Further, the cameras used may be visible-light cameras, infrared cameras, laser cameras, light-field cameras and/or grating cameras, or any combination thereof.
Further, the camera matrix may be laid out as follows:
building a support structure and mounting an arc-shaped bearing structure on it;
arranging multiple cameras on the arc-shaped bearing structure.
Further, a display is mounted on the arc-shaped bearing structure; after the four-dimensional model of the product feature is built, the four-dimensional data is shown visually on the display; and before the camera matrix formed by the multiple cameras acquires the product information, the shooting parameters of each visible-light camera are set through the display interface.
Further, when the target product is identified in step S03, the temmoku point cloud matching method is used to compare the target product's feature four-dimensional data (T1, T2 ... Tn) with the product-feature four-dimensional data (D1, D2 ... Dn) stored in the database. The temmoku point cloud matching method comprises the following steps:
S0301. feature point fitting;
S0302. overall surface best fit;
S0303. similarity calculation.
Further, the temmoku point cloud matching method comprises the following specific steps:
feature point fitting is performed by direct matching in the spatial domain: in corresponding rigid regions of the two point clouds, three or more feature points are chosen as fitting key points, and feature point correspondence matching is carried out directly through a coordinate transform;
after the feature points are matched, the point cloud data are aligned by an overall surface best fit;
the similarity is then calculated by the least squares method.
The beneficial effects of the present invention are as follows. A product detection and identification method based on temmoku point cloud comparison is provided, in which the product feature information is acquired by a camera matrix formed of several types of cameras, or of multiple visible-light cameras, so that several product feature images are obtained in a given time; the images are then processed and the feature points in each image are extracted; the feature point cloud data of the product feature is generated from the extracted feature points; and the four-dimensional model of the product feature is built from the feature point cloud data, thereby acquiring four-dimensional product-feature data. Acquiring the product feature information through multi-camera control in this way significantly improves acquisition efficiency; moreover, by using the acquired spatial feature information, the embodiments completely restore the product's spatial features, providing unlimited possibilities for subsequent uses of the product feature data. Parallel computation on a central processor and a graphics processor realises the processing of the feature information and the generation of the point cloud quickly and efficiently, and combining SIFT descriptors with the parallel computing power of a dedicated graphics processor makes feature matching and the generation of the spatial feature point cloud fast. In addition, a distinctive scale calibration method quickly and accurately extracts the spatial size and temporal size of any feature point of the product feature and generates the four-dimensional model, realising the acquisition and identification of four-dimensional data.
Description of the drawings
Various other advantages and benefits will become clear to those of ordinary skill in the art from the following detailed description of the preferred embodiments. The drawings serve only to illustrate the preferred embodiments and are not to be regarded as limiting the invention. Throughout the drawings, the same reference numbers refer to the same parts. In the drawings:
Fig. 1 shows the flow chart of the detection and identification method based on multidimensional product information data according to an embodiment of the invention;
Fig. 2 shows the flow chart of the multidimensional product information acquisition method according to an embodiment of the invention;
Fig. 3 shows the flow chart of the recognition method based on temmoku point cloud comparison according to an embodiment of the invention.
Detailed description of the embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be realised in various forms and should not be limited to the embodiments set forth here. On the contrary, these embodiments are provided so that the disclosure will be understood more thoroughly and its scope conveyed completely to those skilled in the art.
It should be noted that the four-dimensional data in the present invention refers to data formed by combining three-dimensional spatial data with a time dimension. Three dimensions plus time means: a data set formed of images or video taken at multiple identical or different time intervals, from different angles, in different orientations, or in different conditions. In other words, four-dimensional data can be a set of 3D data at multiple identical or different time intervals, from different angles, orientations and forms of expression.
To solve the above technical problems, an embodiment of the present invention provides a product detection and identification method based on temmoku point cloud comparison. Fig. 1 shows the flow of the detection and identification method based on multidimensional product information data according to an embodiment of the invention:
S01. Product feature information acquisition:
Several feature images of the product are captured by cameras from different viewing angles, and a four-dimensional model of the product feature is built from these images, thereby acquiring four-dimensional product-feature data.
S02. Storage of product-feature four-dimensional data:
The product's unique information (I1, I2 ... In) is scanned or entered, and the acquired four-dimensional feature data is stored in association with that unique information as an identifying key, forming a database of four-dimensional data (D1, D2 ... Dn) for multiple product features.
S03. Product detection and identification:
The four-dimensional feature data (T1, T2 ... Tn) of the target product is acquired, the target product's unique information (I1, I2 ... In) is scanned or entered to look up the product-feature four-dimensional data (D1, D2 ... Dn) stored in the database, and the target's data (T1, T2 ... Tn) is compared with the corresponding stored data (D1, D2 ... Dn) to detect and identify the product.
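The keyed storage and lookup of steps S02 and S03 can be sketched as a simple in-memory store; class and method names here are illustrative assumptions, not the patent's interface, and a mean-squared deviation stands in for the full temmoku comparison:

```python
import numpy as np

class FeatureDatabase:
    """Stores 4D feature data keyed by a product's unique information (e.g. a QR code string)."""
    def __init__(self):
        self._store = {}

    def register(self, unique_info, feature_data):
        # S02: associate the acquired 4D feature data with the product's unique key.
        self._store[unique_info] = np.asarray(feature_data, dtype=float)

    def verify(self, unique_info, target_data, tol=1e-2):
        # S03: retrieve the stored data by key and compare it with the target's data.
        stored = self._store.get(unique_info)
        if stored is None:
            return False  # no record for this product
        target = np.asarray(target_data, dtype=float)
        if stored.shape != target.shape:
            return False
        # mean squared deviation as a simple stand-in similarity measure
        return float(np.mean((stored - target) ** 2)) < tol

db = FeatureDatabase()
db.register("QR-0001", [[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]])
print(db.verify("QR-0001", [[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]]))  # True
print(db.verify("QR-0001", [[9.0, 9.0, 9.0], [9.0, 9.0, 9.0]]))  # False
```

The key lookup avoids comparing the target against every record, which is exactly the efficiency argument made for step S03 below.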
Preferably, as shown in Fig. 2, the acquisition of product feature information in step S01 may specifically include the following steps S102 to S108.
Step S102: several feature images of the product are captured from different viewing angles by multiple cameras; preferably, the product feature information is acquired by a camera matrix composed of the multiple cameras.
Step S104: the captured feature images are processed and the feature points in each image are extracted.
Step S106: the feature point cloud data of the product feature is generated from the extracted feature points.
Step S108: the four-dimensional model of the product feature is built from the feature point cloud data, thereby acquiring the four-dimensional product-feature data.
This embodiment acquires the product feature information through multi-camera control, which significantly improves acquisition efficiency; moreover, by using the acquired spatial feature information it completely restores the product's spatial features, providing unlimited possibilities for subsequent uses of the feature data.
In another embodiment of the invention, a single camera can be used: the camera moves around the product along a planned track, shooting from multiple angles to obtain the several product feature images.
Preferably, the several feature images acquired above are transmitted to a processing unit with a graphics processor (GPU) and a central processor (CPU); the image data of the several feature images is assigned to GPU blocks for computation and, combined with the CPU's centralized scheduling and dispatching functions, the feature points of each image are calculated. Acquiring the product feature information through multi-camera control thus significantly improves acquisition efficiency, and the parallel computation of the CPU and GPU realises the processing of the feature information efficiently.
Preferably, the GPU is a dual GPU and each GPU has multiple blocks, for example 56 blocks; the embodiment of the invention is not restricted in this respect.
In an optional embodiment of the invention, generating the product-feature point cloud data from the extracted feature points in step S106 may specifically include the following steps S1061 to S1063.
Step S1061: the feature points are matched according to their descriptors, and a matched feature point data set is established.
Step S1062: from the optical information of the multiple cameras, the relative spatial position of each camera with respect to the product feature is calculated, and from these relative positions the spatial depth information of the feature points in the several images is calculated.
Step S1063: the feature point cloud data of the product feature is generated from the matched feature point data set and the spatial depth information of the feature points.
In step S1061, the feature points in the several product feature images may be described with SIFT (Scale-Invariant Feature Transform) descriptors. A SIFT descriptor is a 128-dimensional feature vector that characterises a feature point over orientation and scale, which significantly improves the precision of the feature description; the descriptor is also spatially independent.
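The descriptor matching of step S1061 can be sketched as brute-force nearest-neighbour search over 128-dimensional descriptors with a ratio test; this is an illustrative sketch in which random vectors stand in for real SIFT output, not the patent's implementation:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Brute-force matching of 128-dim descriptors with a nearest-neighbour ratio test.

    Returns index pairs (i, j): descriptor i in desc_a matches descriptor j in desc_b.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distance to every candidate
        j1, j2 = np.argsort(dists)[:2]              # two nearest candidates
        if dists[j1] < ratio * dists[j2]:           # accept only unambiguous matches
            matches.append((i, int(j1)))
    return matches

rng = np.random.default_rng(0)
base = rng.random((10, 128))                     # stand-in for SIFT descriptors of image A
noisy = base + rng.normal(0, 0.01, base.shape)   # image B: same points, slight noise
print(match_descriptors(base, noisy))            # each point matches its counterpart
```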
For step S1062, calculating each camera's spatial position relative to the product from the optical information of the multiple visible-light cameras, an embodiment of the invention provides an optional scheme: the relative position of each camera with respect to the product feature at a given moment can be calculated from the cameras' optical information by bundle adjustment.
By the definition of bundle adjustment, suppose that at some moment a point in 3D space is seen by multiple cameras at different positions; bundle adjustment is the process of extracting, from this multi-view information, the coordinates of that 3D point at that moment together with the relative position and optical information of each camera.
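Full bundle adjustment jointly refines camera poses and point coordinates; a minimal building block of the idea — recovering one 3D point from its projections in two cameras with known projection matrices — can be sketched with linear (DLT) triangulation. The camera setup below is a synthetic assumption for illustration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its projections
    x1, x2 (image coordinates) under two 3x4 camera projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]            # null vector of A = homogeneous 3D point
    return X[:3] / X[3]   # de-homogenise

# Two synthetic cameras: identity pose, and a camera translated along X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # recovers [0.5, 0.2, 4.0]
```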
Further, the spatial depth information of the feature points mentioned in step S1062 may include spatial position information and colour information, that is: the feature point's X, Y and Z coordinates in space, and the values of the R, G, B and Alpha channels of its colour information. The generated feature point cloud data thus contains the spatial position and colour information of each feature point, and may have the following format:
X1 Y1 Z1 R1 G1 B1 A1
X2 Y2 Z2 R2 G2 B2 A2
……
Xn Yn Zn Rn Gn Bn An
where Xn, Yn and Zn are the feature point's X, Y and Z coordinates in space, and Rn, Gn, Bn and An are the values of the R, G, B and Alpha channels of the feature point's colour information.
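The XYZRGBA record layout shown above can be read into an array as follows (a minimal sketch; one whitespace-separated record per line is assumed from the format shown):

```python
import numpy as np

def parse_point_cloud(text):
    """Parse 'X Y Z R G B A' records (one feature point per line) into an (N, 7) array:
    columns 0-2 are the spatial position, 3-5 the RGB colour channels, 6 the alpha channel."""
    rows = [line.split() for line in text.strip().splitlines() if line.strip()]
    return np.array(rows, dtype=float)

cloud = parse_point_cloud("""
0.10 0.20 0.30 255 128 0 255
0.40 0.50 0.60 10 20 30 255
""")
print(cloud.shape)   # (2, 7)
print(cloud[0, :3])  # spatial position of the first feature point
```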
In the embodiments of the invention, viewing-angle information is added to the 3D product feature to constitute a four-dimensional product feature, completely restoring the product's features and providing unlimited possibilities for subsequent uses of the product feature data.
In an optional embodiment of the invention, building the four-dimensional model from the feature point cloud data in step S108 may specifically be: setting the reference scale of the model to be built, and then determining, from the reference scale and the spatial position information of the feature point cloud data, the spatial size and temporal size of each feature point in the feature point cloud data, so building the four-dimensional model of the product feature.
The four-dimensional model built may include four-dimensional data such as spatial shape data describing the model at different times, surface texture data describing the model at different times, and surface material and illumination data describing the model at different viewing angles; the embodiment of the invention is not restricted in this respect.
In an optional embodiment of the invention, before the camera matrix composed of multiple cameras acquires the product feature information in step S102, the multiple cameras may be laid out; the layout method may include the following steps S202 to S204.
Step S202: build a support structure and mount an arc-shaped bearing structure on it.
Step S204: arrange the multiple cameras on the arc-shaped bearing structure.
Acquiring the product feature information through multi-camera control thus significantly improves acquisition efficiency; the multiple cameras arranged on the arc-shaped bearing structure form the camera matrix.
In an alternative embodiment, a display may also be mounted on the arc-shaped bearing structure; after the four-dimensional model of the product is built, the four-dimensional product data is shown visually on the display.
In an alternative embodiment, before the camera matrix formed by the multiple cameras acquires the product feature information, the shooting parameters of each camera, such as sensitivity, shutter speed, zoom magnification and aperture, may be set through the display interface; the embodiment of the invention is not limited to these.
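The per-camera shooting parameters mentioned here (sensitivity, shutter speed, zoom magnification, aperture) could be modelled as a simple settings record; the class and field names are illustrative assumptions, not the patent's interface:

```python
from dataclasses import dataclass

@dataclass
class CameraSettings:
    """Shooting parameters set through the display interface before acquisition."""
    iso: int = 100            # sensitivity
    shutter_s: float = 1 / 125  # shutter speed in seconds
    zoom: float = 1.0         # zoom magnification
    aperture: float = 5.6     # f-number

# One settings record per camera in the matrix; here all cameras share one preset.
matrix_settings = [CameraSettings(iso=200, shutter_s=1 / 250) for _ in range(8)]
print(matrix_settings[0])
```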
Preferably, in step S02, the product-feature four-dimensional data acquired in step S01 is stored, with the product's unique information (I1, I2 ... In) as the identifying key, to form a database of product-feature four-dimensional data (D1, D2 ... Dn). For example: four-dimensional data D1 is stored in association with that product's unique information I1, the four-dimensional data D2 of another product is stored in association with its unique information I2, and so on, forming a database of the four-dimensional data of n products.
Preferably, when the target product is identified in step S03, the temmoku point cloud matching method is used to compare the feature four-dimensional data (T1, T2 ... Tn) of the target product (the product to be identified) with the product-feature four-dimensional data (D1, D2 ... Dn) stored in the database, so as to identify the target product. First, by entering the target product's unique information, for example the product's QR code, the four-dimensional data (D1, D2 ... Dn) associated with that information can be found quickly in the database. This avoids comparing the target product's data against the mass of data in the database record by record, improves matching efficiency, and greatly increases the speed of product detection and identification. The currently acquired four-dimensional data (T1, T2 ... Tn) of the product is then compared with the four-dimensional data retrieved from the database to decide whether the product is genuine, thereby realising product detection and identification. Specifically, the temmoku point cloud matching method comprises the following steps:
S0301. feature point fitting;
S0302. overall surface best fit;
S0303. similarity calculation.
Preferably, the temmoku point cloud matching method further includes the following specific steps:
feature point fitting is performed by direct matching in the spatial domain: in corresponding rigid regions of the two point clouds, three or more feature points are chosen as fitting key points, and feature point correspondence matching is carried out directly through a coordinate transform;
after the feature points are matched, the point cloud data are aligned by an overall surface best fit;
the similarity is then calculated by the least squares method.
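The coordinate transform of the first step — aligning two clouds from three or more corresponding key points — can be sketched with the SVD-based rigid fit (the Kabsch method); this is one standard way to realise such a step, not necessarily the patent's exact procedure:

```python
import numpy as np

def rigid_fit(src, dst):
    """Best rigid transform (R, t) mapping src points onto dst points
    in the least-squares sense, via SVD; needs three or more correspondences."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)  # centroids
    H = (src - cs).T @ (dst - cd)                # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

# Four fitting key points, rotated 90 degrees about Z and shifted.
src = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [0, 0, 0]])
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1.0]])
dst = src @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_fit(src, dst)
print(np.allclose(src @ R.T + t, dst))  # True: the transform aligns the clouds
```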
The recognition process and working principle of the temmoku point cloud matching method (Yare Eyes point cloud match recognition method) are as follows. A point cloud at a given moment is the basic element of the four-dimensional model; it contains spatial coordinate information (XYZ) and colour information (RGB). The attributes of a point cloud include spatial resolution, positional accuracy, surface normals, and so on. Its features are not affected by external conditions and do not change under translation or rotation. Reverse-engineering software, such as Imageware, Geomagic, CATIA, CopyCAD and Rapidform, can edit and process point clouds. The direct spatial-domain matching methods characteristic of temmoku point cloud comparison include the iterative closest point method (ICP, Iterative Closest Point). ICP generally proceeds in two steps: first feature point fitting, then overall surface best fit. The purpose of first fitting and aligning the feature points is to find and align the two point clouds to be compared in the shortest time; but the method is not limited to this. For example, it can be:
First step: in corresponding rigid regions of the two point clouds, choose three or more feature points as fitting key points and carry out feature point correspondence matching directly through a coordinate transform.
ICP is a very effective tool in 3D data reconstruction for registering curves or surface patches: given a rough initial alignment of two 3D models at some moment, ICP iteratively seeks the rigid transformation between them that minimises the alignment error, registering their spatial geometric relationship.
Given sets P1 = {p_i} and P2 = {q_j}, whose elements represent coordinate points on two model surfaces, the ICP registration technique iteratively finds the nearest corresponding points, establishes a transformation matrix, and applies the transformation to one of the sets, until some convergence condition is reached and the iteration stops. The algorithm is as follows:
1.1 ICP algorithm
Input: P1, P2.
Output: P2 after transformation.
P2(0) = P2, l = 0;
Do
  For each point in P2(l)
    find the nearest point y in P1;
  End For
  Compute the registration error E;
  If E is greater than a certain threshold
    Compute the transformation matrix T(l) between P2(l) and Y(l);
    P2(l+1) = T(l)·P2(l), l = l + 1;
  Else
    Stop;
  End If
While ||P2(l+1) − P2(l)|| > threshold;
where the registration error E = (1/N) Σ_{i=1..N} ||y_i − p_i(l)||², with p_i(l) the i-th point of P2(l) and y_i its nearest point in P1.
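The loop above can be sketched as a minimal point-to-point ICP with brute-force nearest-neighbour search and an SVD rigid-fit step; the grid data is a synthetic assumption for illustration, and this is a sketch of the standard algorithm, not the patent's implementation:

```python
import numpy as np

def icp(P1, P2, max_iter=50, tol=1e-8):
    """Minimal point-to-point ICP: match each point of P2 to its nearest
    neighbour in P1, fit the best rigid transform, apply it, and repeat
    until the registration error E stops improving."""
    P1 = np.asarray(P1, float)
    P2 = np.asarray(P2, float).copy()
    prev_err = np.inf
    for _ in range(max_iter):
        # nearest point y_i in P1 for every point of P2 (brute force)
        d = np.linalg.norm(P2[:, None, :] - P1[None, :, :], axis=2)
        Y = P1[np.argmin(d, axis=1)]
        err = np.mean(np.sum((Y - P2) ** 2, axis=1))  # E = (1/N) sum ||y_i - p_i||^2
        if prev_err - err < tol:
            break
        prev_err = err
        # best rigid transform P2 -> Y (SVD step)
        cp, cy = P2.mean(axis=0), Y.mean(axis=0)
        U, _, Vt = np.linalg.svd((P2 - cp).T @ (Y - cy))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:  # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        P2 = (P2 - cp) @ R.T + cy
    return P2, err

# A 3x3x3 grid of points, and the same grid slightly shifted.
g = np.arange(3.0)
P1 = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T
P2 = P1 + np.array([0.2, -0.1, 0.15])
aligned, err = icp(P1, P2)
print(err < 1e-10)  # True: the shifted cloud snaps back onto the original
```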
1.2 Matching based on local feature points.
Requirements on the product feature points:
1) completeness: they contain as much of the object's information as possible, distinguishing it from objects of other classes;
2) compactness: the amount of data needed to express them is as small as possible;
3) invariance: the features preferably remain unchanged under rotation, translation and mirror transformation of the model.
In 3D product feature recognition, the two 3D product-feature model point clouds are aligned and the similarity of the input model is computed, with the registration error as the measure of difference.
The second step: after the feature points are best fitted, the point cloud data are aligned through the overall best fit of the surfaces.
The third step: similarity calculation, using the least squares method.
The least squares method is a mathematical optimization technique: it finds the optimal function match for the data by minimizing the sum of squared errors. With the least squares method, unknown data can be obtained easily, such that the sum of squared errors between the obtained data and the real data is minimal. The least squares method can also be used for curve fitting; some other optimization problems, expressed by minimizing energy or maximizing entropy, can likewise be solved with it. It is commonly used to solve curve-fitting problems and, building on these, the complete fitting of a surface. Iterative algorithms can accelerate the convergence of the data and quickly obtain the optimal solution.
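As a concrete illustration of the least-squares principle described above, the sketch below fits a quadratic curve to sample points with NumPy's least-squares solver; the data values are invented for demonstration.

```python
import numpy as np

# Sample points taken from y = 2x^2 - 3x + 1 (invented demonstration data)
x = np.linspace(-2, 2, 9)
y = 2 * x**2 - 3 * x + 1

# Design matrix for the quadratic model y = a*x^2 + b*x + c
A = np.column_stack([x**2, x, np.ones_like(x)])

# Least squares: coefficients minimizing the sum of squared errors ||A c - y||^2
coeffs, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coeffs, 6))   # should be close to [2, -3, 1]
```

The same normal-equation machinery generalizes from curves to surface patches by adding basis terms in a second variable.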
If the 3D data model at a certain moment is input in the STL file format, its deviation is determined by calculating the distance from the point cloud to the triangular patches. This method therefore requires establishing a plane equation for each triangular patch, the deviation being the point-to-plane distance. If the 3D data model at a certain moment is an IGES or STEP model, since its free-form surfaces are expressed as NURBS surfaces, the point-to-surface distance must be calculated by numerical optimization: either the deviation is expressed by iteratively calculating the minimum distance from each point of the point cloud to the NURBS surface, or the NURBS surface is discretized at a specified scale and the deviation is approximated by point-to-point distances, or the model is converted to STL format for the deviation calculation. Different coordinate-alignment and deviation-calculation methods yield different detection results, and the magnitude of the alignment error directly affects the detection accuracy and the credibility of the assessment report.
Best-fit alignment spreads the detection error over the whole: the iterative alignment calculation is terminated once the overall deviation is guaranteed to be minimal. A 3D analysis is then performed on the registration result, and the result is output in the form of the root mean square of the error between the two figures; the larger the root mean square, the larger the difference between the two models at that location, and vice versa. Whether the object is the compared subject is judged from the comparison registration ratio.
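The root-mean-square output described above can be sketched as follows: after best-fit alignment, the RMS of the point-wise deviations between the two models is computed and compared against a threshold to reach a match decision. The threshold value here is an invented example, not taken from the patent.

```python
import numpy as np

def rms_deviation(model_a, model_b):
    """RMS of point-wise deviations between two aligned point sets of equal size."""
    d = np.linalg.norm(model_a - model_b, axis=1)   # per-point deviation
    return np.sqrt(np.mean(d ** 2))

# A reference model and a scan of it with small measurement noise (invented data)
rng = np.random.default_rng(1)
reference = rng.random((100, 3))
scanned = reference + rng.normal(scale=0.001, size=reference.shape)

rms = rms_deviation(reference, scanned)
THRESHOLD = 0.01          # invented acceptance threshold
is_match = rms < THRESHOLD
print(is_match)
```

In practice the two clouds rarely have one-to-one points; the deviation would instead be measured from each scanned point to its nearest point, patch, or NURBS surface on the reference, as discussed above.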
An embodiment of the present invention provides a product testing and identification method based on temmoku point cloud comparison. Specifically, a camera matrix composed of multiple types of cameras, or of multiple visible-light cameras, collects the product feature information and obtains several product feature images within a given time; the product feature images are then processed to extract the respective feature points in each image; next, based on the extracted feature points, the feature point cloud data of the product feature are generated; finally, a four-dimensional model of the product feature is built from the feature point cloud data, thereby acquiring the four-dimensional product feature data. It can thus be seen that the embodiment of the present invention collects product feature information using multi-camera visible-light control technology, which can significantly improve the collection efficiency of product feature information; moreover, by using the collected spatial feature information of the product feature, the embodiment completely restores the various spatial characteristics of the product feature and provides unlimited possibilities for subsequent applications of the product feature data.
Further, based on the parallel computation of the central processing unit and the graphics processor, the embodiment of the present invention can realize the processing of the feature information and the generation of the point cloud rapidly and efficiently. Moreover, by combining the scale-invariant feature transform (SIFT) feature descriptor with the parallel computing capability of a dedicated graphics processor, the matching of feature points and the generation of the spatial feature point cloud can be performed quickly. In addition, by using a unique sizing calibration method, the spatial and time dimensions of any feature point of the product feature volume can be extracted quickly and accurately, and the four-dimensional model of the product feature is generated, thereby acquiring the four-dimensional data.
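The patent does not specify a storage layout for the four-dimensional data. Purely as an illustration, one record per feature point might combine spatial position, color, and a time stamp as the fourth dimension; all field names below are assumptions for demonstration, not taken from the patent.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class FeaturePoint4D:
    """One feature point of a four-dimensional product model (illustrative layout only)."""
    position: Tuple[float, float, float]  # spatial position information
    color: Tuple[int, int, int]           # color information (RGB)
    timestamp: float                      # time dimension, in seconds

# A miniature four-dimensional model: the same feature point observed at two times
model = [
    FeaturePoint4D((0.10, 0.20, 0.30), (255, 0, 0), 0.0),
    FeaturePoint4D((0.11, 0.20, 0.30), (255, 0, 0), 0.5),
]
print(len(model))
```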
In the description provided here, numerous specific details are set forth. It is to be understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof. However, this method of disclosure is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into that detailed description, with each claim standing on its own as a separate embodiment of this invention.
In addition, those skilled in the art will appreciate that, although some embodiments described herein include certain features that are included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the claims, any one of the claimed embodiments can be used in any combination.
It should be noted that the above embodiments illustrate rather than limit the invention, and those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The use of the words first, second and third does not indicate any ordering; these words may be interpreted as names.
Thus far, those skilled in the art will appreciate that, although multiple exemplary embodiments of the invention have been shown and described in detail herein, many other variations or modifications consistent with the principles of the invention can still be determined or derived directly from the disclosure of the invention without departing from its spirit and scope. Therefore, the scope of the invention should be understood and deemed to cover all such other variations or modifications.
Claims (10)
1. A product testing and identification method based on temmoku point cloud comparison, characterized by comprising the following steps:
S01. product feature information collection:
collecting several product feature images of the product under different viewing angles by cameras, and building a four-dimensional model of the product feature from the several product feature images, so as to realize the acquisition of four-dimensional product feature data;
S02. storage of the four-dimensional product feature data:
scanning or entering the peculiar information (I1, I2 ... In) of the product, associating the collected four-dimensional product feature data with the peculiar information (I1, I2 ... In) of the product as an identification mark and storing them, thus forming a database containing the four-dimensional data (D1, D2 ... Dn) of a plurality of product features;
S03. product testing and identification:
collecting the four-dimensional feature data (T1, T2 ... Tn) of a target product, scanning or entering the peculiar information (I1, I2 ... In) of said target product to find the four-dimensional product feature data (D1, D2 ... Dn) stored in the database, and comparing the four-dimensional feature data (T1, T2 ... Tn) of the target product with the corresponding four-dimensional product feature data (D1, D2 ... Dn) stored in the database, so as to test and appraise the product.
2. The method according to claim 1, characterized in that step S01 further comprises:
collecting several product feature images of the product under different viewing angles by cameras;
processing said several product feature images to extract the respective feature points in said several product feature images;
generating the feature point cloud data of the product feature based on the extracted respective feature points in said several product feature images;
building the four-dimensional model of the product feature from the feature point cloud data, so as to acquire the four-dimensional product feature data.
3. The method according to claim 2, characterized in that the step of extracting the respective feature points in said several product feature images further comprises:
transmitting said several product feature images to a processing unit having a graphics processor GPU and a central processor CPU; distributing the image information of said several product feature images to the blocks of the GPU for computation, and calculating the respective feature points of said several product feature images in combination with the centralized scheduling and allocation functions of the CPU.
4. The method according to claim 2, characterized in that
the step of generating the feature point cloud data of the product feature based on the extracted respective feature points in said several product feature images further comprises:
matching the feature points according to the characteristics of the extracted respective feature points in said several product feature images, and establishing a matched feature point data set;
calculating, according to the optical information of the cameras, the relative spatial positions of the cameras with respect to the product feature under different viewing angles, and calculating, according to said relative positions, the spatial depth information of the feature points in said several product feature images at a certain viewing angle;
generating the feature point cloud data of the product feature according to the matched feature point data set and the spatial depth information of the feature points at different times;
the characteristics of the respective feature points in said several product feature images are described using scale-invariant feature transform SIFT feature descriptors;
according to the optical information of the cameras under different viewing angles, the relative spatial position of a camera with respect to the product feature at a certain viewing angle is calculated using bundle adjustment; the spatial depth information of the feature points in said several product feature images comprises spatial position information and color information.
5. The method according to claim 2, characterized in that the step of building the four-dimensional model of the product feature from the feature point cloud data further comprises:
setting the reference dimensions of the four-dimensional model of the product feature to be built;
determining, according to the reference dimensions and the spatial position information of the feature point cloud data, the characteristic manner of each feature point in the feature point cloud data, so as to build the four-dimensional model of the product feature; the four-dimensional model of the product feature comprises at least one of the following kinds of four-dimensional data:
data describing the spatial shape characteristics of the four-dimensional model at different viewing angles;
data describing the surface texture characteristics of the four-dimensional model at different viewing angles;
data describing the surface material and lighting characteristics of the four-dimensional model at different viewing angles.
6. The method according to claim 2, characterized in that the product feature information is collected using a camera matrix composed of cameras.
7. The method according to claim 6, characterized in that the cameras used may be one of, or any combination of, visible-light cameras, infrared cameras, laser cameras, light-field cameras and/or grating cameras.
8. The method according to claim 6, characterized in that the camera matrix may be laid out in the following manner:
building a support structure, and arranging an arc-shaped bearing structure on the support structure;
arranging a plurality of cameras on the arc-shaped bearing structure.
9. The method according to claim 8, characterized in that
a display is arranged on the arc-shaped bearing structure;
after the four-dimensional model of the product feature is built, the four-dimensional data are shown on the display by visual means;
before the camera matrix composed of the plurality of cameras collects the product information, the photographing parameters of each visible-light camera are set through the display interface.
10. The method according to claim 1, characterized in that, when the target product is identified in step S03, the temmoku point cloud matching identification method is used to compare the four-dimensional feature data (T1, T2 ... Tn) of the target product with the four-dimensional product feature data (D1, D2 ... Dn) stored in the database; the temmoku point cloud matching identification method comprises the following steps:
S0301. feature point fitting;
S0302. overall best fit of the surfaces;
S0303. similarity calculation;
the temmoku point cloud matching identification method comprises the following specific steps:
the feature point fitting is carried out using a direct matching method in the spatial domain: in the corresponding rigid regions of the two point clouds, three or more feature points are chosen as fitting key points, and feature-point correspondence matching is carried out directly through a coordinate transformation;
after the feature-point correspondence matching, the point cloud data are aligned through the overall best fit of the surfaces;
the similarity calculation is carried out using the least squares method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810302367.7A CN108520055A (en) | 2018-04-04 | 2018-04-04 | A kind of product testing identification method compared based on temmoku point cloud |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108520055A true CN108520055A (en) | 2018-09-11 |
Family
ID=63432167
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810302367.7A Withdrawn CN108520055A (en) | 2018-04-04 | 2018-04-04 | A kind of product testing identification method compared based on temmoku point cloud |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108520055A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101388115A (en) * | 2008-10-24 | 2009-03-18 | 北京航空航天大学 | Depth image autoegistration method combined with texture information |
CN102447934A (en) * | 2011-11-02 | 2012-05-09 | 吉林大学 | Synthetic method of stereoscopic elements in combined stereoscopic image system collected by sparse lens |
CN103021017A (en) * | 2012-12-04 | 2013-04-03 | 上海交通大学 | Three-dimensional scene rebuilding method based on GPU acceleration |
CN103955963A (en) * | 2014-04-30 | 2014-07-30 | 崔岩 | Digital human body three-dimensional reconstruction method and system based on Kinect device |
CN105046807A (en) * | 2015-07-09 | 2015-11-11 | 中山大学 | Smart mobile phone-based counterfeit banknote identification method and system |
CN105513119A (en) * | 2015-12-10 | 2016-04-20 | 北京恒华伟业科技股份有限公司 | Road and bridge three-dimensional reconstruction method and apparatus based on unmanned aerial vehicle |
CN105654048A (en) * | 2015-12-30 | 2016-06-08 | 四川川大智胜软件股份有限公司 | Multi-visual-angle face comparison method |
US20170039436A1 (en) * | 2015-08-03 | 2017-02-09 | Nokia Technologies Oy | Fusion of RGB Images and Lidar Data for Lane Classification |
CN106548161A (en) * | 2016-11-23 | 2017-03-29 | 上海成业智能科技股份有限公司 | The collection of face recognition features' code and knowledge method for distinguishing under the conditions of disturbing for outdoor or light |
CN107748890A (en) * | 2017-09-11 | 2018-03-02 | 汕头大学 | A kind of visual grasping method, apparatus and its readable storage medium storing program for executing based on depth image |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110009674B (en) | Monocular image depth of field real-time calculation method based on unsupervised depth learning | |
Barazzetti et al. | True-orthophoto generation from UAV images: Implementation of a combined photogrammetric and computer vision approach | |
US9959625B2 (en) | Method for fast camera pose refinement for wide area motion imagery | |
Hanning | High precision camera calibration | |
GB2506411A (en) | Determination of position from images and associated camera positions | |
CN102472609A (en) | Position and orientation calibration method and apparatus | |
CN110807828B (en) | Oblique photography three-dimensional reconstruction matching method | |
CN104019799A (en) | Relative orientation method by using optimization of local parameter to calculate basis matrix | |
US10432915B2 (en) | Systems, methods, and devices for generating three-dimensional models | |
CN108416312B (en) | A kind of biological characteristic 3D data identification method taken pictures based on visible light | |
CN115345822A (en) | Automatic three-dimensional detection method for surface structure light of aviation complex part | |
Jin et al. | An indoor location-based positioning system using stereo vision with the drone camera | |
CN108520230A (en) | A kind of 3D four-dimension hand images data identification method and equipment | |
Yılmaztürk | Full-automatic self-calibration of color digital cameras using color targets | |
CN111739103A (en) | Multi-camera calibration system based on single-point calibration object | |
CN108898629B (en) | Projection coding method for enhancing aerial luggage surface texture in three-dimensional modeling | |
Knyaz et al. | Joint geometric calibration of color and thermal cameras for synchronized multimodal dataset creating | |
CN110796699B (en) | Optimal view angle selection method and three-dimensional human skeleton detection method for multi-view camera system | |
Ivanov et al. | Estimation of the height and angles of orientation of the upper leaves in the maize canopy using stereovision | |
Terpstra et al. | Accuracies in Single Image Camera Matching Photogrammetry | |
CN108520055A (en) | A kind of product testing identification method compared based on temmoku point cloud | |
CN108334873A (en) | A kind of 3D four-dimension hand data discrimination apparatus | |
Previtali et al. | Multi-step and multi-photo matching for accurate 3D reconstruction | |
Budianti et al. | Background blurring and removal for 3d modelling of cultural heritage objects | |
Knyaz et al. | Approach to Accurate Photorealistic Model Generation for Complex 3D Objects |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | Application publication date: 20180911 |