CN116091559A - Assembly state identification method based on optimal viewing angle - Google Patents
- Publication number
- CN116091559A (application number CN202211532142.3A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- contour
- digital model
- assembly
- assembly state
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
- G06T2207/10028—Range image; Depth image; 3D point clouds
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention provides an assembly state identification method based on a preferred viewing angle, aiming at the problems that existing product assembly state identification methods are prone to identification errors and misjudgment and are computationally expensive. First, a circumscribed regular polyhedron is established for the digital model of each assembly state of the product, and point cloud data of the digital model under different viewing angles are acquired through the vertices and face centers of the polyhedron; shape vector descriptors are constructed for the adjacent-sequence digital model point clouds under the different viewing angles, cosine distance measures are calculated, and preferred viewing angles are selected for the different adjacent sequences. Then, a layered projection contour descriptor is constructed for both the point cloud of the product being assembled and the digital model point cloud, and local-to-global registration of the product contour and the digital model contour is carried out. Finally, according to the registration relation, the sampling viewpoint is converted to the preferred viewpoint and the product is resampled; the differences in section-layer contour coincidence between the product point cloud and the digital model point cloud under the preferred viewing angle are analyzed to determine the key section layers, and the assembly state identification is completed using the coincidence of the key section layers.
Description
Technical Field
The invention belongs to the field of computer-aided assembly, and particularly relates to a method that identifies the preferred viewing angle of each adjacent sequence from the product digital model and then identifies the assembly state of the product under that preferred viewing angle.
Background
Manual assembly plays an important role in the assembly of complex products by virtue of its high flexibility and strong adaptability. With the rapid development of intelligent manufacturing, the difficulty of complex product assembly processes has gradually increased, raising both the assembly workload and the learning cost of assembly operations; this has become a bottleneck restricting the improvement of enterprise production efficiency. Analyzing the assembly situation from the real-time assembly scene, pushing assembly guidance information to operators in a timely manner, and adopting a human-machine-fusion assembly mode can address this problem. The product being assembled is the target object for part installation and carries the process information of product assembly. Analyzing the real-time assembly progress in combination with the product's assembly process information and providing assembly guidance reduces the learning cost of operators, prevents erroneous operations, and has positive engineering significance for improving the efficiency and quality of manual assembly. Several methods for assembly state recognition have emerged:
Patent document CN113111741A discloses an assembly state identification method based on three-dimensional feature points. The method extracts feature points according to the local structural features of the assembly, compares the feature points between the digital model and the product being assembled, realizes point cloud registration and similarity calculation, and thereby completes product assembly state identification. However, when this method identifies products in adjacent assembly stages, the high similarity of the models may cause misjudgment of the identification result.
Patent document CN114842221A discloses a label-free assembly state identification method based on product depth-image point clouds: a shape vector descriptor is constructed from the projected point cloud of the digital model; the depth-image point cloud of the product's assembly state in the actual assembly scene is acquired with a depth camera and a shape vector descriptor is likewise constructed; the similarity between the two descriptors is calculated with a cosine similarity measure, and the assembly state is identified from the similarity result. However, further research found that under certain viewing angles the shape vector descriptors of adjacent-sequence product point clouds are insufficiently discriminative, so the assembly state recognition result may be wrong; moreover, the method performs indiscriminate one-to-many traversal over all models and is therefore computationally expensive.
Disclosure of Invention
In order to solve the technical problems that existing product assembly state identification methods are prone to identification errors and misjudgment, because the assembly states of adjacent-sequence models of a product are highly similar and their differences are hard to observe under some viewing angles, and that existing methods are computationally expensive, the invention provides an assembly state identification method based on a preferred viewing angle, together with a storage medium and an electronic device.
The invention is characterized in that:
first, shape vector descriptors are constructed for the adjacent-sequence digital model point clouds under different viewing angles, cosine distance measures are calculated to quantify the discriminative power of each viewing angle, and preferred viewing angles are selected for the different adjacent sequences;
then, layered projection contours of the product point cloud and the digital model point cloud are constructed, and local-to-global point cloud registration based on the point-to-edge Hausdorff distance is carried out, so as to analyze the spatial mapping relation between the product point cloud and the digital model point cloud;
finally, the sampling point is guided to the preferred viewpoint through this mapping relation, and the assembly state is identified according to the coincidence of the local section contours of the key assembly parts under the preferred viewing angle.
The technical scheme of the invention is as follows:
an assembly state recognition method based on a preferred viewing angle, comprising the steps of:
step 1: establish m coaxial circumscribed regular polyhedra for the digital model of each assembly state of the assembly, where m ≥ 1;
step 2: for the digital model of each assembly state, take the vertices of the circumscribed regular polyhedra and the center point of each face as viewing angles, and obtain the local point cloud of the digital model at each viewing angle;
step 3: construct the adjacent sequences S_i = {P_i, P_{i+1}} of the assembly process, where P_i refers to the i-th assembly state of the assembly, P_{i+1} refers to the (i+1)-th assembly state, and n denotes the total number of parts of the assembly;
step 4: for each adjacent sequence, obtain the set of model difference degrees of the two assembly state digital models involved under the different viewing angles, and take the viewing angle corresponding to the maximum model difference degree as the preferred viewing angle of the assembly states involved, thereby obtaining the preferred viewing angle of each assembly state;
step 5: for each assembly state, construct the assembly identification template of the product being assembled from the preferred viewing angle together with the local digital model point cloud under that viewing angle;
step 6: take any free viewing angle as the initial viewing angle, acquire the point cloud of the product in its current assembly state under the initial viewing angle, and convert it into the world coordinate system;
step 7: construct the adaptive layered projection contour descriptor LPC_R of the point cloud of the product being assembled in the current assembly state and the layered projection contour descriptor LPC_Q of the digital model point cloud in the current assembly state;
step 8: locally register each contour layer of the product contour against the digital model contour to obtain the optimal registration matrix of each contour layer;
step 9: realize global registration based on contour coincidence: from the optimal registration matrices obtained in step 8, select as the globally optimal registration matrix the one that maximizes the overall matching value;
step 10: obtain from the assembly identification template of step 5 the preferred viewpoint coordinates corresponding to the current adjacent sequence, convert them from the digital model coordinate system to the world coordinate system through the globally optimal registration matrix of step 9, and resample the point cloud of the product being assembled with the converted coordinates as the resampling viewpoint, obtaining the product point cloud under the preferred viewing angle;
step 11: analyze the differences in section-layer contour coincidence between the product point cloud obtained in step 10 and the corresponding digital model point cloud queried from the assembly identification template of step 5 under the preferred viewing angle, so as to determine the key section layers, and finally complete the assembly state identification using the coincidence of the key section layers.
Further, step 1 specifically comprises the following steps:
1.1) establish a circumscribed regular polyhedron for the digital model of each assembly state;
1.2) rotate the circumscribed regular polyhedron established in step 1.1) for each assembly state m−1 times by an angle α about the same rotation axis, so that the digital model of each assembly state corresponds to m coaxial circumscribed regular polyhedra.
Further, the angle α ≤ 360°/m.
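As an illustration of steps 1.1) and 1.2), the following minimal Python sketch (not part of the patent disclosure; the function names are hypothetical) rotates a base set of polyhedron viewpoint coordinates about the shared Z axis to generate the m coaxial viewpoint sets, using the preferred value α = 360°/m:

```python
import math

def rotate_z(points, angle_deg):
    """Rotate a list of 3D points about the Z axis by angle_deg degrees."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x - s * y, s * x + c * y, z) for (x, y, z) in points]

def coaxial_viewpoints(base_vertices, m):
    """Viewpoint sets for m coaxial circumscribed polyhedra, each rotated
    by alpha = 360/m degrees about the shared Z axis (alpha <= 360/m per
    the patent; equality gives the most uniform viewing-angle coverage)."""
    alpha = 360.0 / m
    views = []
    for k in range(m):
        views.extend(rotate_z(base_vertices, k * alpha))
    return views
```

With m = 4 and a single base vertex, this yields four viewpoints spaced 90° apart around the Z axis.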
Further, for each adjacent sequence S_i = {P_i, P_{i+1}}, with i taking 1, 2, …, n−1 in turn, step 4 is specifically as follows:
step 4.1) for the local point cloud Q_i^(j) of the digital model point cloud Q_i under a certain viewing angle j, perform x_2 random samplings of point pairs and calculate the Euclidean distance between the two sampled points of each sampling, obtaining the set L = {l_1, l_2, …, l_{x_2}}, where l_1 is the Euclidean distance between the two sampling points of the 1st sampling, …, and l_{x_2} is the Euclidean distance between the two sampling points of the x_2-th sampling;
step 4.2) represent the distribution of the set L by an equal-interval histogram of x bins;
step 4.3) construct from the equal-interval histogram the shape distribution vector SV_i of the local point cloud Q_i^(j) of the digital model point cloud Q_i under viewing angle j;
step 4.4) for the local point cloud Q_{i+1}^(j) of the digital model point cloud Q_{i+1} under the same viewing angle j, repeat steps 4.1), 4.2) and 4.3) to construct its shape distribution vector SV_{i+1};
step 4.5) obtain the included angle θ between the shape distribution vectors SV_i and SV_{i+1} of the digital models of the adjacent sequence S_i under viewing angle j, and based on the angle θ calculate the difference degree Dif(SV_i, SV_{i+1}) between the two digital models involved in the adjacent sequence;
step 4.6) repeat steps 4.1)–4.5) under the remaining viewing angles to obtain the model difference degree of the adjacent sequence S_i at every viewing angle, and take the viewing angle corresponding to the maximum model difference degree as the preferred viewing angle of the two assembly states involved in the adjacent sequence.
Further, step 7 is specifically:
7.1) taking the positive Z-axis direction as the layering direction, adaptively layer the point cloud R_i of the product being assembled into λ segments and construct the layered projection point sets pc_k = {(x, y) | (x, y, z) ∈ Pc_k};
where:
Pc_k denotes the k-th layered segment after the point cloud is adaptively layered, containing the points whose z-coordinates lie between h*·(k−1)/λ and h*·k/λ;
h* denotes the maximum height of a point in the point cloud PC, i.e. h* = z_max;
h*·(k−1)/λ is the starting, i.e. minimum, z-coordinate of the k-th layered segment;
h*·k/λ is the terminating, i.e. maximum, z-coordinate of the k-th layered segment;
7.2) use a convex hull algorithm to express each projected two-dimensional point set pc_k as a contour c_k = Convex(pc_k);
where:
Convex(·) denotes the convex hull algorithm;
7.3) the adaptive layered projection contour descriptor of the point cloud of the product being assembled is then LPC_R = {c_1, c_2, …, c_λ}. The layered projection contour descriptor LPC_Q of the digital model point cloud is constructed in the same way.
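Steps 7.1)–7.3) can be sketched as follows in Python (an illustrative stand-in, not the patent's code; the patent does not name a specific convex hull algorithm, so Andrew's monotone chain is used here, and uniform layering stands in for the adaptive layering):

```python
def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def layered_contour_descriptor(cloud, layers):
    """LPC descriptor (step 7): slice the cloud into `layers` equal-height
    segments along +Z, project each slice to XY, take its convex hull."""
    h_star = max(z for (_, _, z) in cloud)           # h* = z_max
    desc = []
    for k in range(1, layers + 1):
        z0 = h_star * (k - 1) / layers               # minimum z of segment k
        z1 = h_star * k / layers                     # maximum z of segment k
        slice_xy = [(x, y) for (x, y, z) in cloud if z0 <= z <= z1]
        desc.append(convex_hull(slice_xy))
    return desc
```

The returned list plays the role of LPC_R (or LPC_Q), one contour per layer.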
Further, step 8 is specifically: a contour registration method based on the point-to-edge Hausdorff distance realizes the local registration of the product contour and the digital model contour, obtaining the optimal registration matrix of each contour layer.
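Step 8 relies on a point-to-edge Hausdorff distance. The directed variant of that distance, from a point set to the edges of a closed polygonal contour, can be sketched in Python as follows (an illustrative helper under assumed names, not the patent's registration code):

```python
import math

def point_to_segment(p, a, b):
    """Euclidean distance from 2D point p to the segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    if L2 == 0:                                   # degenerate segment
        return math.dist(p, a)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    return math.dist(p, (ax + t * dx, ay + t * dy))

def point_to_edge_hausdorff(points, polygon):
    """Directed Hausdorff distance from a point set to the edges of a
    closed polygon (vertex list): max over points of min edge distance."""
    edges = list(zip(polygon, polygon[1:] + polygon[:1]))
    return max(min(point_to_segment(p, a, b) for a, b in edges)
               for p in points)
```

A registration search would then seek the transform minimizing this distance between a product contour layer and the corresponding digital model contour layer.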
Further, step 9 is specifically:
transform each contour layer in the layered contour descriptor of the product point cloud and each contour layer in the layered contour descriptor of the digital model point cloud by the optimal registration matrix T_1* of the layer-1 contour, and calculate the mean contour coincidence after transformation; then transform them by the optimal registration matrix T_2* of the layer-2 contour and calculate the mean coincidence; …; finally transform them by the optimal registration matrix T_λ* of the layer-λ contour and calculate the mean coincidence. Select the optimal registration matrix of the layer whose mean coincidence is maximal as the globally optimal registration matrix.
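Step 9's selection of the globally optimal registration matrix can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not the patent's implementation: the patent's contour coincidence measure is not reproduced here, so a Jaccard index over occupied grid cells serves as a simple proxy, and the registration matrices are simplified to 2D rigid transforms (theta, tx, ty):

```python
import math

def apply_rigid(points, theta, tx, ty):
    """Apply a 2D rigid transform (rotation theta, translation (tx, ty))."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for (x, y) in points]

def jaccard_overlap(pts_a, pts_b, cell=0.5):
    """Simplified coincidence proxy: Jaccard index of occupied grid cells."""
    grid = lambda pts: {(round(x / cell), round(y / cell)) for (x, y) in pts}
    A, B = grid(pts_a), grid(pts_b)
    return len(A & B) / len(A | B) if A | B else 0.0

def select_global_registration(product_layers, model_layers, candidates):
    """Step 9: transform every layer with each candidate T_k, average the
    per-layer coincidence, and return the best-scoring candidate."""
    best, best_score = None, -1.0
    for T in candidates:                       # T = (theta, tx, ty)
        scores = [jaccard_overlap(apply_rigid(p, *T), q)
                  for p, q in zip(product_layers, model_layers)]
        score = sum(scores) / len(scores)
        if score > best_score:
            best, best_score = T, score
    return best, best_score
```

The candidate list corresponds to the per-layer optimal matrices T_1*, …, T_λ* from step 8.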
Further, step 11 is specifically:
11.1) under the preferred viewing angle, obtain the general section layers L⁻ and the key section layers L* of the adjacent-sequence digital model point clouds;
where:
n* denotes the number of digital model point clouds to be matched against the product being assembled;
c_r^m denotes the r-th layer contour of the LPC descriptor of the m-th digital model point cloud at the preferred viewing angle;
COVER* denotes the threshold range of section contour coincidence;
η denotes the threshold range factor, η ∈ (0, 1);
11.2) add up the non-coincident areas of every key section to obtain the total non-coincidence value of each state model;
where:
D_m denotes the total non-coincidence value over the key sections between the m-th digital model point cloud and the product point cloud;
s_r^m denotes the non-coincident area between the m-th digital model point cloud and the product point cloud at key section r, where r ∈ {1, 2, …, r*} and r* denotes the number of key sections in the set L*;
the coincident contour area of the r-th key section layer between the digital model point cloud and the product point cloud also enters this calculation;
11.3) take the assembly state whose digital model yields the minimum total non-coincidence value D_m as the assembly state of the product being assembled.
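Step 11's decision rule can be sketched in Python under loudly stated assumptions: the section-layer formulas are only partially legible in this text, so the key-section selection below is an interpretation (a layer is treated as key when the candidate models disagree on it by more than the factor η), and the per-section coincidence ratios are assumed to be precomputed from the contour overlaps:

```python
def identify_assembly_state(section_overlaps, eta=0.8):
    """section_overlaps[m][r] is the coincidence ratio of section layer r
    between the product point cloud and the m-th candidate digital model.
    Layers where the candidates disagree (min below eta * max) are treated
    as key sections; the model minimizing total non-coincidence over the
    key sections is returned (step 11.3)."""
    n_models = len(section_overlaps)
    n_layers = len(section_overlaps[0])
    key = []
    for r in range(n_layers):
        vals = [section_overlaps[m][r] for m in range(n_models)]
        if min(vals) < eta * max(vals):
            key.append(r)
    key = key or list(range(n_layers))        # fall back to all layers
    totals = [sum(1.0 - section_overlaps[m][r] for r in key)
              for m in range(n_models)]
    return totals.index(min(totals))           # index of best-matching state
```

With two candidate states that agree everywhere except one layer, the decision is driven entirely by that key layer, which is the intent of restricting the comparison to key sections.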
The present invention also provides a storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, performs the method described above.
The invention also provides an electronic device comprising a processor and a storage medium, the storage medium having a computer program stored thereon, characterized in that the computer program, when executed by the processor, performs the method described above.
The invention has the beneficial effects that:
1. First, a circumscribed regular polyhedron is established for the digital model of each assembly state of the product, and point cloud data of the digital model under different viewing angles are acquired through the vertices and face centers of the polyhedron; shape vector descriptors are constructed for the adjacent-sequence digital model point clouds under the different viewing angles, cosine distance measures are calculated to quantify the discriminative power of each viewing angle, and preferred viewing angles are selected for the different adjacent sequences. Then a layered projection contour descriptor is constructed for both the point cloud of the product being assembled and the digital model point cloud, and local-to-global registration of the product contour and the digital model contour is realized by a contour registration method based on the point-to-edge Hausdorff distance. Then, according to the registration relation, the sampling viewpoint is converted to the preferred viewpoint and the product is resampled; the differences in section-layer contour coincidence between the product point cloud and the digital model point cloud under the preferred viewing angle are analyzed to determine the key section layers, and finally the assembly state identification is completed using the coincidence of the key section layers. Because different viewing angles differ in their ability to distinguish adjacent-sequence models, and the adjacent-sequence models show a higher degree of difference under the preferred viewing angles, the problems of recognition errors and misjudgment caused by the high similarity of adjacent-sequence models can be avoided, and the accuracy of product assembly state identification is improved.
2. The method can be used to construct an offline database in advance from the product digital model, containing the virtual multi-view digital model point clouds, shape vector descriptors, preferred viewing angles and related information. In subsequent assembly identification the operator can be guided to sample the product at the preferred viewing angle, so matching analysis no longer has to be performed indiscriminately against all virtual view point clouds; the number of analyses is greatly reduced and the speed of product assembly state identification is improved.
Drawings
Fig. 1 is a schematic diagram of the assembly states of the 4 groups of samples selected in an embodiment of the present invention.
FIG. 2 is a schematic diagram of the circumscribed regular icosahedron of the digital model Product1 in assembly state P_5 in an embodiment of the invention.
FIG. 3 is a schematic view of the viewing angles of the digital model of FIG. 2 after rotation of the circumscribed regular icosahedron.
Fig. 4 shows partial point cloud models of the digital model Product1 under some viewing angles in an embodiment of the present invention.
Fig. 5 is a schematic diagram of various coordinate systems involved in an assembly scenario in the present invention.
FIG. 6 is a schematic diagram of the adjacent-sequence models of the digital model Product1 in an embodiment of the invention.
FIG. 7 is a flow chart of the method for calculating the difference degree between two adjacent-sequence models in the present invention.
Fig. 8 shows the shape distribution histograms corresponding to some viewing angles of the 5 assembly states of the digital model Product1 in the embodiment of the present invention.
FIG. 9 shows the difference degree values of the adjacent-sequence models of the digital model Product1 under some viewing angles in the embodiment of the invention.
FIG. 10 is a schematic view of the point cloud of the product being assembled for sample Product1 in an embodiment of the present invention.
FIG. 11 is a schematic diagram of a process for constructing a hierarchical projection profile descriptor in accordance with the present invention.
FIG. 12 is a schematic diagram of the process for calculating the layered projection contour optimal registration matrix of the point cloud of the product being assembled and the digital model point cloud in accordance with the present invention.
Fig. 13 is a flow chart of a method for global registration of point clouds based on contour coincidence in the present invention.
FIG. 14 is a schematic diagram of the optimal registration result of the product point cloud and the digital model point cloud for sample Product1 in state P_5 in an embodiment of the present invention.
Fig. 15 is a schematic diagram of the preferred sampling viewpoint selection based on point cloud registration in the present invention.
FIG. 16 is a schematic diagram of the analysis of preferred view angle discrimination capability and initial view angle discrimination capability in the present invention.
Fig. 17 compares the recognition rates of the assembly states of each selected sample at the preferred viewing angle and at the initial viewing angle in the embodiment of the present invention.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawings.
The invention provides an assembly state identification method based on a preferable view angle, which specifically comprises the following steps:
step 1: establish m (m ≥ 1) coaxial circumscribed regular polyhedra for the digital model of each assembly state of the assembly:
step 1.1, record the assembly as Product = {P_1, P_2, …, P_i, …, P_n}, where P_i denotes the i-th assembly state of the assembly and n is the total number of parts of the assembly; there are n assembly states in total, each installation of one part yielding a new assembly state. Establish one circumscribed regular polyhedron for the digital model of each assembly state, such that the vertices of the outer contour of the digital model lie on or close to the surface of the regular polyhedron. The more complex the digital model, the more faces the regular polyhedron should have, so as to ensure recognition accuracy;
step 1.2, rotate the circumscribed regular polyhedron of the digital model of each assembly state established in step 1.1 m−1 times by an angle α about the same rotation axis (the assembly plane is generally horizontal, and rotating about the Z axis of the world coordinate system makes virtual-real point cloud registration more convenient), so that the digital model of each assembly state corresponds to m coaxial circumscribed regular polyhedra. Preferably α = 360°/m, which makes the obtained viewing angles uniform and the projected point cloud from each viewing angle sufficiently distinct; if α is larger than this angle, repeated viewing angles may occur, and if it is smaller, the obtained viewing angles are not uniform.
Step 2: for the digital model of each assembly state, take the vertices of the circumscribed regular polyhedra and the center point of each face as viewing angles, and obtain the local point cloud Q_i at each viewing angle:
for the digital model of each assembly state, sample the digital model point cloud from each vertex and each face center of the corresponding m coaxial circumscribed regular polyhedra (s viewing angles in total), obtaining the local point clouds of the digital model point cloud of each assembly state under the different viewing angles: Q_i = (Q_i^(1), Q_i^(2), …, Q_i^(j), …, Q_i^(s)), with i taking 1, 2, …, n in turn; s is the total number of viewing angles; Q_i denotes the digital model point cloud of the i-th assembly state; Q_i^(j) denotes the local point cloud of the digital model point cloud Q_i of the i-th assembly state obtained under the j-th viewing angle. The local point cloud sampled at each viewing angle lies in the sampling coordinate system (the coordinate system whose origin is the sampling point and whose Z axis points from the sampling point to the center of the digital model); for subsequent registration with the point cloud of the product being assembled, the sampled point clouds must be converted into the digital model coordinate system (the coordinate system whose origin is the center of the digital model and whose Z axis is vertical) by translation and rotation.
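The conversion from the sampling coordinate system to the digital model coordinate system described above can be sketched as follows (a hypothetical minimal implementation; the patent does not prescribe this particular construction of the rotation, and the up-vector choice is an assumption):

```python
import math

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _norm(a):
    n = math.sqrt(sum(c * c for c in a))
    return tuple(c / n for c in a)

def camera_to_model(points_cam, viewpoint):
    """Convert a local point cloud from the sampling coordinate system
    (origin at the viewpoint, Z axis toward the model center) into the
    digital model coordinate system centered at the origin."""
    z = _norm(tuple(-c for c in viewpoint))          # viewing direction
    up = (0.0, 0.0, 1.0) if abs(z[2]) < 0.99 else (1.0, 0.0, 0.0)
    x = _norm(_cross(up, z))
    y = _cross(z, x)
    axes = (x, y, z)                                 # columns of R
    out = []
    for p in points_cam:
        # model = R * p + viewpoint
        mx = sum(axes[j][0] * p[j] for j in range(3)) + viewpoint[0]
        my = sum(axes[j][1] * p[j] for j in range(3)) + viewpoint[1]
        mz = sum(axes[j][2] * p[j] for j in range(3)) + viewpoint[2]
        out.append((mx, my, mz))
    return out
```

For example, the camera origin maps to the viewpoint itself, and a point lying on the camera's Z axis at the viewing distance maps to the model center.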
Step 3: construct the adjacent sequences S_i = {P_i, P_{i+1}}, where P_i is the i-th assembly state of the assembly, corresponding to the i-th assembly stage, and P_{i+1} is the (i+1)-th assembly state, corresponding to the (i+1)-th assembly stage. An adjacent sequence S_i is the set of the digital models of two adjacent assembly stages in the assembly process; one assembly has (n−1) groups of adjacent sequences, n being the total number of parts of the assembly.
Step 4: for each adjacent order sequence, obtaining a model difference degree set of two related assembly state digital models under different view angles, and taking a view angle corresponding to the maximum value of the model difference degree as a preferred view angle of the related assembly state to obtain a preferred view angle of each assembly state;
for each adjacent order sequence, the preferred viewing angles of the two assembly states involved are determined in particular according to the following steps:
step 4.1, for the local point cloud Q_i^(j) of the digital model point cloud Q_i under a certain viewing angle j, perform random sampling and calculate the Euclidean distances between the sampling points:
obtain the digital model point cloud Q_i of the adjacent sequence S_i and read its local point cloud Q_i^(j) under viewing angle j; perform x_2 random samplings of point pairs in the local point cloud Q_i^(j) (the specific number of samplings is chosen according to the complexity of the model; the more complex the model, the more samplings, generally 1024² is taken), and calculate the Euclidean distance between the two sampling points of each sampling, obtaining the set L = {l_1, l_2, …, l_{x_2}},
where l_1 is the Euclidean distance between the two sampling points of the 1st sampling, …, and l_{x_2} is the Euclidean distance between the two sampling points of the x_2-th sampling;
step 4.2, generate the distance statistics histogram:
the distribution of the set L is represented by an equal-interval histogram of x bins, with the bin width d given by:
d = (l_max − l_min) / x
where:
l_max denotes the maximum Euclidean distance between sampling points;
l_min denotes the minimum Euclidean distance between sampling points;
the height of each bin of the histogram represents the frequency with which the sampling distance falls into that bin, the bin height being:
h_i = n_i / x_2
where:
h_i denotes the height of the i-th bin of the histogram, i.e. the frequency of that distance bin;
n_i denotes the statistical count of sampling distances falling into the i-th bin;
step 4.3, construct the shape distribution vector SV_i of the local point cloud Q_i^(j) of the digital model point cloud Q_i under viewing angle j:
from the equal-interval histogram of step 4.2, construct the x-dimensional shape distribution vector SV_i = (sv_1, sv_2, …, sv_x), the value of each dimension being the corresponding bin height of the histogram:
sv_i = h_i
where:
sv_i denotes the i-th component of the shape distribution vector SV_i;
step 4.4, for the local point cloud Q_{i+1}^(j) of the digital model point cloud of the (i+1)-th assembly state under the same viewing angle j, repeat steps 4.1, 4.2 and 4.3 to construct an x-dimensional shape distribution vector, namely the shape distribution vector SV_{i+1} of that digital model point cloud under viewing angle j;
Step 4.5, obtain the included angle θ between the shape distribution vectors SV_i and SV_{i+1} of the adjacent order sequence digital models under view angle j, and based on this angle calculate the difference degree Dif(SV_i, SV_{i+1}) between the two digital models involved in the adjacent order sequence:
For the digital model local point clouds Q_i^(j) and Q_{i+1}^(j) of the adjacent order sequence at view angle j, let the included angle between their shape distribution vectors SV_i and SV_{i+1} be θ. The similarity Sim(SV_i, SV_{i+1}) of the two point clouds can be calculated from the cosine of the angle between the two shape distribution vectors according to the following formula:

Sim(SV_i, SV_{i+1}) = cos θ = (SV_i · SV_{i+1}) / (‖SV_i‖ ‖SV_{i+1}‖)

Since the recognition capability of a view angle lies in distinguishing the models, while Sim(SV_i, SV_{i+1}) reflects only the similarity between the shape distribution vectors SV_i and SV_{i+1}, the model difference degree Dif(SV_i, SV_{i+1}) is defined to directly measure the recognition capability of the view angle, expressed as follows:

Dif(SV_i, SV_{i+1}) = arccos[Sim(SV_i, SV_{i+1})]
The larger Dif(SV_i, SV_{i+1}) is, the greater the model difference of the adjacent order sequence under view angle j and the better the recognition capability of that view angle; conversely, if the model difference under view angle j is too small, the recognition capability of that view angle is poor;
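The similarity and difference measures above can be sketched directly (a minimal illustration; function names are assumptions):

```python
import math

def sim(sv_a, sv_b):
    """Cosine similarity Sim of two shape distribution vectors."""
    dot = sum(a * b for a, b in zip(sv_a, sv_b))
    norm_a = math.sqrt(sum(a * a for a in sv_a))
    norm_b = math.sqrt(sum(b * b for b in sv_b))
    return dot / (norm_a * norm_b)

def dif(sv_a, sv_b):
    """Model difference degree Dif = arccos(Sim): the angle between the
    two vectors; a larger value means the view separates the two
    assembly states better."""
    return math.acos(max(-1.0, min(1.0, sim(sv_a, sv_b))))
```

Clamping the cosine to [−1, 1] guards against floating-point overshoot before arccos.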
step 4.6, for the adjacent order sequence digital models under the other view angles, repeat steps 4.1–4.5 to obtain the model difference degree Dif of the adjacent order sequence digital models under each view angle, and sort the values by magnitude to obtain the digital model difference degree (i.e. recognition capability) set DIF(i, i+1) of the adjacent order sequence:
DIF(i,i+1)={Dif min (SV i ,SV i+1 ),…,Dif max (SV i ,SV i+1 )}
wherein:
Dif_min(SV_i, SV_{i+1}) and Dif_max(SV_i, SV_{i+1}) respectively represent the minimum and maximum view angle recognition capability under the adjacent order sequence. The view angle corresponding to the maximum difference degree Dif_max(SV_i, SV_{i+1}) is the preferred view angle of the adjacent order sequence.
Step 5: building an assembly identification template IT of an assembled product:
For each group of adjacent order sequences, the assembly identification template IT of the product is constructed from the preferred view angle and the local point clouds of the digital models under that preferred view angle.
Wherein:
p_v represents the preferred view point coordinates of the adjacent order sequence in the digital model coordinate system;
Q_i^(j) is the local point cloud of the digital model point cloud Q_i of the i-th assembly state under view angle j;
Q_{i+1}^(j) is the local point cloud of the digital model point cloud Q_{i+1} of the (i+1)-th assembly state under view angle j.
Step 6: acquire the point cloud R_i of the product in the current assembly state under the initial view angle:
A depth camera shoots the assembled product in the real assembly scene at an initial view angle (the initial view angle is a free view angle, i.e. a randomly selected view angle), and the initial sampling view point coordinate p_0 is recorded. The acquired point cloud is denoised and then converted from the camera coordinate system to the world coordinate system through translation and rotation transformations, giving the point cloud R_i of the product in the current assembly state.
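The camera-to-world conversion in step 6 is a single rigid transform p_w = R·p_c + t. A minimal sketch (the row-major rotation layout and names are assumptions):

```python
def camera_to_world(points, rotation, translation):
    """Map camera-frame points into the world frame: p_w = R * p_c + t.
    `rotation` is a 3x3 row-major matrix, `translation` a 3-tuple."""
    out = []
    for x, y, z in points:
        out.append(tuple(
            rotation[r][0] * x + rotation[r][1] * y + rotation[r][2] * z
            + translation[r]
            for r in range(3)))
    return out
```

In practice R and t come from the extrinsic calibration of the depth camera against the assembly bench.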
Step 7: construct the adaptive layered projection contour (Layered Projection Contour, LPC) descriptor of the point cloud R_i of the product in the current assembly state and the LPC descriptor of the digital model point cloud Q_i:
Because the digital model coordinate system and the world coordinate system both take the assembly horizontal plane as the XOY plane, the point cloud R_i of the product in the current assembly state and the digital model point cloud Q_i of the current assembly state are aligned in the Z-axis direction but not in the other two coordinate directions. The LPC descriptor is therefore constructed to align R_i and Q_i in the X-axis and Y-axis directions by means of contour registration. The specific method for constructing the LPC descriptor is as follows:
step 7.1, taking the positive Z-axis direction as the layering direction, construct the layered projection contour descriptor (layered projection point sets) of the product point cloud R_i: first measure the visible height h_k of each part, collect all the visible heights and select the minimum value h_min; when the slice height is lower than h_min, any incremental information can be captured, so the slice height is set below h_min (i.e. the number of layers λ satisfies h*/λ < h_min). The k-th layered point cloud is expressed as follows:

Pc_k = {(x, y, z) ∈ R_i | h* × (k−1)/λ ≤ z < h* × k/λ}
wherein:
Pc_k represents the k-th layered point cloud after adaptive layering of the point cloud;
h* represents the maximum height of the points in the point cloud R_i, h* = z_max;
h* × (k−1)/λ represents the starting (minimum) ordinate of the k-th layered point cloud;
h* × k/λ represents the terminating (maximum) ordinate of the k-th layered point cloud;
λ is the total number of layers.
The k-th layered point cloud Pc_k is projected onto the XOY plane along the negative Z-axis direction, giving the layered projection point set pc_k = {(x, y) | (x, y, z) ∈ Pc_k}.
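The layering and projection of step 7.1 can be sketched as follows (a minimal illustration; the function name is an assumption, and the adaptive choice of λ from h_min is left to the caller):

```python
def layered_projection(points, lam):
    """Split a point cloud into lam z-slices Pc_k (k = 1..lam) and project
    each slice onto the XOY plane along -Z, giving the 2-D sets pc_k."""
    h_star = max(z for _, _, z in points)       # h* = z_max
    slices = [[] for _ in range(lam)]
    for x, y, z in points:
        # k-th slice covers h* * (k-1)/lam <= z < h* * k/lam
        k = min(int(z * lam / h_star), lam - 1)
        slices[k].append((x, y))                # drop z: projection onto XOY
    return slices
```

The top boundary point z = h* is clamped into the last slice rather than opening a λ+1-th layer.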
Step 7.2, the two-dimensional point set pc_k obtained in step 7.1 contains a large number of points; considering the influence of noise, point cloud density and other factors, a convex hull algorithm is adopted to express the two-dimensional point set pc_k obtained after layered projection as a contour c_k, as shown in the following formula:

c_k = Convex(pc_k)
wherein:
convex () represents the Convex hull solving algorithm.
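A standard choice for Convex() is Andrew's monotone-chain algorithm; a self-contained sketch (an assumption — the patent does not name a specific convex hull algorithm):

```python
def convex_hull(pts):
    """Monotone-chain convex hull of a 2-D point set, counter-clockwise."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    hull = []
    for seq in (pts, list(reversed(pts))):      # lower hull, then upper hull
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        hull.extend(chain[:-1])                 # drop duplicated endpoints
    return hull
```

Interior and collinear points are discarded, which is what makes the contour robust to point cloud density.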
Step 7.3, finally, the layered projection contours c_k (k = 1, 2, …, λ) form, in sequence, the layered projection contour descriptor (LPC descriptor) of the product point cloud R_i:
step 7.4, deduce the current theoretical assembly state of the product from the historical assembly state identification results, and apply steps 7.1–7.3 to the digital model Q_i of the theoretical assembly state to obtain the layered projection contour descriptor of the assembly state digital model.
Step 8: a contour registration method based on the point-to-edge Hausdorff distance realizes local registration of the product contour and the digital model contour, yielding the optimal registration matrix of each layer's contour:
step 8.1, take any layer's layered projection contour of the product point cloud R_i as the contour to be registered and the corresponding layered projection contour of the digital model Q_i as the target contour. Arbitrarily select two vertices u and v (one from each contour) as the registration basis and calculate the translation matrix between vertices u and v; then extract the counterclockwise neighboring vertices u_0 and v_0 of u and v, and calculate the rotation matrix of the registration edges (u, u_0) and (v, v_0). Finally, obtain the registration matrix T_{u·v} referenced to vertices u and v from the translation and rotation matrices; through T_{u·v}, the contour to be registered can be registered to the target contour.
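The vertex-pair registration of step 8.1 reduces, in 2-D, to one rotation angle plus one translation: the rotation aligns edge u→u_0 with edge v→v_0, and the translation then carries u onto v. A sketch under these assumptions (names illustrative):

```python
import math

def registration_transform(u, u0, v, v0):
    """Rigid 2-D transform (rotation angle, translation) that maps the
    registration vertex u onto v and aligns edge u->u0 with edge v->v0."""
    angle = (math.atan2(v0[1] - v[1], v0[0] - v[0])
             - math.atan2(u0[1] - u[1], u0[0] - u[0]))
    c, s = math.cos(angle), math.sin(angle)
    # translation t = v - R(angle) * u, so that R*u + t = v
    t = (v[0] - (c * u[0] - s * u[1]), v[1] - (s * u[0] + c * u[1]))
    return angle, t

def apply_rigid(p, angle, t):
    """Apply the transform returned by registration_transform to a point."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * p[0] - s * p[1] + t[0], s * p[0] + c * p[1] + t[1])
```

Applying the transform to every vertex of the contour to be registered gives the registered contour used in step 8.2.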
Step 8.2, calculating the registered contourIs +.>Hausdorff distance between points to edgesThe following is shown:
wherein:
v_i represents a vertex of the target contour and v_{i+1} its adjacent vertex; the two vertices form the edge e;
u' represents the projection of vertex u onto the line on which edge e lies;
|u, e| denotes the distance from vertex u to edge e, which has three cases:
(1) when the projection u' of vertex u on edge e falls within the edge, |u, e| is the perpendicular distance from vertex u to edge e, i.e. |u, e| = |u, u'|;
(2) when the projection u' of vertex u on edge e falls outside the edge and u is closer to v_i, i.e. |u, v_i| < |u, v_{i+1}|, then |u, e| equals |u, v_i|;
(3) when the projection u' of vertex u on edge e falls outside the edge and u is closer to v_{i+1}, i.e. |u, v_{i+1}| < |u, v_i|, then |u, e| equals |u, v_{i+1}|;
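The three cases above can be sketched as one function (the function name is an assumption):

```python
import math

def point_to_edge(u, v1, v2):
    """|u, e|: distance from vertex u to edge e = (v1, v2), covering the
    three cases of the projection u' falling inside or outside the edge."""
    ex, ey = v2[0] - v1[0], v2[1] - v1[1]
    wx, wy = u[0] - v1[0], u[1] - v1[1]
    denom = ex * ex + ey * ey
    t = (wx * ex + wy * ey) / denom if denom else 0.0
    if 0.0 <= t <= 1.0:                        # case (1): u' inside the edge
        return math.hypot(wx - t * ex, wy - t * ey)
    # cases (2)/(3): u' outside the edge -> distance to the nearer endpoint
    return min(math.hypot(wx, wy),
               math.hypot(u[0] - v2[0], u[1] - v2[1]))
```

The parameter t locates u' along the edge; t outside [0, 1] is exactly the "projection not within the edge" condition.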
Step 8.3, traverse the contour to be registered and the target contour, selecting one vertex from each as a pair of registration reference points each time, and repeat steps 8.1 and 8.2 to obtain the point-to-edge Hausdorff distance VHD of each registration. The minimum VHD represents the best match of the two contours, and its registration matrix is the optimal registration matrix of that layer's contour.
Step 8.4, traverse the layered projection contour descriptor of the product point cloud R_i in the current assembly state and the layered contour descriptor of the digital model point cloud Q_i, repeating steps 8.1–8.3 to obtain the optimal registration matrix T_k* of each layer's contour.
Step 9: realizing overall registration based on the contour coincidence degree, and selecting an overall optimal registration matrix from the optimal registration matrices obtained in the step 8 based on the overall matching value:
Step 9.1, for each pair of corresponding contours of the layered projection contour descriptors of the product point cloud R_i in the current assembly state and of the digital model point cloud Q_i, calculate the coincidence degree of the two contours after transformation by the optimal registration matrix T_k* corresponding to that layer's contour.
Wherein:
the coincidence degree represents the overlap of the two contour areas after registration; the larger the value, the better the matching degree of the two contours;
Step 9.2, transform each layer's contour of the layered projection contour descriptor of the product point cloud R_i in the current assembly state through the optimal registration matrix T_k* of the k-th layer to obtain the registered contours. The average coincidence degree between all registered contours and the corresponding contours of the digital model descriptor is the overall matching value of the two point clouds under the optimal registration matrix T_k* of the k-th layer.
Step 9.3, traverse all registration matrices T_k*, calculate the overall matching value of the two point clouds under each registration matrix by the method of step 9.2, and sort the values to obtain the maximum overall matching value.
The registration matrix corresponding to the maximum overall matching value is the overall optimal registration matrix T_match.
That is: transform each layer's contour of the product point cloud layered contour descriptor and each layer's contour of the digital model point cloud layered descriptor through the optimal registration matrix T_1* of the layer-1 contour and calculate the average coincidence degree of the transformed contour pairs; do the same with the optimal registration matrix T_2* of the layer-2 contour; …; and finally with the optimal registration matrix T_λ* of the λ-th layer. The optimal registration matrix of the layer corresponding to the maximum average coincidence degree is selected as the overall optimal registration matrix.
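The coincidence formula itself sits in a figure; one plausible reading is intersection-over-union of the two contour areas. A pure-Python sketch for convex contours under that assumption (Sutherland–Hodgman clipping plus the shoelace formula; all names illustrative):

```python
def poly_area(poly):
    """Shoelace area of a polygon given as an ordered vertex list."""
    if len(poly) < 3:
        return 0.0
    n = len(poly)
    s = sum(poly[i][0] * poly[(i + 1) % n][1]
            - poly[(i + 1) % n][0] * poly[i][1] for i in range(n))
    return abs(s) / 2.0

def clip_convex(subject, clipper):
    """Sutherland-Hodgman: clip `subject` against a CCW convex `clipper`."""
    out = list(subject)
    n = len(clipper)
    for i in range(n):
        a, b = clipper[i], clipper[(i + 1) % n]
        inp, out = out, []
        for j in range(len(inp)):
            p, q = inp[j], inp[(j + 1) % len(inp)]
            pin = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0
            qin = (b[0] - a[0]) * (q[1] - a[1]) - (b[1] - a[1]) * (q[0] - a[0]) >= 0
            if qin != pin:                      # edge p-q crosses the clip line
                x1, y1, x2, y2 = a[0], a[1], b[0], b[1]
                x3, y3, x4, y4 = p[0], p[1], q[0], q[1]
                den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
                d1, d2 = x1 * y2 - y1 * x2, x3 * y4 - y3 * x4
                out.append(((d1 * (x3 - x4) - (x1 - x2) * d2) / den,
                            (d1 * (y3 - y4) - (y1 - y2) * d2) / den))
            if qin:
                out.append(q)
        if not out:
            break
    return out

def coincidence(c1, c2):
    """Overlap degree of two convex contours as intersection-over-union."""
    inter = poly_area(clip_convex(c1, c2))
    union = poly_area(c1) + poly_area(c2) - inter
    return inter / union if union else 0.0
```

Averaging `coincidence` over the λ contour pairs under a fixed T_k* gives that matrix's overall matching value.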
Step 10: transforming the sampled view angles to the preferred view angle according to the registration relationship and resampling:
step 10.1, the current assembly state and the previous assembly state form an adjacent order sequence. From the assembly identification template, obtain the preferred view point coordinate p_v corresponding to this adjacent order sequence and, using the overall optimal registration matrix T_match obtained in step 9, convert p_v from the digital model coordinate system to the world coordinate system; the resulting coordinate p_r serves as the resampling view point.
Step 10.2, according to the initial sampling view point p_0 and the resampling view point p_r, establish the view angle transformation relation, which guides the depth camera to move from the initial sampling position to the preferred sampling position and resample, thereby acquiring the point cloud of the product under the preferred view angle.
Step 11: analyze the difference in slice contour coincidence between the point cloud of the product to be assembled and the digital model point cloud under the preferred view angle to determine the key slices, and finally complete assembly state identification using the coincidence degree of the key slices:
step 11.1, under the preferred view angle, the point clouds of the adjacent order sequence digital models are highly similar at the positions of parts that are not newly installed; the point cloud difference is mainly reflected at the newly installed part, and the adaptive layering means that the newly installed part appears in only a few slice contours. The slice contour coincidence threshold range COVER* is therefore used to divide the slice elements of the LPC descriptors into two classes: ordinary slices L⁻ and key slices L*, as follows:
wherein:
n* represents the number of digital model point clouds to be matched against the product;
the r-th layer contour of the LPC descriptor of the m-th digital model point cloud is taken under the preferred view angle;
COVER* represents the threshold range of the slice contour coincidence degree;
η represents the threshold range factor, generally taken in (0, 1) as appropriate.
Step 11.2, for the key slices L*, sum the non-overlapping areas of each key slice to obtain the total non-coincidence value of each state model. When the coincidence between the product and the target state model is higher, i.e. the non-overlapping area is smaller, the total non-coincidence value corresponding to the target state model is the smallest:
wherein:
represents the total non-coincidence value of the key slices between the m-th digital model point cloud and the product point cloud;
represents the non-overlapping area of the m-th digital model point cloud and the product point cloud at key slice r, where r ∈ {1, 2, …, r*} and r* represents the number of key slices in the set {L*};
represents the coincident contour area of the r-th layer key slice between the digital model point cloud and the product point cloud.
When the total non-coincidence value reaches its minimum, the assembly state of the corresponding digital model is the assembly state of the product, completing identification of the product's assembly state under the preferred view angle.
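The decision in step 11.2 reduces to an argmin over the summed key-slice non-overlap areas. A trivial sketch (the data layout is hypothetical):

```python
def identify_assembly_state(non_overlap):
    """Return the candidate state whose total non-coincidence over the key
    slices is minimal; `non_overlap[m]` lists the per-key-slice non-overlap
    areas of candidate digital model m against the product point cloud."""
    totals = {m: sum(areas) for m, areas in non_overlap.items()}
    return min(totals, key=totals.get)
```

The per-slice non-overlap areas themselves come from the contour coincidence computation of step 9.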
Examples:
An assembly state identification method based on a preferred view angle according to the present invention is described below, taking Product1 among the 4 groups of samples shown in fig. 1 as an example; Product1 is assembled in 5 stages. The specific steps are as follows:
Step 1: establish 3 coaxial circumscribed regular icosahedrons for each assembly state of the assembly body:
step 1.1, establish a circumscribed regular icosahedron for the digital model of each assembly state of the assembly body, so that the contour points of the digital model lie on or close to the surface of the regular icosahedron. Taking the 5th assembly state P_5 of assembly body Product1 as an example, the circumscribed regular icosahedron of its digital model is shown in figure 2.
Step 1.2, rotate the circumscribed regular icosahedron established in step 1.1 around the Z axis of the world coordinate system 2 times, so that the digital model of each assembly state corresponds to 3 coaxial circumscribed regular icosahedrons. A regular icosahedron has 20 faces and 12 vertices; because the two vertices on the Z axis coincide, each rotation yields 20 new faces and 10 new vertices. Taking the vertices of the regular icosahedrons and the center point of each face as view angles, the schematic diagram of the view angles after rotation is shown in figure 3.
Step 2: acquire the local point clouds Q_i of the digital model of each assembly state of the assembly body at multiple view angles:
For the digital model of each assembly state, the digital model point cloud is sampled with each vertex and each face center of the corresponding 3 coaxial circumscribed regular icosahedrons as view angles (92 view angles in total), giving the local point clouds of the digital model of each assembly state under the 92 view angles: Q_i = (Q_i^(1), Q_i^(2), …, Q_i^(j), …, Q_i^(92)), i = 1, 2, …, 5, where Q_i denotes the digital model point cloud of the i-th assembly state and Q_i^(j) denotes the local point cloud of the i-th assembly state digital model acquired at the j-th view angle. A partial point cloud model of the digital model of the 5th assembly state P_5 of Product1 at some view angles is shown in fig. 4.
The sampled point clouds lie in the sampling coordinate system (a coordinate system with the sampling point as origin and the Z axis pointing from the sampling point to the digital model center). For subsequent registration with the product point cloud, they are converted into the digital model coordinate system (a coordinate system with the digital model center as origin and the Z axis vertical) through translation and rotation transformations. Schematic diagrams of the coordinate systems in the assembly scene are shown in fig. 5.
Step 3: taking the assembly body Product1 of FIG. 1 as an example, construct the adjacent order sequences, where P_i is the i-th assembly state of assembly body Product1, corresponding to the i-th assembly stage, and P_{i+1} is the (i+1)-th assembly state, corresponding to the (i+1)-th assembly stage. An adjacent order sequence is a set of the digital models of two adjacent assembly stages in the assembly process of the assembly body Product1 in fig. 1; there are 4 adjacent order sequences in total, as shown in fig. 6.
Step 4: calculate the model difference degree sets of the two adjacent assembly state digital models of each adjacent order sequence (i = 1, 2, 3, 4) under different view angles, so as to determine the preferred view angle of each adjacent order sequence:
step 4.1, select a group of adjacent order sequences of assembly body Product1 whose digital model point clouds are Q_4 and Q_5, and calculate the model difference of Q_4 and Q_5 at the same view angle j (see fig. 7);
4.1.1, randomly sample the local point cloud of the digital model point cloud Q_4 at view angle j and calculate the Euclidean distances between sampling points:
Obtain the digital model point cloud Q_4 of the adjacent order sequence and read its local point cloud Q_4^(j) under view angle j. Perform 1024² random samplings on the local point cloud; in each sampling, select two points of the local point cloud and calculate the Euclidean distance between them, obtaining the set L = {l_1, …, l_{1024²}}, where l_1 is the Euclidean distance between the two sampling points obtained in the 1st sampling, …, and l_{1024²} is the Euclidean distance between the two sampling points obtained in the 1024²-th sampling;
4.1.2, generating a distance statistical histogram:
the distribution of the set L is represented by a 1024-dimensional equidistant histogram, and the group distance d is calculated according to the following formula:

d = (l_max − l_min) / 1024
wherein:
l_max represents the maximum Euclidean distance between sampling points;
l_min represents the minimum Euclidean distance between sampling points.
The height of each bin in the histogram represents the frequency with which the sampling distance falls in that bin; the bin height h_i is calculated according to the following formula:

h_i = n_i / 1024²
wherein:
h_i represents the height of the i-th bin in the histogram, i.e. the frequency with which the sampling distance falls in that bin's interval;
n_i represents the statistical count of sampling distances in the i-th interval;
The shape distribution histograms of the 5 assembly states of Product1 at some view angles are shown in fig. 8.
4.1.3, construct the shape distribution vector SV_4 of the local point cloud of the digital model point cloud Q_4 under view angle j:
Construct the 1024-dimensional shape distribution vector SV_4 from the equidistant histogram of step 4.1.2; the value of each dimension of SV_4 is the bin height h_i of the corresponding histogram, as shown in the following formula:
wherein:
sv_i represents the i-th component of the shape distribution vector SV_4.
4.1.4, construct the shape distribution vector SV_5 of the local point cloud of the digital model point cloud Q_5 under view angle j:
For the local point cloud of the adjacent order sequence digital model point cloud Q_5 under the same view angle j, repeat steps 4.1.1–4.1.3 to construct the 1024-dimensional shape distribution vector SV_5;
4.1.5, obtain the included angle θ between the shape distribution vectors SV_4 and SV_5 of the adjacent order sequence digital models under view angle j, and based on it calculate the difference degree Dif(SV_4, SV_5):
For the digital model local point clouds Q_4^(j) and Q_5^(j) of the adjacent order sequence at view angle j, let the included angle between their shape distribution vectors SV_4 and SV_5 be θ. The similarity Sim(SV_4, SV_5) of the two point clouds can be calculated from the cosine of the angle between the two shape distribution vectors according to the following formula:

Sim(SV_4, SV_5) = cos θ = (SV_4 · SV_5) / (‖SV_4‖ ‖SV_5‖)
Since the view angle recognition capability lies in distinguishing the models, while Sim represents only the similarity between the shape distribution vectors, the model difference degree Dif is defined to directly measure the view angle recognition capability, expressed as follows:
Dif(SV 4 ,SV 5 )=arccos[Sim(SV 4 ,SV 5 )]
The larger Dif is, the greater the difference degree of the adjacent order sequence digital models under view angle j and the better the recognition capability of view angle j; conversely, the smaller Dif is, the poorer the recognition capability of view angle j.
Step 4.2, under the other view angles, repeat step 4.1 for this group of adjacent order sequence digital models to obtain their model difference degree Dif under the other view angles, and sort the values by magnitude to obtain the digital model difference degree set DIF(4, 5) of the adjacent order sequence, as shown in the following formula:
DIF(4,5)={Dif min (SV 4 ,SV 5 ),…,Dif max (SV 4 ,SV 5 )}
wherein:
DIF(4, 5) denotes the set of recognition capabilities of all view angles for the adjacent order sequence;
Dif_min(SV_4, SV_5) and Dif_max(SV_4, SV_5) respectively represent the minimum and maximum view angle recognition capability under the adjacent order sequence. The view angle corresponding to the maximum difference degree Dif_max(SV_4, SV_5) has the best recognition capability and is the preferred view angle of the current adjacent order sequence;
step 4.3, repeat steps 4.1 and 4.2 for all the other adjacent order sequences of the assembly body to obtain their model difference degree sets at each view angle, and sort each set by magnitude to obtain the preferred view angle of every adjacent order sequence. The difference degree Dif values of some view angles of all adjacent order sequences of assembly body Product1 are shown in fig. 9.
Step 5: constructing an assembly identification template of an assembled product:
For each group of adjacent order sequences, the assembly identification template IT of the product is constructed from the preferred view angle and the local point clouds of the digital models under that preferred view angle.
Wherein:
p_v represents the preferred view point coordinate of the adjacent order sequence in the digital model coordinate system.
Step 6: acquire the point cloud R_i of the product in the current assembly state under the initial view angle:
A depth camera shoots the assembled product in the real assembly scene at a free view angle and records the initial sampling view point coordinate p_0. The acquired point cloud is denoised and then converted from the camera coordinate system to the world coordinate system through translation and rotation transformations, giving the point cloud R_i of the product in the current assembly state. The product point cloud of assembly body Product1 is shown in FIG. 10.
Step 7: construct the adaptive layered projection contour (Layered Projection Contour, LPC) descriptor of the product:
Since the digital model coordinate system and the world coordinate system both take the assembly horizontal plane as the XOY plane, the product point cloud R_i and the digital model point cloud Q_i are aligned in the Z-axis direction but not in the other two coordinate directions; therefore, the LPC descriptor is constructed to align the product point cloud and the digital model point cloud in the X-axis and Y-axis directions by means of contour registration.
Step 7.1, taking the positive Z-axis direction as the layering direction, construct the layered projection contour descriptor of the product point cloud R_i: first measure the visible height h_k of each part, collect all the visible heights and select the minimum value h_min; when the slice height is lower than h_min, any incremental information can be captured, so the slice height is set below h_min. The k-th layered point cloud is expressed as follows:
wherein:
Pc_k represents the k-th layered point cloud after adaptive layering of the point cloud;
h* represents the maximum height of the points in the point cloud R_i, h* = z_max;
h* × (k−1)/λ represents the starting (minimum) ordinate of the k-th layered point cloud;
h* × k/λ represents the terminating (maximum) ordinate of the k-th layered point cloud;
The k-th layered point cloud Pc_k is projected onto the XOY plane along the negative Z-axis direction, giving the layered projection point set pc_k = {(x, y) | (x, y, z) ∈ Pc_k}.
Step 7.2, the two-dimensional point set pc_k obtained in step 7.1 contains a large number of points; considering the influence of noise, point cloud density and other factors, a convex hull algorithm is adopted to express the two-dimensional point set pc_k obtained after layered projection as a contour c_k, as shown in the following formula:
wherein:
convex () represents the Convex hull solving algorithm.
Finally, the layered projection contours c_k of the product point cloud R_i form, in sequence, the layered projection contour descriptor (LPC descriptor) of the model:
a schematic diagram of the hierarchical projection profile descriptor construction process is shown in fig. 11.
Step 8: construct the digital model point cloud layered projection contour descriptor, and realize local registration of the product contour and the digital model contour using the contour registration method based on the point-to-edge Hausdorff distance, obtaining the optimal registration matrix of each layer's contour:
step 8.1, deduce the theoretical assembly state of the product from the historical assembly state identification results, and apply steps 7.1 and 7.2 to the digital model Q_i of the theoretical assembly state to obtain the layered projection contour descriptor of the theoretical assembly state digital model.
Step 8.2, take a layer's contour of the product descriptor as the contour to be registered and the corresponding contour of the digital model descriptor as the target contour. Arbitrarily select two vertices u and v (one from each contour) as the registration basis and calculate the translation matrix between u and v; then extract the counterclockwise neighboring vertices u_0 and v_0 of u and v and calculate the rotation matrix of the registration edges (u, u_0) and (v, v_0). Finally, obtain the registration matrix T_{u·v} referenced to u and v from the translation and rotation matrices; through T_{u·v}, the contour to be registered can be registered to the target contour.
Step 8.3, calculate the point-to-edge Hausdorff distance VHD between the registered contour and the target contour, as shown in the following formula:

VHD = max_u min_e |u, e|
Wherein:
e represents an edge of the target contour, composed of adjacent vertices of the convex hull contour;
v_i represents a vertex of the target contour and v_{i+1} its adjacent vertex; the two vertices form the edge e;
u' represents the projection of vertex u onto the line on which edge e lies;
|u, e| denotes the distance from vertex u to edge e, which has three cases:
(1) when the projection u' of vertex u on edge e falls within the edge, |u, e| is the perpendicular distance from u to e, i.e. |u, e| = |u, u'|;
(2) when the projection u' of vertex u on edge e falls outside the edge and u is closer to v_i, i.e. |u, v_i| < |u, v_{i+1}|, then |u, e| equals |u, v_i|;
(3) when the projection u' of vertex u on edge e falls outside the edge and u is closer to v_{i+1}, i.e. |u, v_{i+1}| < |u, v_i|, then |u, e| equals |u, v_{i+1}|;
Step 8.4, traverse the contour to be registered and the target contour, selecting one vertex from each as a pair of registration reference points each time, and repeat steps 8.2 and 8.3 to obtain the point-to-edge Hausdorff distance VHD of each registration. The minimum VHD represents the best match of the two contours, and its registration matrix is the optimal registration matrix of that layer's contour. A schematic of the process of calculating the optimal registration matrix of a contour pair is shown in figure 12.
Step 8.5, traverse the product point cloud descriptor and the digital model point cloud descriptor, repeating steps 8.2–8.4 to obtain the optimal registration matrix of each layer's contour.
Step 9: overall registration is achieved based on contour coincidence, and a process diagram is shown in fig. 13:
step 9.1, for each pair of corresponding contours of the two descriptors, calculate the coincidence degree of the two contours after transformation by the optimal registration matrix of that layer, as shown in the following formula:
Wherein:
the coincidence degree represents the overlap of the two contour areas after registration; the larger the value, the better the matching degree of the two contours;
Step 9.2, transform each layer's contour of the product descriptor through the optimal registration matrix of the k-th layer; the average coincidence degree between all registered contours and the contours of the digital model descriptor is the overall matching value of the two point clouds under that registration matrix.
Step 9.3, traverse all registration matrices, calculate the overall match value of the two point clouds under each registration matrix, and sort them to obtain the maximum overall match value.
The registration matrix corresponding to the maximum overall match value is the overall optimal registration matrix T_match; the optimal registration results are shown in fig. 14.
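Steps 9.1-9.3 can be sketched as follows (an illustrative Python reconstruction; the patent's coincidence formula is not legible, so intersection-over-union is used as a plausible stand-in, and Sutherland-Hodgman clipping is exact here only because every layer contour is a convex hull per step 7.2):

```python
import numpy as np

def poly_area(pts):
    """Shoelace area of a simple polygon given as (N, 2) vertices."""
    x, y = np.asarray(pts, float).T
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def _intersect(a, b, p, q):
    """Intersection of the infinite line a-b with segment p-q."""
    d1, d2 = b - a, q - p
    t = ((p[0] - a[0]) * d2[1] - (p[1] - a[1]) * d2[0]) / (d1[0] * d2[1] - d1[1] * d2[0])
    return a + t * d1

def clip_convex(subject, clip):
    """Sutherland-Hodgman clipping of one convex polygon by another
    (both counter-clockwise)."""
    out = [np.asarray(p, float) for p in subject]
    clip = [np.asarray(p, float) for p in clip]
    for i in range(len(clip)):
        a, b = clip[i], clip[(i + 1) % len(clip)]
        inside = lambda p: (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
        inp, out = out, []
        for j in range(len(inp)):
            p, q = inp[j], inp[(j + 1) % len(inp)]
            if inside(q):
                if not inside(p):
                    out.append(_intersect(a, b, p, q))
                out.append(q)
            elif inside(p):
                out.append(_intersect(a, b, p, q))
        if not out:
            break
    return out

def contour_overlap(cA, cB):
    """Coincidence degree of two registered contours; larger is better."""
    inter = clip_convex(cA, cB)
    ia = poly_area(inter) if len(inter) >= 3 else 0.0
    ua = poly_area(cA) + poly_area(cB) - ia
    return ia / ua if ua > 0 else 0.0

def overall_match(layers_A, layers_B):
    """Step 9.2: mean coincidence degree over corresponding (already
    transformed) layer contours; step 9.3 keeps the registration matrix
    whose mean is largest."""
    vals = [contour_overlap(a, b) for a, b in zip(layers_A, layers_B)]
    return sum(vals) / len(vals)
```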
Step 10: the sampling viewpoint is transformed to the preferred viewpoint according to the registration relationship and resampling is performed; the process diagram is shown in fig. 15:
step 10.1, the current assembly state and the previous assembly state form an adjacent-order sequence; from the recognition template, the preferred viewpoint coordinate p_v corresponding to this adjacent-order sequence is obtained and converted from the digital-model coordinate system to the world coordinate system through the overall optimal registration matrix T_match, giving the coordinate p_r as the resampling viewpoint.
Step 10.2, from the initial sampling viewpoint p_0 and the resampling viewpoint p_r, the viewing-angle transformation relation is established; it guides the depth camera from the initial sampling position to the preferred sampling position, where it resamples and thereby acquires the point cloud of the in-process product under the preferred viewing angle.
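The coordinate conversion of step 10.1 is a single homogeneous transform; assuming T_match is a 4x4 rigid transformation matrix (a representation the text does not state explicitly), a minimal sketch:

```python
import numpy as np

def to_world(p_v, T_match):
    """Step 10.1: map the preferred viewpoint p_v from the digital-model
    coordinate system to the world coordinate system, giving the
    resampling viewpoint p_r (T_match assumed 4x4 homogeneous)."""
    ph = np.append(np.asarray(p_v, float), 1.0)  # homogeneous coordinates
    return (T_match @ ph)[:3]
```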
Step 11: analyze the difference in truncated-layer contour coincidence between the product point cloud and the digital-model point cloud under the preferred viewing angle, determine the key truncated layers, and finally complete assembly-state recognition using the coincidence degree of the key truncated layers:
step 11.1, under the preferred viewing angle, the point clouds of adjacent-order digital models are highly similar at the parts that are not newly installed, and the point cloud difference is concentrated at the newly installed part; the adaptive layering confines the newly installed part to a few truncated-layer contours. Taking the threshold range factor η = 0.2, the truncated-layer contour coincidence threshold range COVER* is used to divide the truncated layers of the LPC descriptors into two classes, general truncated layers L- and key truncated layers L*, as follows:
wherein:
n* denotes the number of digital-model point clouds to be matched against the in-process product;
denotes the r-th layer contour of the LPC descriptor of the m-th digital-model point cloud under the preferred viewing angle;
COVER* denotes the truncated-layer contour coincidence threshold range;
η denotes the threshold range factor, here taken as η = 0.2.
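The layer classification of step 11.1 can be sketched as follows (illustrative Python only; the exact COVER* criterion is garbled in the source, so a layer is treated here as general when all candidate models agree to within the threshold range factor η, and as key otherwise, which is one plausible reading):

```python
def split_truncated_layers(cover, eta=0.2):
    """Split truncated layers into general layers L- and key layers L*.
    cover[m][r] is the contour coincidence degree of layer r between the
    product point cloud and the m-th candidate digital-model point cloud.
    A layer whose coincidence spread across candidates exceeds eta is
    treated as a key layer -- an assumption, not the patent's formula."""
    general, key = [], []
    for r in range(len(cover[0])):
        vals = [row[r] for row in cover]
        (key if max(vals) - min(vals) > eta else general).append(r)
    return general, key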
Step 11.2, at the key truncated layers L*, the non-overlapping areas of all key truncated layers are summed to obtain the total non-coincidence value of each state model; the higher the coincidence between the in-process product and the target state model, the smaller the non-overlapping area, so the total non-coincidence value corresponding to the target state model is the smallest:
wherein:
denotes the total non-coincidence value of the key truncated layers between the m-th digital-model point cloud and the product point cloud;
denotes the non-overlapping area of the m-th digital-model point cloud and the product point cloud at key truncated layer r, where r ∈ {1, 2, …, r*} and r* denotes the number of key truncated layers in the set L*;
denotes the coincident contour area of the r-th key truncated layer of the digital-model point cloud and the product point cloud.
When the total non-coincidence value attains its minimum, the assembly state of the corresponding digital model is the assembly state of the in-process product, which completes the recognition of the in-process product's assembly state under the preferred viewing angle.
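The decision rule of step 11.2 can be sketched as follows (illustrative Python; the per-layer non-overlapping areas are assumed to be computed beforehand from the key-layer contours):

```python
def noncoincidence_total(key_layer_areas):
    """Sum of one candidate model's non-overlapping contour areas over
    the key truncated layers r = 1..r* (the inner sum of step 11.2)."""
    return sum(key_layer_areas)

def identify_state(per_model_key_areas):
    """Step 11.2: the candidate digital model with the smallest total
    non-coincidence value gives the assembly state of the in-process
    product; returns its index m."""
    totals = [noncoincidence_total(a) for a in per_model_key_areas]
    return totals.index(min(totals))
```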
Simulation verification:
To demonstrate the value of the viewing-angle preference step, the recognition process is first analyzed at the initial viewing angle. As shown in the left diagram of fig. 16, the in-process product point cloud has clearly lower similarity to the digital-model point clouds of assembly stages 1, 2 and 3 and higher similarity to stages 4 and 5; however, the similarities of stages 4 and 5 are nearly identical, indicating that this viewing angle cannot effectively distinguish them. Given this limitation of the initial viewing angle, the recognition process is then analyzed under viewing-angle preference. As shown in the right diagram of fig. 16, compared with the initial viewing angle, the digital-model point cloud similarities of assembly stages 4 and 5 differ significantly at the preferred viewing angle, showing that the preferred viewing angle distinguishes adjacent-order models better than the initial one; that is, viewing-angle preference plays a positive role in the recognition process. The assembly-state recognition results of 4 groups of samples under the initial and preferred viewing angles are shown in fig. 17; the results show that the proposed method effectively improves the accuracy of in-process product state recognition.
Claims (10)
1. An assembly state recognition method based on a preferred viewing angle, comprising the steps of:
Step 1: establishing m coaxial external regular polyhedrons for the digital model of each assembly state of the assembly body, wherein m is more than or equal to 1;
step 2: for the digital model of each assembly state, respectively taking the vertex of the external regular polyhedron and the central point of each surface as view angles, and obtaining local point clouds of the digital model at each view angle;
step 3: constructing the adjacent-order sequences <P_i, P_(i+1)> of the assembly process of the assembly body, where P_i denotes the i-th assembly state of the assembly body, P_(i+1) denotes the (i+1)-th assembly state of the assembly body, and n denotes the total number of parts of the assembly body;
step 4: for each adjacent order sequence, obtaining a model difference degree set of two related assembly state digital models under different view angles, and taking a view angle corresponding to the maximum value of the model difference degree as a preferred view angle of the related assembly state to obtain a preferred view angle of each assembly state;
step 5: for each assembly state, constructing the assembly identification template of the product jointly from the preferred viewing angle and the digital-model local point cloud under that preferred viewing angle;
step 6: taking any free view angle as an initial view angle, acquiring the point cloud of the product in the current assembly state under the initial view angle, and converting the point cloud into a world coordinate system;
Step 7: construction of the point cloud adaptive hierarchical projection contour descriptor LPC_R of the in-process product in the current assembly state and the hierarchical projection contour descriptor LPC_Q of the digital-model point cloud in the current assembly state;
Step 8: locally registering each layer of contour of the product contour and the digital model contour to obtain an optimal registration matrix of each layer of contour;
step 9: realizing overall registration based on the contour coincidence degree, and selecting an overall optimal registration matrix from the optimal registration matrices obtained in the step 8 based on the maximum value of the overall matching value:
step 10: acquiring the preferred viewpoint coordinate corresponding to the current adjacent-order sequence from the assembly identification template obtained in step 5, converting it from the digital-model coordinate system to the world coordinate system through the overall optimal registration matrix obtained in step 9, and resampling the in-process product point cloud with the converted coordinate as the resampling viewpoint, to obtain the in-process product point cloud under the preferred viewing angle;
step 11: analyzing, under the preferred viewing angle, the difference in truncated-layer contour coincidence between the in-process product point cloud obtained in step 10 and the corresponding digital-model point cloud queried from the assembly identification template established in step 5, thereby determining the key truncated layers, and finally completing assembly-state recognition using the key truncated-layer coincidence degree.
2. The assembly state recognition method based on a preferred viewing angle according to claim 1, wherein: the step 1 specifically comprises the following steps:
1.1 Establishing an external regular polyhedron for the digital model of each assembly state;
1.2) rotating the external regular polyhedron established in step 1.1) for the digital model of each assembly state m−1 times about the same rotation axis by an angle α, so that the digital model of each assembly state corresponds to m coaxial external regular polyhedrons.
3. The assembly state recognition method based on a preferred viewing angle according to claim 2, wherein: the angle α is less than or equal to 360°/m.
4. A preferred view angle based assembly state recognition method according to any one of claims 1 to 3, wherein:
for each adjacent-order sequence <P_i, P_(i+1)>, i = 1, 2, …, n, the step 4 is specifically as follows:
step 4.1) for the local point cloud of the digital model point cloud Q_i under a given viewing angle j, randomly sample x² times, computing in each sampling the Euclidean distance between the two sampled points, to obtain a set L = {l_1, …, l_(x²)}, where l_1 is the Euclidean distance between the two sampling points of the 1st sampling, …, and l_(x²) is the Euclidean distance between the two sampling points of the x²-th sampling;
step 4.2) representing the distribution of the set L by an x-bin equal-width histogram;
step 4.3) constructing from the equal-width histogram the shape distribution vector SV_i of the local point cloud of the digital model point cloud Q_i under viewing angle j;
step 4.4) for the local point cloud of the digital model point cloud Q_(i+1) under the same viewing angle j, repeating steps 4.1), 4.2) and 4.3) to construct its shape distribution vector SV_(i+1);
step 4.5) obtaining the included angle between the shape distribution vectors SV_i and SV_(i+1) of the digital models of the adjacent-order sequence under viewing angle j, and from this angle calculating the difference degree Dif(SV_i, SV_(i+1)) between the two digital models involved in the adjacent-order sequence;
step 4.6) repeating steps 4.1)-4.5) under the remaining viewing angles to obtain the model difference degree between the digital models of the adjacent-order sequence under each viewing angle, and taking the viewing angle corresponding to the maximum model difference degree as the preferred viewing angle of the two assembly states involved in the adjacent-order sequence.
5. The assembly state recognition method based on a preferred viewing angle according to claim 4, wherein:
the step 7 is specifically as follows:
7.1) with the positive Z-axis direction as the layering direction, constructing for the in-process product point cloud R_i the hierarchical projection contour projection point set pc_k = {(x, y) | (x, y, z) ∈ Pc_k};
Wherein:
Pc_k denotes the k-th segment of the hierarchical point cloud after the point cloud is adaptively layered;
h* denotes the maximum height of a point in the point cloud, i.e. h* = z_max;
h* × (k−1)/λ denotes the starting ordinate, i.e. the minimum ordinate, of the k-th segment of the hierarchical point cloud;
h* × k/λ denotes the terminating ordinate, i.e. the maximum ordinate, of the k-th segment of the hierarchical point cloud;
7.2) a convex hull algorithm is applied to express the two-dimensional hierarchical projection point set pc_k as a contour;
Wherein:
convex () represents the Convex hull solving algorithm;
7.3) the point cloud adaptive hierarchical projection contour descriptor LPC_R of the in-process product is composed of the layer contours obtained in step 7.2).
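The layered-projection descriptor of claim 5 can be sketched as follows (illustrative Python only; the Convex() operator is stood in by a monotone-chain hull, and uniform band heights replace the adaptive layering for brevity):

```python
import numpy as np

def _cross(o, a, b):
    """z-component of (a - o) x (b - o)."""
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull_2d(pts):
    """Andrew's monotone-chain hull, standing in for the Convex()
    operator of step 7.2; returns hull vertices counter-clockwise."""
    pts = sorted(map(tuple, pts))
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and _cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(reversed(pts))
    return np.asarray(lower[:-1] + upper[:-1], float)

def lpc_descriptor(cloud, lam):
    """Hierarchical projection contour descriptor LPC: slice the point
    cloud into lam equal-height bands along +Z using h* = z_max, project
    each band Pc_k to the XY plane as pc_k, and keep the convex-hull
    contour of each projection."""
    cloud = np.asarray(cloud, float)
    h = cloud[:, 2].max()
    layers = []
    for k in range(1, lam + 1):
        lo, hi = h * (k - 1) / lam, h * k / lam
        band = cloud[(cloud[:, 2] >= lo) & (cloud[:, 2] <= hi)]
        layers.append(convex_hull_2d(band[:, :2]) if len(band) >= 3 else None)
    return layers
```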
6. The assembly state recognition method based on a preferred viewing angle according to claim 5, wherein:
the step 8 is specifically as follows: a contour registration method based on the point-to-edge Hausdorff distance realizes local registration between the product contour and the digital-model contour, obtaining the optimal registration matrix of each contour layer.
7. The assembly state recognition method based on a preferred viewing angle according to claim 6, wherein:
the step 9 is specifically as follows:
each layer contour in the product point cloud hierarchical contour descriptor and each layer contour in the digital-model point cloud hierarchical descriptor are transformed through the optimal registration matrix T_1* of the layer-1 contour, and the mean coincidence degree of the transformed contours is calculated; the same contours are transformed through the optimal registration matrix T_2* of the layer-2 contour and the mean coincidence degree is calculated; …; the same contours are transformed through the optimal registration matrix T_λ* of the λ-th layer contour and the mean coincidence degree is calculated; the optimal registration matrix of the layer corresponding to the maximum mean coincidence degree is selected as the overall optimal registration matrix.
8. The assembly state recognition method based on a preferred viewing angle according to claim 7, wherein:
the step 11 specifically comprises the following steps:
11.1) obtaining the general truncated layers L- and the key truncated layers L* of the adjacent-order digital-model point clouds under the preferred viewing angle, as follows:
wherein:
n* represents the number of digital-model point clouds to be matched against the in-process product;
represents the r-th layer contour of the LPC descriptor of the m-th digital-model point cloud under the preferred viewing angle;
COVER* represents the truncated-layer contour coincidence threshold range;
η represents the threshold range factor, η ∈ (0, 1);
11.2) summing the non-overlapping areas of the key truncated layers to obtain the total non-coincidence value of each state model:
Wherein:
represents the total non-coincidence value of the key truncated layers between the m-th digital-model point cloud and the product point cloud;
represents the non-overlapping area of the m-th digital-model point cloud and the product point cloud at key truncated layer r, where r ∈ {1, 2, …, r*} and r* represents the number of key truncated layers in the set L*;
represents the coincident contour area of the r-th key truncated layer of the digital-model point cloud and the product point cloud.
9. A storage medium having a computer program stored thereon; the method is characterized in that: the computer program, when executed by a processor, performs the method of any of claims 1-8.
10. An electronic device comprising a processor and a storage medium; the storage medium has a computer program stored thereon; the method is characterized in that: the computer program, when executed by the processor, performs the method of any of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211532142.3A CN116091559A (en) | 2022-12-01 | 2022-12-01 | Assembly state identification method based on optimal viewing angle |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116091559A true CN116091559A (en) | 2023-05-09 |
Family
ID=86207095
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211532142.3A Pending CN116091559A (en) | 2022-12-01 | 2022-12-01 | Assembly state identification method based on optimal viewing angle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116091559A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116777909A (en) * | 2023-08-18 | 2023-09-19 | 德普数控(深圳)有限公司 | Quick positioning method for tool nose of numerical control machine tool based on point cloud data |
CN116777909B (en) * | 2023-08-18 | 2023-11-03 | 德普数控(深圳)有限公司 | Quick positioning method for tool nose of numerical control machine tool based on point cloud data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |