CN113155054B - Automatic three-dimensional scanning planning method for surface structured light - Google Patents

Automatic three-dimensional scanning planning method for surface structured light

Info

Publication number: CN113155054B (application CN202110408473.5A)
Authority: CN (China)
Prior art keywords: viewpoint, sampling, bounding box, point, calculating
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN113155054A (en)
Inventors: 梁晋 (Liang Jin), 马金泽 (Ma Jinze), 宗玉龙 (Zong Yulong), 赫景彬 (He Jingbin), 苗泽华 (Miao Zehua), 李成宏 (Li Chenghong)
Current Assignee: Xian Jiaotong University
Original Assignee: Xian Jiaotong University
Application filed by Xian Jiaotong University; priority to CN202110408473.5A
Publication of application CN113155054A; application granted and published as CN113155054B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/005 Tree description, e.g. octree, quadtree
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T 2210/00 Indexing scheme for image generation or computer graphics
    • G06T 2210/12 Bounding box

Abstract

The method samples the surface of a CAD model to generate a sampling point set, geometrically grids the CAD model based on the sampling points of the set to generate a gridded patch set, and establishes a hierarchical bounding box tree based on the gridded patch set. A preselected viewpoint is planned based on each sampling point of the set to generate a preselected viewpoint set: with the sampling point as starting point, whether the ray whose direction is the starting point's normal vector intersects a model in the scene is judged based on the hierarchical bounding box tree; if there is no intersection, the point is judged visible and a preselected viewpoint is created at the standard measurement distance. The sampling points within the measurement range of each preselected viewpoint of the preselected viewpoint set are counted and voted as the evaluation of viewpoint quality; screening and calculation determine the final measurement viewpoints, which form a measurement viewpoint set, and the scanning pose at each final measurement viewpoint of the set is calculated. A robot motion path is then planned for the measurement viewpoint set based on optimizing the motion angles of all robot axes.

Description

Automatic three-dimensional scanning planning method for surface structured light
Technical Field
The invention relates to the technical field of three-dimensional measurement, in particular to an automatic three-dimensional scanning planning method for surface structured light.
Background
With the continuous development of sensor technology, surface structured light three-dimensional scanning equipment is widely applied in the automobile and aerospace industries thanks to its high precision and high resolution. Because the field of view of the measuring equipment is limited and the measured object or its environment causes occlusion, the relative position between the equipment and the measured workpiece must be adjusted for every measurement. The workpiece is therefore usually measured from different positions by manually operating the equipment, or an experienced worker manually teaches the robot to edit the measurement viewpoints; the measuring process is very time-consuming, and the completeness of the scanned point cloud is difficult to guarantee. In recent years, many researchers have worked on automated three-dimensional measurement through the cooperation of a robot and an external motion mechanism.
Automatic three-dimensional scanning planning comprises viewpoint planning and robot motion path planning. In the measuring process, each measurement pose directly determines the pose of the three-dimensional scanner, so the viewpoints greatly influence the quality of the overall scan. Viewpoint calculation for the workpiece must also consider factors such as the measurement range of the equipment and line-of-sight occlusion, and for measurement efficiency the three-dimensional measurement of the workpiece should be completed with the minimum number of viewpoints. In actual measurement, the robot motion path also needs to be as short as possible to improve overall measurement efficiency. Therefore, viewpoint generation and robot motion path planning are crucial in the automatic surface structured light three-dimensional scanning planning process.
The above information disclosed in this Background section is only for enhancement of understanding of the background of the invention, and therefore it may contain information that does not form prior art already known to a person of ordinary skill in the art.
Disclosure of Invention
The invention aims to provide an automatic three-dimensional scanning planning method for surface structured light: based on parameters such as the field angle and the measuring distance of the surface structured light three-dimensional measuring equipment, measurement viewpoints are calculated for a given workpiece CAD model and the motion path of the robot is planned, thereby realizing automatic three-dimensional scanning with high efficiency and high completeness. In order to achieve the above purpose, the invention provides the following technical scheme:
the automatic three-dimensional scanning planning method for the surface structured light comprises the following steps:
step S01: introducing a CAD model of an object to be measured, sampling on the surface of the CAD model to generate a sampling point set,
step S02: geometrically gridding the CAD model based on the sampling points of the set of sampling points to generate a set of gridded patches, building a hierarchical bounding box tree based on the set of gridded patches,
step S03: planning a preselected viewpoint based on each sampling point of the sampling point set to generate a preselected viewpoint set, wherein, with the sampling point as starting point, whether the ray whose direction is the normal vector of the starting point intersects a model in the scene is judged based on the hierarchical bounding box tree; if the ray does not intersect the model in the scene, the point is judged visible and a preselected viewpoint is created at the standard measurement distance,
step S04: counting the sampling points within the measurement range of each preselected viewpoint of the preselected viewpoint set and voting as the evaluation of viewpoint quality, determining the final measurement viewpoints by screening and calculation to form a measurement viewpoint set, and calculating the scanning pose at each final measurement viewpoint of the measurement viewpoint set,
step S05: planning a robot motion path for the measurement viewpoint set based on optimizing the motion angle of each robot axis, and performing automatic three-dimensional scanning control according to the robot axis configuration and motion path sequence at each viewpoint.
In the automatic three-dimensional scanning planning method for surface structured light, the surface of the CAD model is sampled equidistantly: the equidistant sampling calculation is first carried out on the parametric surface, and whether each sampling point is within the boundary is then judged, wherein for a k_u × k_v order NURBS surface p(u, v),

$$p(u,v)=\frac{\sum_{i=0}^{n_u}\sum_{j=0}^{n_v}\omega_{ij}P_{ij}N_{i,k_u}(u)N_{j,k_v}(v)}{\sum_{i=0}^{n_u}\sum_{j=0}^{n_v}\omega_{ij}N_{i,k_u}(u)N_{j,k_v}(v)}$$

where P_ij (i = 0, 1, ..., n_u; j = 0, 1, ..., n_v) are the control vertices of the NURBS surface, N_{i,k_u}(u) and N_{j,k_v}(v) are the B-spline basis functions of order k_u, k_v defined on the knot vectors, p(u, v) represents the surface, NURBS is the non-uniform rational B-spline, and ω_ij is the weight factor of each control vertex. The isoparametric lines in the u and v directions are calculated:

$$p(u)=p(u,v_c),\qquad p(v)=p(u_c,v),$$

and arc length integration is performed along an isoparametric line in one direction:

$$s(u)=\int_{u_s}^{u}\left\|\frac{\partial p(u,v_c)}{\partial u}\right\|\,\mathrm{d}u.$$

The parameter coordinates corresponding to the sampling points are calculated at a preset spacing, and a series of sampling points on the parameter line is calculated from the parameter coordinates. The intersection points of the u-direction isoparametric line p(u) with the polygonal boundary parameter curves are solved, and the v parameter coordinates of the intersection point set are arranged from small to large as {v_{c1}, v_{c2}, ..., v_{cn}}. For a continuous interval, if the numbers of intersection points on the left and right sides of the interval are odd, all sampling points on the interval are within the boundary. The normal vector of each in-boundary sampling point is calculated: the coordinates of the sampling point on the parametric surface are recorded as (u_0, v_0), and the partial derivative vectors of the surface at that point are calculated from the parametric equations,

$$p_u(u_0,v_0)=\frac{\partial p}{\partial u}\Big|_{(u_0,v_0)},\qquad p_v(u_0,v_0)=\frac{\partial p}{\partial v}\Big|_{(u_0,v_0)},$$

and the vector n(u_0, v_0) is calculated by cross-multiplying and normalizing the two partial derivative vectors:

$$n(u_0,v_0)=\frac{p_u(u_0,v_0)\times p_v(u_0,v_0)}{\left\|p_u(u_0,v_0)\times p_v(u_0,v_0)\right\|}.$$
In the automatic three-dimensional scanning planning method for surface structured light, in step S02 the geometric gridding is triangular gridding and the gridded patches are triangular patches; the sampling of step S01 is executed on the CAD model at a second interval smaller than the equidistant interval to obtain a second sampling point set; the sampling points of the second sampling point set are connected into a mesh sequentially along adjacent sampling curves, and triangular mesh vertices are adjusted onto the sampling points on common edges to eliminate non-manifold mesh configurations, generating the triangular mesh.
In the automatic three-dimensional scanning planning method for surface structured light, in step S02 a hierarchical bounding box tree is constructed based on the triangular mesh, wherein:
step S021: for the triangular patch set P to be divided, the barycentric coordinates of each triangle are computed at initialization,
step S022: a tree node is created and the oriented bounding box of the triangular patch set P is calculated; the covariance matrix C of the barycentric coordinates of all triangles in the triangular patch set P is

$$C=\frac{1}{n}\sum_{k=1}^{n}\left(m_k-\mu\right)\left(m_k-\mu\right)^{T},\qquad \mu=\frac{1}{n}\sum_{k=1}^{n}m_k,$$

where m_k is the barycenter of the k-th triangle and n the number of triangles in P; the eigenvector corresponding to the largest eigenvalue is calculated as the principal axis of the bounding box, the remaining eigenvectors serve as the other two axes, and the side lengths of the bounding box are calculated from the projection range of the triangle vertices along each axis; if the number of triangles in the triangular patch set P is less than a set threshold, the calculation stops,
step S023: the projection coordinates of the barycentric coordinates of the triangular patch set P on the principal axis are calculated and their median is recorded as p_mid; the triangles of P whose barycenter projection on the principal axis is less than p_mid and those whose projection is greater than p_mid are divided into two parts, each part returns to step S022 for calculation, and after the loop ends the hierarchical bounding box tree is constructed.
In the automatic three-dimensional scanning planning method for surface structured light, in step S03 a preselected viewpoint is planned for each sampling point from its position and neighborhood information, generating a preselected viewpoint set; whether the ray whose direction is the normal vector of the starting point intersects a model in the scene is judged based on the hierarchical bounding box tree; if the ray does not intersect the model, the point is judged visible and a preselected viewpoint is created at the standard measurement distance; if the ray intersects the model, the direction vector is deflected by a fixed angle each time and the viewpoint is calculated from the direction vector on a cone until no intersection exists; and if occlusion exists in all directions, the sampling point is judged invisible.
In the automatic three-dimensional scanning planning method for surface structured light, each line-of-sight occlusion calculation is accelerated by the hierarchical bounding box tree, wherein:
in step S031, a priority queue of nodes to be calculated is initialized, the priority queue is ordered with the distance from the sight origin to the intersection with the bounding box as key value, and the root node of the hierarchical bounding box tree is enqueued,
in step S032, whether the queue is empty is judged, and calculation stops when the queue is empty; if not empty, the tree node at the queue head is dequeued and whether the ray with the sampling point as origin intersects the corresponding bounding box is calculated,
in step S033, if there is no intersection, the ray cannot intersect any element inside the bounding box, and the process returns to step S032; if an intersection exists and the node is not a leaf node, its left and right child nodes are enqueued in order with the distance traveled by the intersecting ray as key value; if it is a leaf node, the intersection of the ray with the triangles in the node is calculated; an intersection indicates occlusion and the calculation stops; if there is no intersection, the process returns to step S032,
and the observation direction of each generated preselected viewpoint is opposite to the normal vector of its corresponding sampling point.
According to the automatic three-dimensional scanning planning method for the surface structured light, the common view field range is calculated according to the relative position relation between the camera and the projector and the respective view field angles, and the common view field range is simplified into a cone which takes the optical center of the projector as a vertex, the sight line direction of the projector as an axis and the view field angle as a cone angle.
In the automatic three-dimensional scanning planning method for surface structured light, the triangular mesh model of the measured object and the three-dimensional coordinates of the sampling points are projected to the viewpoint through projective transformation, a depth buffer is calculated with the mesh model, and the visibility from the viewpoint is judged by comparing the depth of each sampling point with the depth buffer value, wherein, for a viewpoint p, the vote obtained is the sum of the viewpoint evaluation function over each visible sampling point in the view cone:

$$S(p)=\sum_{i\in w(p)}V(p,i)$$

where w(p) is the set of sampling points observed by the viewpoint and V(p, i) is the voting contribution of sampling point i to viewpoint p; the evaluation function is as follows,

$$V(p,i)=\begin{cases}1, & \theta(n_p,n_i)\le\theta_0\\ 1-K_p\left(\theta(n_p,n_i)-\theta_0\right), & \theta_0<\theta(n_p,n_i)\le\theta_1\\ 0, & \theta(n_p,n_i)>\theta_1\end{cases}$$

where θ(n_p, n_i) is the angle between the sampling point normal vector and the line of sight, θ_0 and θ_1 are set thresholds on that angle, and K_p is a slope constant.
In the automatic three-dimensional scanning planning method for surface structured light, the distance function between viewpoints p_i and p_j is:

$$D(p_i,p_j)=\frac{\left\|v_i-v_j\right\|}{d_{box}}+\frac{d_m}{d_{box}}\left(1-n_i\cdot n_j\right)$$

where v_i is the three-dimensional coordinate of viewpoint i, n_i is the normal vector of viewpoint i, d_box is the distance between diagonal corners of the bounding box of the measured object, and d_m is the standard measurement distance.
First, the final scanning viewpoint set is initialized to empty and the visibility evaluation of all sampling points is initialized to 0. In each loop of the calculation, after the sampling points have voted for the preselected viewpoints, clustering is performed around the highest vote count, and the viewpoint P_i in the set with the largest distance to the final viewpoint set is selected, the distance being calculated with the distance function defined above. The set w(P_0) of all sampling points contributing votes to the viewpoint evaluation is counted. For the sampling points v_i ∈ w(P_0) in the set, their three-dimensional coordinates are projected onto the plane perpendicular to the preselected viewpoint; the covariance matrix of the projected points and its eigenvalues and eigenvectors are calculated, and the two eigenvectors are used as the two coordinate axes of the viewpoint observation. With the coordinate axes planned, the pose of the measuring equipment is calculated and collision detection determines whether a collision exists; each collision interference calculation is accelerated by the hierarchical bounding box tree, wherein:
step S041: the OBB bounding box of the scanning device at the viewpoint is calculated,
step S042: a priority queue of nodes to be calculated is initialized; the priority queue takes the distance from the sight origin to the intersection point with the bounding box as key value, and the root node of the hierarchical bounding box tree is enqueued,
step S043: whether the queue is empty is judged; an empty queue indicates no collision and the calculation stops; if not empty, the tree node at the queue head is dequeued and whether the bounding box corresponding to the node intersects the scanning device is calculated,
step S044: if they do not intersect, nothing in the bounding box can collide with the device, and the process returns to step S043; if an intersection exists and the node is not a leaf node, the left and right child nodes are enqueued; if it is a leaf node, the relation between the triangle vertices in the node and the device bounding box is calculated, and a vertex inside the device bounding box indicates a collision,
if the viewpoint fails collision detection, it is deleted and the calculation continues; if it passes collision detection, the viewpoint is put into the final measurement viewpoint set, and for each sampling point v_i ∈ w(P_0) in the current set, the voting function value V(P_0, v_i) of the sampling point for the viewpoint is added to its visibility evaluation; if the visibility evaluation of a sampling point is greater than or equal to 1, the preselected viewpoint corresponding to that sampling point is removed; the loop is calculated until the preselected viewpoint set is empty, yielding the final scanning viewpoint set, where w(p) is the set of sampling points observable from the viewpoint and V(p, i) is the voting contribution of sampling point i to viewpoint p.
The motion path is planned by optimizing the motion angles of the robot axes: the axis configuration of the robot at each viewpoint is calculated by robot inverse kinematics, the generated viewpoints are partitioned according to the rotation angle of the opposing turntable until the number of viewpoints in each region is less than a threshold, the full permutation of viewpoint connection combinations is calculated and the connection mode with the smallest axis motion angle is taken as the motion path, the partitions are connected in sequence to generate the robot motion path, and automatic three-dimensional scanning is performed according to the robot axis configuration at each viewpoint and the motion path sequence.
In the above technical solution, the automatic three-dimensional scanning planning method for surface structured light provided by the present invention has the following beneficial effects: the method takes into account collision, line-of-sight occlusion interference, and the complexity of binocular camera planning in actual measurement, and effectively improves the completeness and efficiency of three-dimensional scanning.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below; it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of the steps of the surface structured light automated three-dimensional scanning planning method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of judging whether a sampling point is within the boundary in the surface structured light automated three-dimensional scanning planning method according to an embodiment of the present invention;
FIG. 3 is a pump body model sampling point result diagram of the surface structured light automated three-dimensional scanning planning method according to an embodiment of the present invention;
FIG. 4 is a pump body CAD model triangular gridding result diagram of the surface structured light automated three-dimensional scanning planning method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of calculating a preselected viewpoint from a sampling point in the surface structured light automated three-dimensional scanning planning method according to an embodiment of the present invention;
FIG. 6 is a pump body model preselected viewpoint calculation result diagram of the surface structured light automated three-dimensional scanning planning method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of the binocular structured light measurement visual field range of the surface structured light automated three-dimensional scanning planning method according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of sampling point voting calculation of the surface structured light automated three-dimensional scanning planning method according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of measurement viewpoint coordinate system planning of the surface structured light automated three-dimensional scanning planning method according to an embodiment of the present invention;
FIG. 10 is a pump body model measurement viewpoint result diagram of the surface structured light automated three-dimensional scanning planning method according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of robot motion path planning of the surface structured light automated three-dimensional scanning planning method according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of the automated three-dimensional scanning point cloud result of a pump body workpiece according to the surface structured light automated three-dimensional scanning planning method of an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more clear, the technical solutions of the embodiments of the present invention will be described clearly and completely with reference to fig. 1 to 12 of the accompanying drawings of the embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
Thus, the following detailed description of the embodiments of the present invention, as presented in the figures, is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like, indicate orientations and positional relationships based on those shown in the drawings, and are used only for convenience of description and simplicity of description, and do not indicate or imply that the equipment or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be considered as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the present invention, unless otherwise explicitly stated or limited, the terms "mounted," "connected," "fixed," and the like are to be construed broadly, e.g., as being permanently connected, detachably connected, or integral; either directly or indirectly through intervening media, either internally or in any other relationship. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In the present invention, unless otherwise expressly stated or limited, "above" or "below" a first feature means that the first and second features are in direct contact, or that the first and second features are not in direct contact but are in contact with each other via another feature therebetween. Also, the first feature being "on," "above" and "over" the second feature includes the first feature being directly on and obliquely above the second feature, or merely indicating that the first feature is at a higher level than the second feature. "beneath," "under" and "beneath" a first feature includes the first feature being directly beneath and obliquely beneath the second feature, or simply indicating that the first feature is at a lesser elevation than the second feature.
In order to make the technical solutions of the present invention better understood, those skilled in the art will now describe the present invention in further detail with reference to the accompanying drawings.
Three-dimensional scanning of surface structured light: one or more two-dimensional patterns are projected onto the surface of the measured object, and depth is calculated from the structured illumination. In industrial three-dimensional measurement, gratings are mostly projected onto the surface of the measured object, and the change of each grating on the surface is recorded by a binocular camera to calculate the position and depth information of the surface.
NURBS: the non-uniform rational B-spline is a mathematical method for curve surface description and is widely applied in the field of computer aided geometric design.
Triangular mesh: a data structure for representing a three-dimensional model, the mesh being composed of triangular patches, the triangular mesh providing topological relationships between the patches.
Viewpoint: pose of the three-dimensional reconstruction device relative to the object coordinate system each time
Fig. 1 is a schematic flow chart of three-dimensional measurement viewpoint calculation for a part CAD model according to an embodiment of the present invention. The specific steps of the calculation process are as follows:
step S01: CAD model surface sampling
The CAD model is read and its surface is sampled equidistantly. The CAD model uses a boundary representation, i.e., each surface is represented by a parametric surface and its boundary by parametric curves on that surface. For a k_u × k_v order NURBS surface:

$$p(u,v)=\frac{\sum_{i=0}^{n_u}\sum_{j=0}^{n_v}\omega_{ij}P_{ij}N_{i,k_u}(u)N_{j,k_v}(v)}{\sum_{i=0}^{n_u}\sum_{j=0}^{n_v}\omega_{ij}N_{i,k_u}(u)N_{j,k_v}(v)}$$

where P_ij (i = 0, 1, ..., n_u; j = 0, 1, ..., n_v) are the control vertices of the surface, N_{i,k_u}(u) and N_{j,k_v}(v) are the B-spline basis functions defined on the knot vectors, and ω_ij is the weight factor of each control vertex. The isoparametric lines in the u and v directions are calculated:

$$p(u)=p(u,v_c),\qquad p(v)=p(u_c,v),$$

and arc length integration is performed along an isoparametric line in one direction:

$$s(u)=\int_{u_s}^{u}\left\|\frac{\partial p(u,v_c)}{\partial u}\right\|\,\mathrm{d}u.$$

During the numerical evaluation of this integral, the parameter coordinates corresponding to the sampling points are calculated at the set spacing, and a series of sampling points on the parameter line is calculated from the parameter coordinates. Since each surface of the CAD model is trimmed by parameterized boundary curves, each candidate sampling point must be tested for lying within the boundary. For the u-direction parameter line p(u), its intersection points with the polygonal boundary parameter curves are solved, and the v parameter coordinates of the intersection point set are arranged from small to large as {v_{c1}, v_{c2}, ..., v_{cn}}. For a continuous interval, if the numbers of intersection points on the left and right sides of the interval are odd, all sampling points on the interval are within the boundary. In the example shown in FIG. 2, the intersections of the parametric curve p(u) with the boundary curve are calculated, and the sample points between v_{c1} and v_{c2}, v_{c3} and v_{c4}, and v_{c5} and v_{c6} on the curve are within the boundary.
The normal vector of each in-boundary sampling point is calculated. The partial derivative vectors of the two parameter curves at the point (u_0, v_0) are

$$p_u(u_0,v_0)=\frac{\partial p}{\partial u}\Big|_{(u_0,v_0)},\qquad p_v(u_0,v_0)=\frac{\partial p}{\partial v}\Big|_{(u_0,v_0)},$$

and the vector n(u_0, v_0) is calculated by cross-multiplying and normalizing the two partial derivative vectors:

$$n(u_0,v_0)=\frac{p_u(u_0,v_0)\times p_v(u_0,v_0)}{\left\|p_u(u_0,v_0)\times p_v(u_0,v_0)\right\|}.$$
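As an illustration of the in-boundary test and normal computation above, the following is a minimal Python sketch, not part of the patent. It assumes the boundary intersections with the u-direction isoparametric line have already been computed, and that the surface partial derivatives are available as callables; all names (`points_in_boundary`, `surf_du`, `surf_dv`) are hypothetical.

```python
import numpy as np

def points_in_boundary(v_samples, v_crossings):
    """Parity test: a sample at parameter v lies inside the trimmed region
    if the number of boundary crossings on its left (smaller v) side is odd."""
    v_crossings = np.sort(np.asarray(v_crossings))    # {v_c1, v_c2, ..., v_cn}
    n_left = np.searchsorted(v_crossings, v_samples)  # crossings left of each sample
    return n_left % 2 == 1

def surface_normal(surf_du, surf_dv, u0, v0):
    """Normal vector: normalized cross product of the partial derivative
    vectors p_u(u0, v0) and p_v(u0, v0) of the parametric surface."""
    pu = np.asarray(surf_du(u0, v0))   # partial derivative w.r.t. u
    pv = np.asarray(surf_dv(u0, v0))   # partial derivative w.r.t. v
    n = np.cross(pu, pv)
    return n / np.linalg.norm(n)
```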
Taking a pump CAD model as an example, the set of calculated sampling points is shown in FIG. 3.
Step S02: CAD model gridding and hierarchical bounding box tree establishment
The CAD model is triangulated: the sampling operation of step S01 is first performed on the model at smaller intervals. For the obtained sampling point set, the in-surface sampling points are connected into a mesh sequentially along adjacent sampling curves, and triangular mesh vertices are adjusted onto the sampling points on common edges to eliminate non-manifold mesh configurations; the triangulation of the pump body CAD model is shown in FIG. 4.
A hierarchical bounding box tree is constructed over the generated triangular mesh for the subsequent line-of-sight occlusion and collision interference calculations. The generation algorithm is as follows:
(1) For the triangular patch set P to be divided, the barycentric coordinates of each triangle are computed at initialization.
(2) A tree node is created and the oriented bounding box (OBB) of the set P is calculated; the covariance matrix C of the barycentric coordinates of all triangles in P is

$$C=\frac{1}{n}\sum_{k=1}^{n}\left(m_k-\mu\right)\left(m_k-\mu\right)^{T},\qquad \mu=\frac{1}{n}\sum_{k=1}^{n}m_k,$$

where m_k is the barycenter of the k-th triangle and n the number of triangles in P. The eigenvector corresponding to the largest eigenvalue is taken as the principal axis of the bounding box and the remaining eigenvectors as the other two axes; the side lengths of the bounding box are calculated from the projection range of the triangle vertices along each axis. If the number of triangles in the set P is less than the set threshold, the calculation stops.
(3) The projection coordinates of the barycenters of the triangular patch set P on the principal axis are calculated, and their median is recorded as p_mid. The triangles of P whose barycenter projection on the principal axis is less than p_mid and those whose projection is greater than p_mid are divided into two parts; each part returns to step (2) for calculation, and when the loop ends the hierarchical bounding box tree has been constructed.
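A compact sketch of steps (1)-(3) might look as follows. `Node`, `LEAF_SIZE`, and the use of numpy's `eigh` and `cov` are illustrative assumptions rather than the patent's exact data structures; degenerate splits are guarded so the recursion always terminates.

```python
import numpy as np

LEAF_SIZE = 8  # assumed threshold below which a node stays a leaf

class Node:
    def __init__(self, axes, center, extents, tris):
        self.axes, self.center, self.extents = axes, center, extents
        self.tris = tris          # triangle index rows for leaves, None for inner nodes
        self.left = self.right = None

def build_obb_tree(vertices, tris):
    """vertices: (m, 3) coordinates; tris: (n, 3) vertex indices."""
    tris = np.asarray(tris)
    bary = vertices[tris].mean(axis=1)              # (1) triangle barycenters
    C = np.cov(bary.T, bias=True)                   # (2) covariance of barycenters
    _, eigvec = np.linalg.eigh(C)
    axes = eigvec[:, ::-1]                          # principal axis = largest eigenvalue
    proj = vertices[tris].reshape(-1, 3) @ axes     # vertex projections on each axis
    lo, hi = proj.min(axis=0), proj.max(axis=0)     # side lengths from projection range
    node = Node(axes, axes @ ((lo + hi) / 2), (hi - lo) / 2, tris)
    if len(tris) < LEAF_SIZE:
        return node                                 # stop: few enough triangles
    p = bary @ axes[:, 0]                           # (3) projections on the principal axis
    p_mid = np.median(p)
    left, right = tris[p < p_mid], tris[p >= p_mid]
    if len(left) and len(right):                    # guard against degenerate splits
        node.tris = None
        node.left = build_obb_tree(vertices, left)
        node.right = build_obb_tree(vertices, right)
    return node
```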
Step S03: preselected viewpoint generation
A preselected viewpoint is planned for each sampling point from its position and neighborhood information, generating a preselected viewpoint set. Whether the ray with the point as starting point and the point's normal vector as direction intersects a model in the scene is calculated; if there is no intersection, a preselected viewpoint is created at the standard measurement distance. If an intersection occurs, the direction vector is deflected by a fixed angle each time and the viewpoint is calculated from the direction vector on a cone until no intersection exists, as shown in FIG. 5. If occlusion exists in all directions, the point is judged invisible. Each line-of-sight occlusion calculation is accelerated by the hierarchical bounding box tree constructed in step S02; the specific process is as follows:
(1) A priority queue of nodes to be calculated is initialized; the queue is ordered with the distance from the sight origin to the intersection with the bounding box as key value, and the root node of the hierarchical bounding box tree is enqueued.
(2) Whether the queue is empty is judged; an empty queue indicates that there is no line-of-sight occlusion, and the calculation stops. If not empty, the tree node at the queue head is dequeued, and whether the ray with the sampling point as origin intersects the corresponding bounding box is calculated.
(3) If there is no intersection, the ray cannot intersect any element inside the bounding box, and the process returns to (2). If an intersection exists and the node is not a leaf node, its left and right child nodes are enqueued in order with the distance traveled by the intersecting ray as key value. If it is a leaf node, the intersection of the ray with the triangles in the node is calculated; an intersection indicates occlusion and the calculation stops. If there is no intersection, the process returns to (2).
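The traversal in (1)-(3) can be sketched as a best-first search over the tree built above. `ray_box_distance` (returning the entry distance, or None on a miss) and `ray_hits_triangle` stand in for standard slab and Möller-Trumbore tests; both are assumptions of this sketch.

```python
import heapq

def is_occluded(root, origin, direction, ray_box_distance, ray_hits_triangle):
    """Best-first traversal of the bounding box tree: returns True as soon
    as the ray from `origin` along `direction` hits any triangle."""
    queue = [(0.0, id(root), root)]          # (1) priority queue keyed by entry distance
    while queue:                             # (2) empty queue: no occlusion
        _, _, node = heapq.heappop(queue)
        if node.tris is not None:            # leaf: test the triangles themselves
            if any(ray_hits_triangle(origin, direction, tri) for tri in node.tris):
                return True                  # intersection found: line of sight blocked
            continue
        for child in (node.left, node.right):  # (3) inner node: enqueue hit children
            t = ray_box_distance(origin, direction, child)
            if t is not None:                # None: the ray misses this bounding box
                heapq.heappush(queue, (t, id(child), child))
    return False
```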
The observation direction of each generated preselected viewpoint is opposite to the normal vector of its corresponding sampling point. The set of preselected viewpoints generated for the pump body model is shown in FIG. 6.
Step S04: sample point vote calculation
The visual field range is calculated and simplified according to the surface structured light three-dimensional reconstruction principle and the relative position of the camera and projector. The surface structured light scanning equipment projects fringe images with the projector while the binocular camera synchronously acquires images of the projected fringes, and performs three-dimensional reconstruction by the multi-frequency heterodyne method, binocular matching, and triangulation. From the relative position of the camera and projector and their respective field angles, their common visual field range can be calculated as the region enclosed by the thick solid line in FIG. 7. For convenience of calculation, it is simplified to a cone with the projector optical center as vertex, the projector sight direction as axis, and the smaller field angle as cone angle, as shown in FIG. 8.
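With the field of view reduced to a cone, testing whether a sampling point lies in a viewpoint's measurement range becomes a short angle-and-depth check, as in the sketch below; the depth limits `d_min` and `d_max` are assumed stand-ins for the device's working distance range.

```python
import numpy as np

def in_view_cone(point, apex, axis, half_angle, d_min, d_max):
    """True if `point` lies inside the measurement cone with the projector
    optical center `apex`, unit sight direction `axis`, and the given cone
    half-angle, between the assumed depth limits d_min and d_max (> 0)."""
    v = np.asarray(point) - np.asarray(apex)
    d = v @ axis                           # depth of the point along the sight axis
    if not (d_min <= d <= d_max):
        return False                       # outside the working distance range
    cos_angle = d / np.linalg.norm(v)      # cosine of the angle to the cone axis
    return cos_angle >= np.cos(half_angle)
```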
The preselected viewpoints calculated in step S03 are overly redundant, so the preselected viewpoint set must be reduced to obtain the final scanning viewpoints. Sampling point voting is adopted to evaluate viewpoint quality. For each sampling point it is calculated whether it lies within the view cone of the viewpoint. If it does, the triangular mesh model of the measured object and the three-dimensional coordinates of the sampling point are projected to the viewpoint through projective transformation, a depth buffer is calculated from the mesh model, and the visibility from the viewpoint is judged by comparing the depth of the sampling point with the depth buffer value, as shown in FIG. 8. For a viewpoint p, the vote obtained is the sum of the viewpoint evaluation function over each visible sampling point in the view cone:
$$S(p)=\sum_{i\in w(p)}V(p,i)$$
where w(p) is the set of sampling points observable from the viewpoint and V(p, i) is the voting contribution of sampling point i to viewpoint p. The evaluation function is as follows, where θ(n_p, n_i) is the angle between the sampling point normal vector and the line of sight, θ_0 and θ_1 are set thresholds on that angle, and K_p is a slope constant:

$$V(p,i)=\begin{cases}1, & \theta(n_p,n_i)\le\theta_0\\ 1-K_p\left(\theta(n_p,n_i)-\theta_0\right), & \theta_0<\theta(n_p,n_i)\le\theta_1\\ 0, & \theta(n_p,n_i)>\theta_1\end{cases}$$
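Under the piecewise-linear reading of the evaluation function reconstructed above (an assumption), the vote of a viewpoint could be computed as follows; the helper names are hypothetical.

```python
import numpy as np

def vote(theta, theta0, theta1, k_p):
    """Voting contribution V(p, i) of sampling point i to viewpoint p under
    the assumed piecewise-linear falloff between theta0 and theta1."""
    if theta <= theta0:
        return 1.0                                      # well-oriented point: full vote
    if theta <= theta1:
        return max(0.0, 1.0 - k_p * (theta - theta0))   # linear falloff
    return 0.0                                          # grazing view: no contribution

def viewpoint_score(sample_normals, sight_dirs, theta0, theta1, k_p):
    """Total vote of a viewpoint: the sum of V(p, i) over its visible samples.
    sight_dirs: unit vectors from the viewpoint to each visible sample."""
    total = 0.0
    for n_i, s in zip(sample_normals, sight_dirs):
        theta = np.arccos(np.clip(n_i @ -s, -1.0, 1.0))  # angle normal vs. line of sight
        total += vote(theta, theta0, theta1, k_p)
    return total
```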
Step S05: measurement viewpoint pose determination
A preselected viewpoint set with sampling point voting evaluations is thus obtained; because the spacing between sampling points is small, the generated preselected viewpoints are overly redundant. The finally generated viewpoint set is expected to reconstruct the point cloud of the workpiece as completely as possible, while the number of viewpoints must not be too large, to keep the whole scanning process efficient. A distance function between viewpoints p_i and p_j is defined:
$$D(p_i,p_j)=\frac{\left\|v_i-v_j\right\|}{d_{box}}+\frac{d_m}{d_{box}}\left(1-n_i\cdot n_j\right)$$

where v_i is the three-dimensional coordinate of viewpoint i, n_i is the normal vector of viewpoint i, d_box is the distance between diagonal corners of the bounding box of the measured object, and d_m is the standard measurement distance.
First, the final scanning viewpoint set is initialized to empty and the visibility evaluation of all sampling points is initialized to 0. In each loop of the calculation, after the sampling points have voted for the preselected viewpoints, clustering is performed around the highest vote count, and the viewpoint P_i in the set with the largest distance to the final viewpoint set is selected, the distance being calculated with the distance function defined above. The set w(P_0) of all sampling points contributing votes to the viewpoint evaluation is counted. As shown in FIG. 9, for the sampling points v_i ∈ w(P_0) in the set, their three-dimensional coordinates are projected onto the plane perpendicular to the preselected viewpoint, the covariance matrix of the projected points is calculated together with its eigenvalues and eigenvectors, and the two eigenvectors are taken as the two coordinate axes of the viewpoint observation. With the coordinate axes planned, the pose of the measuring equipment can be calculated, and collision detection then determines whether a collision exists. Each collision interference calculation is accelerated by the hierarchical bounding box tree constructed in step S02; the specific process is as follows:
(1) The OBB bounding box of the scanning device at the viewpoint is calculated.
(2) A priority queue of nodes to be calculated is initialized; the queue takes the distance from the sight origin to the intersection point with the bounding box as key value, and the root node of the hierarchical bounding box tree is enqueued.
(3) Whether the queue is empty is judged; an empty queue indicates no collision, and the calculation stops. If not empty, the tree node at the queue head is dequeued, and whether the bounding box corresponding to the node intersects the scanning device is calculated.
(4) If they do not intersect, nothing inside the bounding box can collide with the device, and the process returns to (3). If an intersection exists and the node is not a leaf node, the left and right child nodes are enqueued. If it is a leaf node, the relation between the triangle vertices in the node and the device bounding box is calculated, and a vertex inside the device bounding box indicates a collision.
If the viewpoint fails collision detection, it is deleted and the calculation continues; if it passes, the viewpoint is put into the final measurement viewpoint set. For each sampling point v_i ∈ w(P_0) in the current set, the voting function value V(P_0, v_i) of that point for the viewpoint is added to its visibility evaluation, and sampling points whose visibility evaluation reaches 1 or more have their corresponding preselected viewpoints removed. The loop repeats until the preselected viewpoint set is empty, yielding the final scanning viewpoint set; the scanning viewpoints of the pump body model are shown in FIG. 10.
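The selection loop can be summarized as the greedy sketch below. It simplifies the clustering step (the patent first clusters candidates around the highest vote count, then picks the member farthest from the chosen set) into a single ranking key, and every helper callable, including the distance function, is an assumption.

```python
from collections import defaultdict

def select_viewpoints(preselected, collides, visible_samples, vote_value,
                      viewpoint_distance):
    """Greedy reduction of the preselected viewpoint set. Each candidate has
    a .sample_id naming the sampling point it was generated from; collides,
    visible_samples, vote_value and viewpoint_distance are assumed callables."""
    final = []
    visibility = defaultdict(float)          # visibility evaluation, initialized to 0
    while preselected:
        # rank by vote count, breaking ties by distance to the chosen set
        best = max(preselected, key=lambda p: (
            sum(vote_value(p, i) for i in visible_samples(p)),
            min((viewpoint_distance(p, q) for q in final), default=0.0)))
        preselected.remove(best)
        if collides(best):
            continue                         # failed collision detection: delete it
        final.append(best)                   # passed: add to the final measurement set
        for i in visible_samples(best):
            visibility[i] += vote_value(best, i)
        # retire preselected viewpoints whose sampling point is covered (>= 1)
        preselected = [p for p in preselected if visibility[p.sample_id] < 1.0]
    return final
```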
Step S06: robot motion path planning
The motion path is planned by optimizing the motion angles of the robot axes. For the constructed automatic three-dimensional scanning system, a hand-eye calibration algorithm can be used to establish the transformations among the object coordinate system, the robot tool center point (TCP) coordinate system, and the scanning equipment coordinate system. The mechanical arm rigidly carries the scanning equipment, and the turntable carries and rotates the measured object so that the scanning equipment can reach the planned viewpoints. The axis configuration of the robot (i.e., the angle of each axis) at each viewpoint is calculated by robot inverse kinematics, and the robot motion path is planned with the optimization of the axis rotation angles as objective. As shown in FIG. 11, the generated viewpoints are partitioned according to the rotation angle of the opposing turntable, within each partition the connection with the smallest robot axis motion angle is calculated as the motion path, and the partitions are then connected in sequence to generate the robot motion path.
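A minimal sketch of the partition-and-permute ordering follows, assuming the per-viewpoint axis configurations from inverse kinematics are already available; the binning scheme and all names are illustrative.

```python
from itertools import permutations

def plan_path(viewpoints, axis_config, turntable_angle, bin_width_deg):
    """Order viewpoints to minimize total robot axis motion.
    axis_config(v): joint angles at viewpoint v from inverse kinematics;
    turntable_angle(v): turntable rotation planned for v. bin_width_deg is
    assumed small enough that every partition stays below the threshold."""
    def motion(u, v):   # summed per-axis motion between two configurations
        return sum(abs(a - b) for a, b in zip(axis_config(u), axis_config(v)))

    def cost(order):    # total axis motion along one candidate ordering
        return sum(motion(u, v) for u, v in zip(order, order[1:]))

    bins = {}           # partition viewpoints by turntable rotation angle
    for v in viewpoints:
        bins.setdefault(int(turntable_angle(v) // bin_width_deg), []).append(v)

    path = []
    for key in sorted(bins):                           # connect partitions in sequence
        best = min(permutations(bins[key]), key=cost)  # full permutation within a partition
        path.extend(best)
    return path
```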
The pump body model is scanned at each viewpoint, and the point clouds collected at each viewpoint are stitched according to the relative pose relations of the viewpoints to obtain the point cloud model of the measured object. FIG. 12 shows the scanned point cloud result of the actual pump body model.
Finally, it should be noted that: the embodiments described are only a part of the embodiments of the present application, and not all embodiments, and all other embodiments obtained by those skilled in the art without making creative efforts based on the embodiments in the present application belong to the protection scope of the present application.
While certain exemplary embodiments of the present invention have been described above by way of illustration only, it will be apparent to those of ordinary skill in the art that the described embodiments may be modified in various different ways without departing from the spirit and scope of the invention. Accordingly, the drawings and description are illustrative in nature and are not to be construed as limiting the scope of the invention.

Claims (9)

1. An automatic three-dimensional scanning planning method for surface structured light is characterized by comprising the following steps:
step S01: introducing a CAD model of an object to be measured, sampling on the surface of the CAD model to generate a sampling point set,
step S02: geometrically gridding the CAD model based on the sampling points of the set of sampling points to generate a set of gridded patches, building a hierarchical bounding box tree based on the set of gridded patches,
step S03: planning a preselected viewpoint based on each sampling point of the sampling point set to generate a preselected viewpoint set, wherein the sampling points are used as starting points, whether rays with normal vectors of the starting points as directions intersect with an in-scene model is judged based on a hierarchical bounding box tree, if the rays do not intersect with the in-scene model, the rays are judged to be visible, a preselected viewpoint is created according to a standard measurement distance,
step S04: sampling points in the measuring range of each preselected viewpoint of the preselected viewpoint set are counted and voted as the evaluation of the viewpoint quality, the final measuring viewpoint is determined by screening calculation to form a measuring viewpoint set, the scanning attitude at each final measuring viewpoint of the measuring viewpoint set is calculated,
step S05: planning a robot motion path for the measurement viewpoint set based on optimizing the motion angle of each axis of the robot, and carrying out automatic three-dimensional scanning control according to the robot axis configuration and motion path sequence at each viewpoint, wherein equidistant sampling is carried out on the surface of the CAD model: the equidistant sampling calculation is first carried out on the parametric surface, and whether each sampling point is within the boundary is then judged, wherein for a k_u × k_v order NURBS surface p(u, v),

$$p(u,v)=\frac{\sum_{i=0}^{n_u}\sum_{j=0}^{n_v}\omega_{ij}P_{ij}N_{i,k_u}(u)N_{j,k_v}(v)}{\sum_{i=0}^{n_u}\sum_{j=0}^{n_v}\omega_{ij}N_{i,k_u}(u)N_{j,k_v}(v)}$$

where P_ij (i = 0, 1, ..., n_u; j = 0, 1, ..., n_v) are the control vertices of the NURBS surface, N_{i,k_u}(u) and N_{j,k_v}(v) are the B-spline basis functions of order k_u, k_v defined on the knot vectors, p(u, v) represents the surface, NURBS is the non-uniform rational B-spline, and ω_ij is the weight factor of each control vertex; the isoparametric lines in the u and v directions are calculated:

$$p(u)=p(u,v_c),\qquad p(v)=p(u_c,v),$$

and arc length integration is performed along an isoparametric line in one direction:

$$s(u)=\int_{u_s}^{u}\left\|\frac{\partial p(u,v_c)}{\partial u}\right\|\,\mathrm{d}u;$$

the parameter coordinates corresponding to the sampling points are calculated at a preset spacing, and a series of sampling points on the parameter line is calculated from the parameter coordinates; the intersection points of the u-direction isoparametric line p(u) with the polygonal boundary parameter curves are solved, and the v parameter coordinates of the intersection point set are arranged from small to large as {v_{c1}, v_{c2}, ..., v_{cn}}; for a continuous interval, if the numbers of intersection points on the left and right sides of the interval are odd, all sampling points on the interval are within the boundary; the normal vector of each in-boundary sampling point is calculated: the coordinates of the sampling point on the parametric surface are recorded as (u_0, v_0), and the partial derivative vectors of the surface at that point are calculated from the parametric equations,

$$p_u(u_0,v_0)=\frac{\partial p}{\partial u}\Big|_{(u_0,v_0)},\qquad p_v(u_0,v_0)=\frac{\partial p}{\partial v}\Big|_{(u_0,v_0)},$$

and the vector n(u_0, v_0) is calculated by cross-multiplying and normalizing the two partial derivative vectors:

$$n(u_0,v_0)=\frac{p_u(u_0,v_0)\times p_v(u_0,v_0)}{\left\|p_u(u_0,v_0)\times p_v(u_0,v_0)\right\|}.$$
2. The automatic three-dimensional scanning planning method for surface structured light according to claim 1, wherein in step S02 the geometric gridding is triangular gridding and the gridded patches are triangular patches; the sampling of step S01 is performed on the CAD model at a second interval smaller than the equidistant interval to obtain a second sampling point set; the sampling points of the second sampling point set are connected into a mesh sequentially along adjacent sampling curves, and triangular mesh vertices are adjusted onto the sampling points on common edges to eliminate non-manifold mesh configurations, generating the triangular mesh.
3. The method according to claim 2, wherein in step S02 a hierarchical bounding box tree is constructed based on the triangular mesh, wherein:
step S021: for the triangular patch set P to be divided, the barycentric coordinates of each triangle are computed at initialization,
step S022: tree nodes are created, the oriented bounding box of the triangular patch set P is calculated, and the covariance matrix C of the barycentric coordinates of all triangles in the triangular patch set P is calculated:

$$C=\frac{1}{n}\sum_{k=1}^{n}\left(m_k-\mu\right)\left(m_k-\mu\right)^{T},\qquad \mu=\frac{1}{n}\sum_{k=1}^{n}m_k,$$

where m_k is the barycenter of the k-th triangle and n the number of triangles in P; the eigenvector corresponding to the largest eigenvalue is calculated as the principal axis of the bounding box, the remaining eigenvectors serve as the other two axes, and the side lengths of the bounding box are calculated from the projection range of the triangle vertices along each axis; if the number of triangles in the triangular patch set P is less than a set threshold, the calculation stops,
step S023: the projection coordinates of the barycentric coordinates of the triangular patch set P on the principal axis are calculated, and the median of the projection coordinates is recorded as p_mid; the triangles of P whose barycenter projection on the principal axis is less than p_mid and those whose projection is greater than p_mid are divided into two parts, each part returns to step S022 for calculation, and after the loop ends the hierarchical bounding box tree is constructed.
4. The method according to claim 3, wherein in step S03 a preselected viewpoint is planned for each sampling point from its position and neighborhood information, generating a preselected viewpoint set; whether the ray whose direction is the normal vector of the starting point intersects a model in the scene is judged based on the hierarchical bounding box tree; if it does not intersect, the point is judged visible and a preselected viewpoint is created at the standard measurement distance; if it intersects, the direction vector is deflected by a fixed angle each time and the viewpoint is calculated from the direction vector on a cone until no intersection exists; and if occlusion exists in all directions, the sampling point is judged invisible.
5. The method according to claim 4, wherein each line-of-sight occlusion calculation is accelerated by the hierarchical bounding box tree, wherein:
in step S031, a priority queue of nodes to be calculated is initialized, the priority queue is ordered with the distance from the sight origin to the intersection with the bounding box as key value, and the root node of the hierarchical bounding box tree is enqueued,
in step S032, whether the queue is empty is judged, and calculation stops when the queue is empty; if not empty, the tree node at the queue head is dequeued and whether the ray with the sampling point as origin intersects the corresponding bounding box is calculated,
in step S033, if there is no intersection, the ray cannot intersect any element inside the bounding box, and the process returns to step S032; if an intersection exists and the node is not a leaf node, its left and right child nodes are enqueued in order with the distance traveled by the intersecting ray as key value; if it is a leaf node, the intersection of the ray with the triangles in the node is calculated; an intersection indicates occlusion and the calculation stops; if there is no intersection, the process returns to step S032,
and the observation direction of each generated preselected viewpoint is opposite to the normal vector of its corresponding sampling point.
6. The method of claim 5, wherein the common viewing area is calculated according to the relative position relationship between the camera and the projector and the respective viewing angles, and is simplified to be a cone with the optical center of the projector as the vertex, the viewing direction of the projector as the axis, and the viewing angle as the cone angle.
7. The automatic three-dimensional scanning planning method for surface structured light according to claim 6, wherein
the triangular mesh model of the measured object and the three-dimensional coordinates of the sampling points are projected to the viewpoint through projective transformation, a depth buffer is calculated with the mesh model, and the visibility from the viewpoint is judged by comparing the depth of each sampling point with the depth buffer value, wherein, for a viewpoint p, the vote obtained is the sum of the viewpoint evaluation function over each visible sampling point in the view cone:

$$S(p)=\sum_{i\in w(p)}V(p,i)$$

where w(p) is the set of sampling points observable at the viewpoint and V(p, i) is the voting contribution of sampling point i to viewpoint p; the evaluation function is as follows,

$$V(p,i)=\begin{cases}1, & \theta(n_p,n_i)\le\theta_0\\ 1-K_p\left(\theta(n_p,n_i)-\theta_0\right), & \theta_0<\theta(n_p,n_i)\le\theta_1\\ 0, & \theta(n_p,n_i)>\theta_1\end{cases}$$

where θ(n_p, n_i) is the angle between the sampling point normal vector and the line of sight, θ_0 and θ_1 are set thresholds on that angle, and K_p is a slope constant.
8. The method of claim 7, wherein the three-dimensional scanning comprises a three-dimensional scanning planning system,
viewpoint p i And p j Distance function between:
Figure FDA0004025675140000043
wherein v is i Is the three-dimensional coordinate of the viewpoint i, n i Is the normal vector of viewpoint i, d box For the distance between diagonal points of the bounding box of the object to be measured, d m The distance is measured for the standard of the distance,
firstly, initializing a final scanning viewpoint set to be null, initializing all sampling point visibility evaluation to be 0, in each cycle calculation process, after a sampling point votes for a preselected viewpoint, clustering by taking the highest vote number as the center, and selecting a viewpoint P with the largest distance from the sampling point to the final viewpoint set in the set i The distance is calculated by taking the distance function defined above and counting the set w (P) of all the sampling points which contribute to the viewpoint evaluation 0 ) For a sample point v in the set i ∈w(P 0 ) Projecting the three-dimensional coordinates of the three-dimensional coordinates on a plane vertical to a preselected viewpoint, calculating a covariance matrix and characteristic values and characteristic vectors of projection points, using the two characteristic vectors as two coordinate system axes observed by the viewpoint, planning the coordinate system axes to calculate the pose of the measuring equipment, detecting collision to calculate whether collision exists, and performing collision interference calculation each time by using the hierarchical bounding box tree to accelerate calculation, wherein,
step S041, the OBB bounding box of the scanning device at the viewpoint is calculated,
step S042, a priority queue of nodes to be calculated is initialized, keyed by the distance from the sight starting point to the intersection point with the bounding box, and the root node of the hierarchical bounding box tree is enqueued,
step S043, judge whether the queue is empty; if it is empty, no collision is indicated and the calculation stops; if it is not, take the tree node at the head of the queue and calculate whether the bounding box corresponding to the node intersects the scanning device,
step S044, if there is no intersection, the device cannot collide with any element in the bounding box, so return to step S043; if there is an intersection and the node is not a leaf node, enqueue its left and right child nodes; if it is a leaf node, calculate the relation between the triangle vertices in the node and the device bounding box; if a vertex lies inside the device bounding box, a collision is indicated,
if the viewpoint does not pass collision detection, delete it and continue the calculation; if it passes, put the viewpoint into the final measurement viewpoint set and, for each sampling point v_i ∈ w(P_0) in the current set, add the voting function value V(P_0, v_i) of that sampling point for the viewpoint to its visibility evaluation; if the visibility evaluation of a sampling point becomes greater than or equal to 1, remove the preselected viewpoint corresponding to that sampling point; loop in this way until the preselected viewpoint set is empty, yielding the final scanning viewpoint set, where w(p) is the set of sampling points observable from viewpoint p and V(p, i) is the voting contribution of sampling point i to viewpoint p.
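The original distance formula survives only as an image reference; the sketch below implements the reading reconstructed above, which normalizes positional separation by the bounding-box diagonal and converts the normal-vector angle into arc length at the standard measuring distance. The function name and argument conventions are illustrative:

```python
import numpy as np

def viewpoint_distance(v_i, n_i, v_j, n_j, d_box, d_m):
    """Combined position/orientation distance between viewpoints i and j,
    following the reconstruction of the claim-8 formula above."""
    v_i, v_j = np.asarray(v_i, float), np.asarray(v_j, float)
    positional = np.linalg.norm(v_i - v_j)                 # Euclidean separation
    angle = np.arccos(np.clip(np.dot(n_i, n_j), -1.0, 1.0))  # unit normals assumed
    return (positional + d_m * angle) / d_box
```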
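Building on that distance function, a compressed sketch of the selection loop of claim 8; `votes_for`, `visible_points`, `vote_value`, `seed_viewpoint_of`, and `collision_free` are hypothetical stand-ins for the voting, view-cone, and S041-S044 collision routines, and the clustering rule is simplified to "all candidates tied at the highest vote count":

```python
from typing import NamedTuple

class Viewpoint(NamedTuple):
    v: tuple   # three-dimensional position (x, y, z)
    n: tuple   # unit viewing direction

def select_viewpoints(candidates, samples, d_box, d_m):
    """Greedy loop: vote, pick the best collision-free viewpoint, raise the
    visibility evaluation of the points it covers, and prune seeded viewpoints."""
    final_set = []
    visibility = [0.0] * len(samples)           # every evaluation starts at zero
    pool = list(candidates)
    while pool:
        scores = [votes_for(p, samples) for p in pool]
        top = max(scores)
        cluster = [p for p, s in zip(pool, scores) if s == top]
        # within the top-vote cluster, prefer the viewpoint farthest from the
        # final set, measured with viewpoint_distance sketched above
        p0 = max(cluster, key=lambda p: min(
            (viewpoint_distance(p.v, p.n, q.v, q.n, d_box, d_m) for q in final_set),
            default=0.0))
        pool.remove(p0)
        if not collision_free(p0):              # steps S041-S044 on the OBB tree
            continue                            # failed collision check: drop it
        final_set.append(p0)
        for i in visible_points(p0, samples):   # the contributing set w(P0)
            visibility[i] += vote_value(p0, i)  # accumulate V(P0, v_i)
            if visibility[i] >= 1.0:            # sample observed well enough:
                seeded = seed_viewpoint_of(i)   # retire the viewpoint it seeded
                if seeded in pool:
                    pool.remove(seeded)
    return final_set
```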
9. The method for planning three-dimensional scanning of surface structured light according to claim 1, wherein
the motion path is planned by optimizing the motion angles of all robot axes: the axis configuration of the robot at each viewpoint is calculated by inverse kinematics, the generated viewpoints are partitioned according to their rotation angles relative to the turntable until the number of viewpoints in each region is below a threshold, the full permutation of viewpoint connection orders is calculated within each region, the connection with the minimum total axis motion angle is taken as the motion path, the partitions are connected in sequence to generate the robot motion path, and automatic three-dimensional scanning is performed according to the robot axis configuration and the motion path sequence at each viewpoint.
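Within one turntable partition the claim's exhaustive search stays tractable because the viewpoint count is capped by the threshold; a sketch, assuming each viewpoint's axis configuration is already available as a tuple of joint angles from inverse kinematics:

```python
from itertools import permutations

def axis_cost(cfg_a, cfg_b):
    """Total joint rotation between two robot axis configurations."""
    return sum(abs(a - b) for a, b in zip(cfg_a, cfg_b))

def best_order(viewpoint_configs):
    """Brute-force the viewpoint ordering with minimum summed axis motion;
    feasible only because each partition holds few viewpoints."""
    best, best_cost = None, float("inf")
    for order in permutations(viewpoint_configs):
        cost = sum(axis_cost(order[k], order[k + 1])
                   for k in range(len(order) - 1))
        if cost < best_cost:
            best, best_cost = order, cost
    return list(best)
```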
CN202110408473.5A 2021-04-15 2021-04-15 Automatic three-dimensional scanning planning method for surface structured light Active CN113155054B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110408473.5A CN113155054B (en) 2021-04-15 2021-04-15 Automatic three-dimensional scanning planning method for surface structured light

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110408473.5A CN113155054B (en) 2021-04-15 2021-04-15 Automatic three-dimensional scanning planning method for surface structured light

Publications (2)

Publication Number Publication Date
CN113155054A CN113155054A (en) 2021-07-23
CN113155054B true CN113155054B (en) 2023-04-11

Family

ID=76868011

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110408473.5A Active CN113155054B (en) 2021-04-15 2021-04-15 Automatic three-dimensional scanning planning method for surface structured light

Country Status (1)

Country Link
CN (1) CN113155054B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113778130B (en) * 2021-09-24 2022-04-15 南京航空航天大学 Unmanned aerial vehicle coverage path planning method based on three-dimensional model
CN115409950B (en) * 2022-10-09 2024-02-06 卡本(深圳)医疗器械有限公司 Optimization method for surface drawing triangular mesh
CN116227120B (en) * 2022-12-01 2023-09-19 中国航空综合技术研究所 Sampling point and reverse surface based aviation jet pump casting pump body wall thickness optimization method
CN117195592B (en) * 2023-11-06 2024-01-26 龙门实验室 Interference-free part selection and matching method for cycloidal gear reducer rotating arm bearing

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103092577A (en) * 2011-10-28 2013-05-08 鸿富锦精密工业(深圳)有限公司 System and method for generating three-dimensional image measuring program
EP2852932A1 (en) * 2012-05-22 2015-04-01 Telefónica, S.A. A method and a system for generating a realistic 3d reconstruction model for an object or being
WO2016081499A1 (en) * 2014-11-17 2016-05-26 Markforged, Inc. Composite filament 3d printing using complementary reinforcement formations
US10543044B2 (en) * 2016-09-27 2020-01-28 Covidien Lp Systems and methods for detecting pleural invasion for surgical and interventional planning
US10204422B2 (en) * 2017-01-23 2019-02-12 Intel Corporation Generating three dimensional models using single two dimensional images
CN107953334A (en) * 2017-12-25 2018-04-24 深圳禾思众成科技有限公司 A kind of industrial machinery arm Collision Free Path Planning based on A star algorithms
CN108955520B (en) * 2018-04-17 2020-09-15 武汉工程大学 Structured light three-dimensional scanning accessibility analysis method and analysis system
CN111652855B (en) * 2020-05-19 2022-05-06 西安交通大学 Point cloud simplification method based on survival probability
CN112476434B (en) * 2020-11-24 2021-12-28 新拓三维技术(深圳)有限公司 Visual 3D pick-and-place method and system based on cooperative robot
CN112577447B (en) * 2020-12-07 2022-03-22 新拓三维技术(深圳)有限公司 Three-dimensional full-automatic scanning system and method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109376900A (en) * 2018-09-07 2019-02-22 北京航空航天大学青岛研究院 Unmanned plane orbit generation method based on cloud
CN111060006A (en) * 2019-04-15 2020-04-24 深圳市易尚展示股份有限公司 Viewpoint planning method based on three-dimensional model

Also Published As

Publication number Publication date
CN113155054A (en) 2021-07-23

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant