CN114638795A - Multi-structure light measurement unit online measurement method and system - Google Patents


Info

Publication number
CN114638795A
Authority
CN
China
Prior art keywords: scanning, viewpoint, measured, light measuring, point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210220284.XA
Other languages
Chinese (zh)
Inventor
唐正宗
张一弛
任茂栋
杨鹏斌
Current Assignee
Xtop 3d Technology Shenzhen Co ltd
Original Assignee
Xtop 3d Technology Shenzhen Co ltd
Priority date
Filing date
Publication date
Application filed by Xtop 3d Technology Shenzhen Co ltd filed Critical Xtop 3d Technology Shenzhen Co ltd
Priority to CN202210220284.XA
Publication of CN114638795A

Classifications

    • G06T 7/0004: Industrial image inspection
    • G01B 11/24: Measuring arrangements using optical techniques for measuring contours or curvatures
    • G06T 17/20: Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T 7/30: Determination of transform parameters for the alignment of images (image registration)
    • G06T 7/521: Depth or shape recovery from laser ranging or from the projection of structured light
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters (camera calibration)
    • G06T 2207/30108: Industrial image inspection
    • G06T 2207/30164: Workpiece; machine component
    • G06T 2210/12: Bounding box
    • G06T 2210/21: Collision detection, intersection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Optics & Photonics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides an online measurement method and system for multiple structured light measuring units. The method comprises the following steps: calibrating the multi-structure light measuring unit online measuring system to obtain a calibration result, wherein the system comprises a plurality of structured light measuring units and a rotary object stage; the rotary object stage bears the object to be measured and adjusts its position; aligning the CAD model of the object to be measured with the actually measured three-dimensional model; based on the aligned CAD model, obtaining an optimal scanning path with the goal of minimizing the system cost function; performing teaching scanning detection on the object based on the scanning path, and saving the scanning detection parameters of the teaching process as a scanning detection template; and using the scanning detection template to detect objects in batches. Complete model data can thus be obtained quickly and conveniently, achieving fast and efficient three-dimensional measurement.

Description

Multi-structure light measurement unit online measurement method and system
Technical Field
The invention relates to the technical field of industrial detection, in particular to an on-line measuring method and system of a multi-structure light measuring unit.
Background
With the development of the market economy and rising labor costs, industrial measurement increasingly depends on equipment rather than manual work. Ordinary image sensors acquire only two-dimensional information about scene objects and cannot meet the requirements of industrial development: analyzing or measuring a workpiece demands accurate knowledge of its three-dimensional geometry. Because they obtain three-dimensional information conveniently, non-contact measurement techniques represented by structured light play an increasingly important role in industrial detection and related fields.
Taking a surface structured light measurement system as an example: the grating image modulated by the surface of the measured object is processed by a computing unit to reconstruct the surface features of the object, and, combined with a suitable detection system, surface measurement (such as overall dimensional-deviation detection) can be realized. Compared with traditional contact measurement, surface structured light acquires surface data quickly; however, because of occlusion, the measurement equipment or the measured object must often be moved many times to complete a scan of the workpiece, and the data from the multiple scans must be unified into one coordinate system via marker points or surface features to obtain complete data. This complicated process lowers measurement efficiency and makes it difficult to meet batch-inspection requirements in actual industrial scenarios.
Whether handheld, fixed, or mounted on a robot arm, current structured light measuring units cannot avoid the above problems in actual measurement. A fast and convenient measuring system is therefore an urgent need in the field of industrial detection.
The above background disclosure is only for the purpose of assisting understanding of the concept and technical solution of the present invention and does not necessarily belong to the prior art of the present patent application, and should not be used for evaluating the novelty and inventive step of the present application in the case that there is no clear evidence that the above content is disclosed at the filing date of the present patent application.
Disclosure of Invention
The invention provides a multi-structure light measuring unit online measuring method and system for solving the existing problems.
In order to solve the above problems, the technical solution adopted by the present invention is as follows:
A multi-structure light measuring unit online measuring method comprises the following steps: S1: calibrating a multi-structure light measuring unit online measuring system to obtain a calibration result, wherein the system comprises a plurality of structured light measuring units and a rotary object stage; the rotary object stage is used for bearing an object to be measured and adjusting the position of the object to be measured; S2: aligning the CAD model of the object to be measured with the actually measured three-dimensional model of the object to be measured; S3: based on the aligned CAD model of the object to be measured, obtaining an optimal scanning path with the goal of minimizing the cost function of the online measuring system, wherein the cost function comprises the number of rotations of the rotary object stage, the distance between a viewpoint on the CAD model and the structured light measuring unit, and the included angle between the direction of the viewpoint on the CAD model and the direction of the structured light measuring unit; S4: performing teaching scanning detection on the object to be measured based on the optimal scanning path, and saving the scanning detection parameters of the teaching process as a scanning detection template; S5: performing batch detection of objects to be measured with the online measuring system based on the scanning detection template.
Preferably, aligning the CAD model of the object to be measured with the actually measured three-dimensional model comprises: S21: rotating the object to be measured through a full circle at a preset angular step while scanning, and aligning all scan point clouds using the calibration result and the preset angle to obtain the actually measured three-dimensional model; S22: aligning the CAD model with the actually measured three-dimensional model to obtain an alignment relation, and computing the spatial rotation matrix R and translation matrix T from this relation; S23: converting the CAD model into the coordinate system of the actually measured three-dimensional model based on the spatial rotation matrix R and translation matrix T.
Preferably, obtaining the optimal scanning path comprises the following steps: S31: sampling each surface of the CAD model to obtain a series of sampling points; S32: acquiring the sampling points on each surface of the CAD model and their normal directions, and generating viewpoints from the positions and normal directions of the sampling points together with the standard scanning distance of the structured light measuring unit, wherein a viewpoint is a scanning point location and the scanning direction is the direction of the line connecting the viewpoint and its sampling point; S33: simplifying the viewpoints to obtain effective viewpoints; S34: assigning the effective viewpoints to the structured light measuring units, enumerating all arrangements of this correspondence, calculating the cost function under each arrangement, and selecting the arrangement with minimum cost as the actual correspondence, thereby generating a series of poses of the rotary object stage and obtaining the optimal scanning path.
Preferably, sampling each face of the surface of the CAD model to obtain a series of sample points comprises: acquiring the boundary of each patch in a parameter domain, taking a straight line in one direction of the parameter domain to intersect with that domain, and taking the central point of the intersection span as a sampling point.
Preferably, sampling each face of the surface of the CAD model to obtain a series of sample points further comprises: dividing each patch of the surface of the CAD model according to the scanning breadth of the structured light measuring unit, with the number of regions N for a patch given by:

N = ⌈k · S / S0⌉

wherein S is the area of the patch, S0 is the scanning breadth area of the structured light measuring unit, and k is a proportional coefficient;
for a patch with an area smaller than a preset area, the central point of the parameter domain is taken as the sampling point;
for patches with areas greater than or equal to the preset area, the parameter domain is divided equally into several regions and the central point of each region is taken as a sampling point;
for a patch with curvature greater than a preset curvature, at least one additional point is inserted along the direction of high curvature.
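The subdivision rule above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: the form N = ⌈k·S/S0⌉ and the near-square grid split of the parameter domain are assumptions.

```python
import math

def num_regions(patch_area, scan_field_area, k=1.0):
    """Number of regions a patch is split into, proportional to its area
    relative to the scanner's field of view (assumed form of the rule)."""
    return max(1, math.ceil(k * patch_area / scan_field_area))

def region_centers(u_range, v_range, n):
    """Divide the (u, v) parameter domain into a near-square grid sized
    from n and return each cell center as a candidate sampling point."""
    side = max(1, math.isqrt(n))  # near-square grid
    u0, u1 = u_range
    v0, v1 = v_range
    du, dv = (u1 - u0) / side, (v1 - v0) / side
    return [(u0 + (i + 0.5) * du, v0 + (j + 0.5) * dv)
            for i in range(side) for j in range(side)]
```

A patch smaller than the scan field yields a single center point, matching the small-patch case above.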
Preferably, after obtaining the sampling points on each surface of the CAD model and their normal directions, the method further includes orienting the normals consistently, specifically: discretizing the CAD model into a point cloud by model sampling to obtain a point-cloud model and the sampling points; triangulating the point-cloud model using the Delaunay triangulation principle and calculating the normal direction of each patch; calculating the normal direction of the triangle nearest each sampling point and correcting it through the one-ring neighborhood triangles of the sampling point: the included angles between the average direction of the neighborhood triangles and the sampling point's original normals n and -n are compared, and the direction whose included angle is smaller than the preset angle is the correct one.
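The neighborhood-based normal correction can be illustrated with a minimal numpy sketch. The 90-degree default threshold and the plain averaging of the one-ring normals are assumptions, not details fixed by the patent.

```python
import numpy as np

def orient_normal(n, neighbor_normals, max_angle_deg=90.0):
    """Pick n or -n, whichever lies within max_angle_deg of the average
    direction of the one-ring neighborhood triangles (assumed rule)."""
    avg = np.mean(np.asarray(neighbor_normals, dtype=float), axis=0)
    avg /= np.linalg.norm(avg)
    n = np.asarray(n, dtype=float)
    n /= np.linalg.norm(n)
    cos_lim = np.cos(np.radians(max_angle_deg))
    # keep n if its angle to the neighborhood average is small enough,
    # otherwise flip it
    return n if np.dot(n, avg) >= cos_lim else -n
```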
Preferably, the method also includes collision detection and occlusion detection. Collision detection is based on oriented bounding boxes: the oriented bounding box of the structured light measuring unit is calculated from the viewpoint pose, the bounding boxes of the measured object and of the scene are calculated, and it is then judged whether the bounding boxes collide; if a collision occurs, the viewpoint is deleted. Occlusion detection comprises occlusion of the viewpoint and occlusion of the line of sight; occlusion of the viewpoint is occlusion by the CAD model itself in the scanning direction. For rays starting from the viewpoint and from the two camera optical centers of the structured light measuring unit, directed along the line to the sampling point corresponding to the viewpoint, it is judged whether the first intersection with the CAD model is the sampling point: if so, no occlusion occurs; if not, occlusion occurs. Adjusting the patch containing a viewpoint found by occlusion detection to be occluded specifically includes: taking the patch containing the occluded viewpoint as the target patch, taking the sampling point of the target patch as a starting point, rotating by an angle a around the scanning direction, taking a series of points on the rotation arc at intervals of angle b as candidate viewpoints, and judging one by one whether viewpoint occlusion or line-of-sight occlusion still occurs, until a point without occlusion is found; that point is the adjusted viewpoint.
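The line-of-sight test above amounts to casting a ray from each camera optical center toward the sampling point and checking whether the mesh is hit before the point is reached. A hedged sketch using the standard Moller-Trumbore ray/triangle test (the patent does not specify which intersection algorithm is used):

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle test: returns the ray parameter t of
    the hit point, or None if the ray misses the triangle."""
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:                      # ray parallel to triangle plane
        return None
    f = 1.0 / a
    s = origin - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = f * np.dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * np.dot(e2, q)
    return t if t > eps else None

def is_occluded(cam_center, sample_pt, triangles, tol=1e-6):
    """True if any triangle blocks the sight line from a camera optical
    center to the sample point."""
    cam = np.asarray(cam_center, dtype=float)
    pt = np.asarray(sample_pt, dtype=float)
    dist = np.linalg.norm(pt - cam)
    d = (pt - cam) / dist
    for tri in triangles:
        v0, v1, v2 = (np.asarray(v, dtype=float) for v in tri)
        t = ray_triangle(cam, d, v0, v1, v2)
        if t is not None and t < dist - tol:  # hit strictly before the point
            return True
    return False
```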
Preferably, simplifying the viewpoints to obtain effective viewpoints comprises: traversing the neighborhood points of each sampling point and, according to a first preset distance r, deleting sampling points closer than r to the current sampling point together with their corresponding viewpoints; searching the neighborhood viewpoints of each viewpoint according to a second preset distance D, calculating the area on the CAD model scanned by each viewpoint in the neighborhood, deleting viewpoints whose scanning areas are the same or differ by less than a preset threshold, recording the scanning area corresponding to each viewpoint, and continuing to traverse the next viewpoint and its neighborhood, stopping once every patch of the CAD model has been scanned, to obtain the effective viewpoints.
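The distance-based thinning with radius r can be sketched as a greedy filter. This brute-force version is illustrative only; a KD-tree would be used at scale, and the greedy keep-first policy is an assumption.

```python
import numpy as np

def thin_points(points, r):
    """Greedily keep points so that no two kept points are closer than r;
    a deleted sampling point also discards its associated viewpoint."""
    kept = []
    for p in np.asarray(points, dtype=float):
        if all(np.linalg.norm(p - q) >= r for q in kept):
            kept.append(p)
    return np.array(kept)
```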
Preferably, the teaching scanning detection of the object to be measured based on the optimal scanning path includes: S41: adjusting the pose of the object to be measured according to the optimal scanning path, and scanning the object with the plurality of structured light measuring units in sequence to obtain point clouds; S42: stitching the scanned point clouds together using the calibration result and the optimal scanning path to obtain a complete three-dimensional model of the object; S43: after the three-dimensional model is meshed, precisely aligning it with the CAD model and obtaining corresponding point-pair distances from adjacent point pairs, wherein the set of point-pair distances is the object size-deviation detection result; S44: saving the scanning detection parameters of the teaching process as the scanning detection template.
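The point-pair distance set of step S43 can be approximated by nearest-neighbor distances between the scanned and reference point clouds. A small numpy sketch, brute-force and illustrative only (the patent pairs adjacent points after precise alignment):

```python
import numpy as np

def pointwise_deviation(measured, reference):
    """For each measured point, the distance to its nearest reference
    point; the resulting set approximates the size-deviation result."""
    m = np.asarray(measured, dtype=float)[:, None, :]     # (M, 1, 3)
    ref = np.asarray(reference, dtype=float)[None, :, :]  # (1, N, 3)
    d = np.linalg.norm(m - ref, axis=2)                   # (M, N) pair distances
    return d.min(axis=1)
```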
The invention also provides a multi-structure light measuring unit online measuring system, which comprises: a plurality of structured light measuring units, each comprising two high-resolution industrial cameras and a blue-light grating generator, for scanning an object to be measured to obtain a three-dimensional point cloud; a frame for adjusting the position and posture of the structured light measuring units; a rotary object stage for bearing the object to be measured and adjusting its position; and a control unit for use in a method as described in any one of the preceding aspects.
The invention has the beneficial effects that: the method comprises the steps of aligning a CAD model of an object to be measured and an actually measured three-dimensional model of the object to be measured, obtaining an optimal scanning path by taking the minimum system cost function as a target based on the aligned CAD model, teaching, scanning and detecting the object to be measured by the structured light measuring unit based on the optimal scanning path, storing scanning and detecting parameters to a scanning and detecting template, and calling the scanning and detecting template by the system to perform batch detection on the object to be measured; the problem that measuring equipment or a measured object needs to be frequently moved when an object is scanned is avoided, complete model data can be quickly and conveniently obtained, and quick and efficient three-dimensional measurement is achieved.
Furthermore, the scanning order and timing of the plurality of measuring units are reasonably planned and controlled, which improves overall scanning efficiency and avoids mutual interference of the structured light patterns between the structured light measuring units.
Furthermore, teaching scanning detection of the object to be measured with the multi-structure light measuring unit online measuring system provides an automatic detection capability, and the measuring system can be flexibly adjusted to the actual object under test, achieving automatic rapid scanning.
Drawings
Fig. 1 is a schematic diagram of an online measurement method of a multi-structure light measurement unit in an embodiment of the present invention.
Fig. 2(a) and 2(b) are schematic diagrams of an on-line measuring system of a multi-structure light measuring unit in the embodiment of the invention.
Fig. 3 is a schematic diagram of an arrangement of calibration plates according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a method for aligning a CAD model of an object to be measured and an actually measured three-dimensional model of the object to be measured in the embodiment of the present invention.
Fig. 5 is a schematic diagram of a path planning process according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of a path planning method according to an embodiment of the present invention.
Fig. 7(a) is a schematic diagram of a three-dimensional CAD patch during model sampling according to an embodiment of the present invention.
Fig. 7(b) is a parameter domain diagram of a CAD patch at the time of model sampling in the embodiment of the present invention.
Fig. 8(a) is a schematic diagram of adaptive sampling for a patch with a larger area according to an embodiment of the present invention.
Fig. 8(b) is a schematic diagram of large-curvature patch sampling adaptation in the embodiment of the present invention.
Fig. 9 is a schematic diagram of viewpoint generation in the embodiment of the present invention.
FIG. 10 is a schematic illustration of a normal orientation in an embodiment of the present invention.
Fig. 11 is a schematic diagram of collision detection in an embodiment of the present invention.
FIG. 12(a) is a schematic view of view occlusion during occlusion detection in the embodiment of the present invention.
FIG. 12(b) is a schematic view of line-of-sight occlusion during occlusion detection in the embodiment of the present invention.
FIG. 13(a) is a schematic diagram illustrating a non-occlusion situation during occlusion detection according to an embodiment of the present invention.
FIG. 13(b) is a schematic diagram of an occlusion situation during occlusion detection according to an embodiment of the present invention.
Fig. 14(a) is a schematic diagram of the occlusion adjustment in the embodiment of the present invention.
FIG. 14(b) is a schematic diagram of adjusting occlusion on a curved surface according to an embodiment of the present invention.
FIG. 15 is a schematic diagram of an embodiment of the invention.
FIG. 16 is a schematic diagram of an embodiment of the invention.
FIG. 17 is a schematic diagram of a teaching scanning detection method for a plurality of structured light measurement units according to an embodiment of the present invention.
FIG. 18 is a schematic diagram of a batch inspection process according to an embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects to be solved by the embodiments of the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and the embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It will be understood that when an element is referred to as being "secured to" or "disposed on" another element, it can be directly or indirectly on the other element. When an element is referred to as being "connected to" another element, it can be directly or indirectly connected to the other element. In addition, the connection may serve either fixation or electrical communication.
It is to be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like are used in an orientation or positional relationship indicated in the drawings for convenience in describing the embodiments of the present invention and to simplify the description, and are not intended to indicate or imply that the referenced device or element must have a particular orientation, be constructed in a particular orientation, and be in any way limiting of the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present invention, "a plurality" means two or more unless specifically limited otherwise.
As shown in fig. 1, the present invention provides an on-line measurement method for a multi-structure light measurement unit, comprising the following steps:
S1: calibrating a multi-structure light measuring unit online measuring system to obtain a calibration result, wherein the system comprises a plurality of structured light measuring units and a rotary object stage; the rotary object stage is used for bearing an object to be measured and adjusting the position of the object to be measured;
s2: aligning the CAD model of the object to be measured with the actually measured three-dimensional model of the object to be measured;
S3: based on the aligned CAD model of the object to be measured, obtaining an optimal scanning path with the goal of minimizing the cost function of the online measuring system, wherein the cost function comprises the number of rotations of the rotary object stage, the distance between a viewpoint on the CAD model and the structured light measuring unit, and the included angle between the direction of the viewpoint on the CAD model and the direction of the structured light measuring unit;
s4: teaching scanning detection is carried out on the object to be detected based on the optimal scanning path, and scanning detection parameters in the teaching scanning detection process are stored as a scanning detection template;
s5: and carrying out batch detection on the object to be detected by adopting the multi-structure light measuring unit online measuring system based on the scanning detection template.
The method comprises the steps of aligning a CAD model of an object to be detected and an actually measured three-dimensional model of the object to be detected, obtaining an optimal scanning path by taking the minimum system cost function as a target based on the aligned CAD model, performing teaching scanning detection on the object to be detected based on the optimal scanning path by a structured light measurement unit, storing scanning detection parameters to a scanning detection template, and calling the scanning detection template by a system to perform batch detection on the object to be detected; the problem that measuring equipment or a measured object needs to be frequently moved when an object is scanned is avoided, complete model data can be quickly and conveniently obtained, and quick and efficient three-dimensional measurement is achieved.
As shown in fig. 2(a) and 2(b), the multi-structure light measuring unit online measuring system includes: a plurality of structured light measuring units, each comprising two high-resolution industrial cameras and a blue-light grating generator, for scanning the object to be measured to obtain a three-dimensional point cloud;
the frame is used for adjusting the position and the posture of the structured light measuring unit;
the rotary object stage is used for bearing an object to be measured and adjusting the position of the object to be measured;
a control unit for executing the method described above.
Under the condition that the hardware and the layout of the system are determined, the on-line measuring system of the multi-structure light measuring unit comprises the following working steps:
(1) calibrating the on-line measuring system of the multi-structure light measuring unit to obtain a calibration result
The calibration comprises the following steps: calibrating the internal and external parameters of a single structured light measuring unit, calibrating the external parameters between structured light measuring units, and calibrating the external parameters of the reference structured light measuring unit relative to the rotary object stage;
a. internal and external parameter calibration of single structured light measuring unit
Each structured light measuring unit is calibrated, the left camera is used as a reference camera, and internal parameters (camera focal lengths fx and fy, principal point deviations u0 and v0, distortion parameters k1, k2, k3, p1 and p2) and external parameters (a rotation matrix R and a translation matrix T) of the two cameras are calculated based on a photogrammetry principle.
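For reference, projecting a point with the listed intrinsics (fx, fy, u0, v0, k1, k2, k3, p1, p2) follows the standard pinhole model with radial and tangential (Brown) distortion. A plain-Python sketch, illustrative and not the patent's calibration code:

```python
def project(point_cam, fx, fy, u0, v0, k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    """Project a 3-D point given in the camera frame to pixel coordinates
    using the pinhole model with radial (k1, k2, k3) and tangential
    (p1, p2) distortion."""
    X, Y, Z = point_cam
    x, y = X / Z, Y / Z                          # normalized image coordinates
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return fx * xd + u0, fy * yd + v0            # pixel coordinates
```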
Fig. 3 shows an arrangement of calibration plates in an embodiment of the present invention. Binocular structured-light point-cloud reconstruction for a single structured light measuring unit is completed using its internal and external calibration results.
b. External parameter calibration between structured light measurement units
A reference camera of one structured light measuring unit is designated as the global reference camera, and calibration-plate images are captured (the reference cameras of two adjacent structured light measuring units share a common viewing angle and can both reconstruct the marker points on the calibration plate); the marker points reconstructed by the two structured light measuring units are aligned to calculate the external parameters (rotation matrix R and translation matrix T) between the two reference cameras. Using the external parameters (y = R·x + T), the cameras of neighboring structured light measuring units can be transformed into the coordinate system of the global reference camera.
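Applying and chaining the extrinsic relation y = R·x + T can be sketched in numpy; the function names are illustrative, not from the patent:

```python
import numpy as np

def to_global(points, R, T):
    """Map points x from a unit's local frame into the global reference
    camera frame via y = R @ x + T."""
    return (np.asarray(R, float) @ np.asarray(points, float).T).T + np.asarray(T, float)

def compose(R2, T2, R1, T1):
    """Chain two transforms: applying (R1, T1) and then (R2, T2) equals
    the single transform (R2 @ R1, R2 @ T1 + T2)."""
    return np.asarray(R2) @ np.asarray(R1), np.asarray(R2) @ np.asarray(T1) + np.asarray(T2)
```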
c. External parameter calibration of reference structured light measuring unit relative to rotary object stage
The external parameters (rotation matrix R and translation matrix T) between the measuring unit containing the global reference camera specified in step b and the turntable are calibrated using an eye-to-hand method. Using these external parameters (y = R·x + T), the point clouds captured before and after the rotary stage adjusts its pose can be stitched together.
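Stitching scans across turntable poses amounts to undoing the stage rotation about the calibrated axis. In this sketch the axis is assumed to lie along z through axis_point, a simplification of the calibrated turntable extrinsics:

```python
import numpy as np

def rotz(theta_rad):
    """Rotation matrix about the z axis."""
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def align_scan(points, angle_deg, axis_point=(0.0, 0.0, 0.0)):
    """Map a scan captured after the stage rotated by angle_deg back into
    the reference pose by rotating -angle_deg about the assumed axis."""
    R = rotz(np.radians(-angle_deg))
    p = np.asarray(points, float) - np.asarray(axis_point, float)
    return p @ R.T + np.asarray(axis_point, float)
```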
2. As shown in fig. 4, aligning the CAD model of the object to be measured and the actually measured three-dimensional model of the object to be measured includes:
S21: the object to be measured is rotated through a full circle at a preset angular step and scanned at each step, and all scan point clouds are aligned using the calibration result and the preset angle to obtain the actually measured three-dimensional model of the object to be measured;
s22: aligning the CAD model of the object to be measured and the actually measured three-dimensional model to obtain an alignment relation, and calculating by using the alignment relation to obtain a spatial rotation matrix R and a translation matrix T;
s23: and converting the CAD model into a coordinate system of the actually measured three-dimensional model based on the spatial rotation matrix R and the translation matrix T.
3. Performing path planning on the object to be measured by adopting a multi-structure light measurement unit online measurement system
As shown in fig. 5, the path planning includes the following main steps: model sampling, viewpoint calculation, viewpoint simplification and path generation.
As shown in fig. 6, obtaining the optimal scan path includes the following steps:
s31: sampling each surface of the CAD model to obtain a series of sampling points;
s32: acquiring sampling points on each surface of the CAD model and normal directions of the sampling points, and generating a viewpoint according to the positions and normal directions of the sampling points and the standard scanning distance of the structured light measuring unit, wherein the viewpoint is a scanning point location, and the scanning direction is the direction of a connecting line between the viewpoint and the sampling points;
s33: simplifying the viewpoint to obtain an effective viewpoint;
s34: and corresponding the effective viewpoint to the structured light measuring unit, carrying out full arrangement on the corresponding relation, then calculating the cost function under each arrangement, and selecting a group of arrangements with the minimum cost function as an actual corresponding relation to generate a series of poses of the rotary object stage to obtain the optimal scanning path.
a. Model sampling
Model sampling samples each face of the CAD model surface to obtain a series of sampling points. Because the sampling points are used to generate the viewpoint positions, model sampling is crucial. During sampling, the boundary of each patch in the parameter domain is obtained first; a straight line along one parameter direction (u or v) is then intersected with the parameter domain, and the center point of the resulting intersection is taken as the sampling point, which guarantees that the sampling point lies on the CAD patch.
In one embodiment of the invention, sampling each face of the surface of the CAD model to obtain a series of sample points comprises:
and acquiring the boundary of each patch in a parameter domain, then taking a straight line in one direction in the parameter domain to intersect with the parameter domain, and taking the central point in the intersection point as a sampling point.
As shown in fig. 7(a), the three-dimensional surface is a sampled three-dimensional surface in the embodiment of the present invention, the corresponding parameter domain is as shown in fig. 7(b), and the three-dimensional point corresponding to the sampling point P (u0, v0) in the parameter domain is P (x (u0, v0), y (u0, v0), z (u0, v 0)).
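The centre-of-parameter-domain rule can be illustrated concretely; the toy saddle patch P(u, v) = (u, v, u·v) below is an assumption standing in for a real CAD-kernel surface evaluation:

```python
def surface_point(u, v):
    """Toy parametric patch P(u, v) = (x, y, z); a real system queries the CAD surface."""
    return (u, v, u * v)

def sample_patch(u_min, u_max, v_min, v_max):
    """Take the centre of the parameter domain; the point is guaranteed to lie on the patch."""
    u0, v0 = 0.5 * (u_min + u_max), 0.5 * (v_min + v_max)
    return surface_point(u0, v0)

print(sample_patch(0.0, 1.0, 0.0, 1.0))  # (0.5, 0.5, 0.25)
```

For trimmed patches the centre of the intersection segment with the trimmed boundary is used instead of the raw domain centre, as described above.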
In order to scan normally according to the sizes and curvatures of different patches, optimization needs to be performed according to different characteristics during sampling. Sampling each face of the surface of the CAD model to obtain a series of sample points further comprises:
dividing each patch of the surface of the CAD model according to the scanning breadth of the structured light measuring unit, wherein the specific formula is as follows:
face = small patch, if S < k·S0;  face = large patch, if S ≥ k·S0
wherein face denotes the patch, S is the area of the patch, S0 is the scanning field area of the structured light measuring unit, and k is a proportionality coefficient; in one specific embodiment, k is generally 0.6 to 0.8;
for a patch with an area smaller than a preset area, a central point is taken from the parameter domain;
as shown in fig. 8(a), for a patch with an area greater than or equal to the preset area, equally dividing the parameter domain to obtain a plurality of regions, and then taking a center point of each region as a sampling point;
for a patch with curvature greater than the preset curvature, at least one additional point is inserted in the direction of greater curvature, as shown in fig. 8(b), to ensure that the details of the patch can be scanned.
b. Viewpoint calculation
After sampling is finished, a sampling point on each surface of the CAD model and the normal direction of the sampling point can be obtained, then a viewpoint is generated according to the position and the normal direction of the sampling point and the standard scanning distance of the structured light measuring unit, the viewpoint is a scanning point position, and the scanning direction is the direction of a connecting line between the viewpoint and the sampling point. The viewpoint position pos and the scanning direction dir are parameters to be saved.
As shown in fig. 9, for a point P on the surface S with normal direction n, the scanning viewpoint Vp is obtained by extending from P along n by the standard scanning distance D of the structured light measuring unit. However, the normal at a sampling point does not always point outward, so the normals must be consistently oriented. After a viewpoint is generated, collision detection and occlusion detection are performed, and the viewpoint is adjusted into an optimal scannable state.
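The rule Vp = P + D·n, with the scanning direction pointing from the viewpoint back to the sample, can be sketched as:

```python
import math

def make_viewpoint(p, n, D):
    """Viewpoint Vp = P + D*n (n normalised); scan direction dir points from Vp to P."""
    length = math.sqrt(sum(c * c for c in n))
    n = tuple(c / length for c in n)
    pos = tuple(pi + D * ni for pi, ni in zip(p, n))
    direction = tuple(-ni for ni in n)
    return pos, direction
```

The pair (pos, direction) is exactly the (pos, dir) parameter set that the text says must be saved for each viewpoint.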
I. Accurate orientation of scan direction
After obtaining the sampling points on each surface of the CAD model and the normal directions of the sampling points, the method further includes orienting the normal directions in a consistent manner, and specifically includes:
dispersing the CAD model into a point cloud by means of model sampling to obtain a point cloud model and the sampling points; at this stage, the sampling interval on each face is set small so as to obtain as many sampling points as possible;
triangularization is carried out on the point cloud model by utilizing the principle of Delou triangulation, and the normal direction of each surface patch is calculated;
as shown in fig. 10, the normal direction of the triangle nearest to the sampling point is calculated, and this normal is corrected using the one-ring neighborhood triangles of the sampling point:
the angles between the average direction of the neighborhood triangles and the original normal n of the sampling point and its opposite -n are compared; the direction whose angle is smaller than the preset angle is the correct direction.
II. Collision detection
The collision detection of the viewpoint needs to detect the collision of the structured light measuring unit with the CAD model and the collision of the structured light measuring unit with the scanned scene. The principle of collision detection employs collision detection based on a directed bounding box.
As shown in fig. 11, the directed bounding box 1 of the structured light measuring unit is first calculated from the viewpoint pose; the bounding box 2 of the measured object and the bounding box 3 of the scene are then calculated; finally, whether the bounding boxes collide is judged, and if a collision occurs the viewpoint is deleted.
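The core of directed (oriented) bounding box collision detection is the standard separating-axis test over 15 candidate axes. A sketch (box given as centre c, 3×3 column-axis matrix R, half-extents e; numpy assumed):

```python
import numpy as np

def obb_collide(c1, R1, e1, c2, R2, e2):
    """True iff no separating axis exists among the 15 SAT candidates."""
    A = [R1[:, i] for i in range(3)]
    B = [R2[:, i] for i in range(3)]
    axes = A + B + [np.cross(a, b) for a in A for b in B]
    d = np.asarray(c2, float) - np.asarray(c1, float)
    for ax in axes:
        n = np.linalg.norm(ax)
        if n < 1e-9:                      # parallel edge pair gives a zero axis; skip
            continue
        ax = ax / n
        r1 = sum(e1[i] * abs(ax @ R1[:, i]) for i in range(3))
        r2 = sum(e2[i] * abs(ax @ R2[:, i]) for i in range(3))
        if abs(ax @ d) > r1 + r2:
            return False                  # separating axis found: no collision
    return True
```

The measuring unit's box is rebuilt from each viewpoint pose, then tested against the object box and the scene box in turn.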
III. Occlusion detection
Occlusion detection includes occlusion of a viewpoint and occlusion of a line of sight. Occlusion of a viewpoint means that there is occlusion of the model itself in the scanning direction.
Fig. 12(a) is a schematic view of view occlusion in an embodiment of the present invention, in which the arrow direction is a scanning direction.
In the scanning direction, the point P on the surface S is occluded by an element of the CAD model itself, resulting in inability to scan. The occlusion of the line of sight means that there is an occlusion in the camera line of sight direction.
As shown in fig. 12(b), a schematic view of line-of-sight occlusion in an embodiment of the present invention, the camera center line of sight 4 of the structured light measuring unit is occluded by an element of the CAD model itself, so that at least one camera cannot image the point on the curved surface S and scanning is impossible; the line-of-sight occlusion area 5 is the area that cannot be scanned. When the viewpoint is occluded, the line of sight is necessarily occluded as well; but when the viewpoint is not occluded, the line of sight may still be occluded. Line-of-sight occlusion must therefore be detected together with viewpoint occlusion.
The occlusion of the viewpoint is occlusion by the CAD model itself in the scanning direction. A ray is cast from the viewpoint (and from each of the two camera optical centers of the structured light measuring unit) toward the sampling point corresponding to the viewpoint; if the nearest intersection of this ray with the CAD model is the sampling point itself, no occlusion occurs. Otherwise occlusion occurs, and the patch containing the viewpoint determined to be occluded is adjusted by the occlusion detection procedure.
As shown in fig. 13(a), the ray intersects the model, and the closest intersection point 6 is P1; since P1 coincides with the sampling point P, it is determined that no occlusion occurs.
As shown in fig. 13(b), the intersection points of the ray with the model are 7, 8 and 9; since the point P1 closest to the ray origin is not the sampling point P, occlusion is considered to occur.
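The ray/model intersection reduces to ray/triangle tests over the tessellated CAD model; the classic Möller–Trumbore routine is one way to implement it (pure-Python sketch; the `occluded` wrapper and its signature are illustrative):

```python
def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore: distance t along the ray orig + t*d to the triangle, or None."""
    e1, e2 = _sub(v1, v0), _sub(v2, v0)
    p = _cross(d, e2)
    det = _dot(e1, p)
    if abs(det) < eps:
        return None                      # ray parallel to the triangle plane
    inv = 1.0 / det
    s = _sub(orig, v0)
    u = _dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = _cross(s, e1)
    v = _dot(d, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = _dot(e2, q) * inv
    return t if t > eps else None

def occluded(orig, d, t_sample, triangles, tol=1e-6):
    """The sample at parameter t_sample is occluded if some triangle is hit before it."""
    hits = [t for tri in triangles if (t := ray_triangle(orig, d, *tri)) is not None]
    return any(t < t_sample - tol for t in hits)
```

In the patent's scheme this test is run for the viewpoint ray and for the rays from both camera optical centers, covering viewpoint and line-of-sight occlusion.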
If occlusion occurs, the viewpoint cannot be deleted directly, because direct deletion may cause that the target patch corresponding to the viewpoint cannot be scanned, and therefore occlusion adjustment needs to be performed after occlusion is detected, so that a scannable state of the patch is achieved.
Fig. 14(a) is a schematic diagram of an occlusion adjustment principle in an embodiment of the present invention, in which a candidate test point 11 and a sampling point 12 at an occlusion part are shown.
As shown in fig. 14(b), a schematic diagram of occlusion adjustment on a curved surface in an embodiment of the present invention, the patch containing the occluded viewpoint is taken as the target patch. Starting from the sampling point of the target patch, the viewpoint is rotated by an angle a around the scanning direction (occlusion adjustment direction 10), and a series of points are taken on the rotation arc at steps of angle b as candidate viewpoints. Whether viewpoint occlusion and line-of-sight occlusion still exist is judged candidate by candidate until an occlusion-free point is found; that point is the adjusted viewpoint. The angles a and b are generally about 15 degrees.
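One way to generate the candidate arc is Rodrigues' rotation: tilt the sample-to-viewpoint vector by angle a about a perpendicular axis, then sweep it around the original scan axis in steps of b (a geometric sketch under that reading; helper names are assumptions):

```python
import math

def _norm(v):
    L = math.sqrt(sum(c * c for c in v))
    return tuple(c / L for c in v)

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def _rodrigues(v, k, th):
    """Rotate vector v about unit axis k by angle th (Rodrigues' formula)."""
    kv = _cross(k, v)
    kd = sum(a * b for a, b in zip(k, v))
    c, s = math.cos(th), math.sin(th)
    return tuple(v[i] * c + kv[i] * s + k[i] * kd * (1 - c) for i in range(3))

def candidate_viewpoints(sample, vp, a_deg=15.0, b_deg=15.0):
    """Tilt sample->viewpoint by a, then sweep every b degrees around the scan axis;
    every candidate keeps the original scanning distance."""
    d = tuple(v - s for v, s in zip(vp, sample))
    axis = _norm(d)
    ref = (1.0, 0.0, 0.0) if abs(axis[0]) < 0.9 else (0.0, 1.0, 0.0)
    perp = _norm(_cross(axis, ref))
    tilted = _rodrigues(d, perp, math.radians(a_deg))
    steps = int(round(360.0 / b_deg))
    return [tuple(s + c for s, c in zip(sample, _rodrigues(tilted, axis, math.radians(i * b_deg))))
            for i in range(steps)]
```

Each candidate is then re-checked for viewpoint and line-of-sight occlusion until a free one is found.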
c. Viewpoint simplification
Each surface of the CAD model has at least one sampling point, and each sampling point corresponds to a viewpoint; when the model is complex and contains many faces, there are correspondingly many viewpoints and the scanning efficiency is low. Viewpoint simplification is therefore needed to improve scanning efficiency while guaranteeing scanning completeness.
In one embodiment, the pruning the view to obtain the valid view comprises:
traversing the neighborhood points of each sampling point, deleting the sampling points with the distance less than the first preset distance from the sampling point according to the first preset distance r, and deleting the viewpoint corresponding to the sampling point;
searching neighborhood viewpoints of each viewpoint according to a second preset distance D, calculating within the neighborhood the scanning area on the CAD model covered by each viewpoint, deleting viewpoints whose scanning areas are identical or differ by less than a preset threshold, recording the scanning area corresponding to each remaining viewpoint, then continuing to traverse the next viewpoint and its neighborhood, and stopping the calculation once every patch of the CAD model is covered, to obtain the effective viewpoints.
Specifically, the viewpoint reduction algorithm of the present embodiment includes two parts, first performing preliminary reduction according to sampling points, and then performing reduction according to viewpoints. According to the simplification principle of the sampling points, the neighborhood points of each point are traversed, the closer sampling points are deleted according to the first preset distance r, and meanwhile, the viewpoints corresponding to the sampling points are deleted, so that the initial simplification of the viewpoints is realized.
As shown in fig. 15, which is a schematic diagram of a preliminary simplification principle in the embodiment of the present invention, in the diagram, P0 is a current sampling point, r is a search radius, the size of r is generally not greater than half of the radius of an inscribed circle of a standard scanning format of a structured light measurement unit, and Pn1 and Pn2 are neighborhood points to be deleted.
After the preliminary simplification, many viewpoints may still remain, and the viewpoints themselves are simplified again. In this step, the neighborhood viewpoints of each viewpoint are searched within a second preset distance D; the region of the model scannable from each viewpoint in the neighborhood is calculated; viewpoints whose scanning regions are identical or differ by less than 10% are deleted; the scanning region corresponding to each remaining viewpoint is recorded; and the traversal continues with the next viewpoint until every patch of the model is covered. The search distance D is generally the inscribed-circle radius of the standard scanning field of the structured light measuring unit, and can be decreased or increased appropriately without affecting the simplification result.
As shown in fig. 16, a schematic diagram of viewpoint simplification in an embodiment of the present invention, the area to be scanned has three candidate viewpoints P1, P2 and P3; P2 gives the most complete scan at the optimal scanning angle, so the two neighboring viewpoints P1 and P3 of P2 are deleted.
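The preliminary, radius-based stage of the simplification can be sketched as a greedy pruning pass (the quadratic scan is illustrative; a k-d tree would be used at scale):

```python
def prune_by_radius(points, r):
    """Greedy radius pruning: a point survives only if no already-kept point lies
    within r of it; the viewpoints of pruned samples are deleted alongside them."""
    kept = []
    for p in points:
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) >= r * r for q in kept):
            kept.append(p)
    return kept

print(prune_by_radius([(0, 0, 0), (0.1, 0, 0), (5.0, 0, 0)], 1.0))  # [(0, 0, 0), (5.0, 0, 0)]
```

As the text notes, r would be at most half the inscribed-circle radius of the unit's standard scanning field.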
d. Path generation
After all viewpoints are generated, they are associated with the actual structured light measuring units. The correspondences are fully permuted, and the cost function of the system under each permutation is calculated (the cost function comprises the number of rotations of the rotary stage, the distance between a viewpoint and its structured light measuring unit, and the angle between the viewpoint direction and the unit direction). The permutation with the minimum cost is selected as the actual correspondence, i.e., the optimal scanning path, and a series of poses of the rotary stage (its rotation angles) is generated. In one specific embodiment, two scanning positions are finally generated; in practice there may be more, depending on the specific type of the object to be measured.
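The exhaustive search over correspondences can be sketched with `itertools.permutations` (toy scalar cost standing in for the combined rotation/distance/angle cost; assumes at least as many units as viewpoints):

```python
import itertools

def best_assignment(viewpoints, units, cost):
    """Evaluate every full permutation of unit-to-viewpoint correspondences
    and keep the one with the minimum total cost."""
    best, best_cost = None, float("inf")
    for perm in itertools.permutations(units, len(viewpoints)):
        c = sum(cost(v, u) for v, u in zip(viewpoints, perm))
        if c < best_cost:
            best, best_cost = list(zip(viewpoints, perm)), c
    return best, best_cost

# 1-D stand-in: cost = |viewpoint position - unit position|.
pairs, total = best_assignment([9.0, 1.0], [0.0, 10.0], lambda v, u: abs(v - u))
print(pairs, total)  # [(9.0, 10.0), (1.0, 0.0)] 2.0
```

Full permutation is only tractable for the small unit counts of this system; larger instances would call for an assignment solver instead.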
(3) Teaching scan detection
As shown in fig. 17, after completing the path planning, the teaching scanning and detecting the object to be detected based on the optimal scanning path includes:
s41: adjusting the position posture of the object to be measured according to the optimal scanning path, and scanning the object to be measured by adopting a plurality of structured light measuring units in sequence to obtain point cloud;
s42: splicing the point clouds obtained by scanning together by using the calibration result and the optimal scanning path to obtain a complete three-dimensional model of the object to be detected;
s43: the three-dimensional model is precisely aligned with the CAD model after being gridded, and then corresponding point pair distances are obtained according to the relation of adjacent point pairs, wherein the point pair distance set is an object size deviation detection result;
s44: and saving the scanning detection parameters in the teaching scanning detection process as the scanning detection template.
(4) Batch testing
The system calls the scanning detection template to perform batch detection on the objects to be measured.
As shown in fig. 18, after an object to be measured moves to the measuring position, the scanning detection template automatically adjusts its position and posture while sequentially controlling the plurality of structured light measuring units; the units perform projection, shooting and calculation in turn, realizing full-field high-speed scanning and batch measurement.
The size deviation of the object obtained by the invention reflects the deviation between design and manufacture and indicates problems in the manufacture of the object to be measured, thereby guiding production improvement and the screening of defective parts.
An embodiment of the present application further provides a control apparatus, including a processor and a storage medium for storing a computer program; wherein a processor is adapted to perform at least the method as described above when executing the computer program.
Embodiments of the present application also provide a storage medium for storing a computer program, which when executed performs at least the method described above.
Embodiments of the present application further provide a processor, where the processor executes a computer program to perform at least the method described above.
The storage medium may be implemented by any type of volatile or non-volatile storage device, or a combination thereof. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferromagnetic Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The storage media described in connection with the embodiments of the invention are intended to comprise, without being limited to, these and any other suitable types of memory.
In the several embodiments provided in the present application, it should be understood that the disclosed system and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments, and the specific implementation of the invention is not to be considered limited to these descriptions. For those skilled in the art to which the invention pertains, several equivalent substitutions or obvious modifications may be made without departing from the spirit of the invention, all of which shall be deemed to fall within the protection scope of the invention.

Claims (10)

1. A multi-structure light measuring unit online measuring method is characterized by comprising the following steps:
s1: calibrating an online measuring system of a multi-structure light measuring unit to obtain a calibration result, wherein the online measuring system of the multi-structure light measuring unit comprises a plurality of structure light measuring units and a rotary objective table; the rotary object stage is used for bearing an object to be measured and adjusting the position of the object to be measured;
s2: aligning the CAD model of the object to be measured with the actually measured three-dimensional model of the object to be measured;
s3: based on the aligned CAD model of the object to be measured, obtaining an optimal scanning path by taking the minimum cost function of the multi-structure light measuring unit online measuring system as a target, wherein the cost function comprises the rotation times of the rotary object stage, the distance between a viewpoint on the CAD model and the structure light measuring unit, and the included angle between the direction of the viewpoint on the CAD model and the direction of the structure light measuring unit;
s4: teaching scanning detection is carried out on the object to be detected based on the optimal scanning path, and scanning detection parameters in the teaching scanning detection process are stored as a scanning detection template;
s5: and carrying out batch detection on the object to be detected by adopting the multi-structure light measuring unit online measuring system based on the scanning detection template.
2. The method of claim 1, wherein aligning the CAD model of the object under test with the actually measured three-dimensional model of the object under test comprises:
s21: the object to be measured is rotationally scanned for a circle according to a preset angle, and all scanning point clouds are aligned by using the calibration result and the preset angle to obtain the actually measured three-dimensional model of the object to be measured;
s22: aligning the CAD model of the object to be measured and the actually measured three-dimensional model to obtain an alignment relation, and calculating by using the alignment relation to obtain a spatial rotation matrix R and a translation matrix T;
s23: converting the CAD model into the coordinate system of the actually measured three-dimensional model based on the spatial rotation matrix R and the translation matrix T.
3. The multi-structure light measuring unit on-line measuring method of claim 2, wherein obtaining the optimal scanning path comprises the steps of:
s31: sampling each surface of the CAD model to obtain a series of sampling points;
s32: acquiring sampling points on each surface of the CAD model and normal directions of the sampling points, and generating a viewpoint according to the positions and normal directions of the sampling points and the standard scanning distance of the structured light measuring unit, wherein the viewpoint is a scanning point location, and the scanning direction is the direction of a connecting line between the viewpoint and the sampling points;
s33: simplifying the viewpoint to obtain an effective viewpoint;
s34: and corresponding the effective viewpoint to the structured light measuring unit, carrying out full arrangement on the corresponding relation, then calculating the cost function under each arrangement, and selecting a group of arrangements with the minimum cost function as an actual corresponding relation to generate a series of poses of the rotary object stage to obtain the optimal scanning path.
4. The multi-structured light measuring unit on-line measuring method of claim 3, wherein sampling each face of the surface of the CAD model to obtain a series of sample points comprises:
and acquiring the boundary of each patch in a parameter domain, then taking a straight line in one direction in the parameter domain to intersect with the parameter domain, and taking the central point in the intersection point as a sampling point.
5. The method of on-line measurement by a multi-structured light measuring unit of claim 4, wherein sampling each side of the surface of the CAD model to obtain a series of sample points further comprises:
dividing each patch of the surface of the CAD model according to the scanning breadth of the structured light measuring unit, wherein the specific formula is as follows:
face = small patch, if S < k·S0;  face = large patch, if S ≥ k·S0
wherein face denotes the patch, S is the area of the patch, S0 is the scanning field area of the structured light measuring unit, and k is a proportionality coefficient;
for a patch with an area smaller than a preset area, a central point is taken from the parameter domain;
for the surface patches with the areas larger than or equal to the preset area, equally dividing the parameter domain to obtain a plurality of regions, and then taking the central point of each region as a sampling point;
for a patch with curvature greater than the preset curvature, inserting at least one more point in the direction with curvature greater than the preset curvature.
6. The multi-structure light measuring unit online measuring method of claim 5, wherein obtaining the sampling points on each face of the CAD model and the normal directions of the sampling points further comprises orienting the normal directions consistently, specifically comprising:
dispersing the CAD model into a point cloud by using a model sampling mode to obtain a point cloud model and obtain the sampling points;
triangulating the point cloud model by using the Delaunay triangulation principle and calculating the normal direction of each patch;
calculating the normal direction of the nearest triangle of the sampling points, and correcting the normal direction through a ring of neighborhood triangles of the sampling points:
and judging the included angle between the average direction of the triangles in the neighborhood and the original normal n and-n of the sampling point, wherein the included angle smaller than the preset angle is the correct direction.
7. The method of claim 6, further comprising collision detection and occlusion detection;
the collision detection principle adopts collision detection based on a directional bounding box; calculating the directional bounding boxes of the structured light measuring unit according to the viewpoint poses of the viewpoints, calculating the bounding box of the measured object and the bounding box of the scene, and then judging whether the two bounding boxes collide; if collision occurs, deleting the viewpoint;
the occlusion detection comprises occlusion of a viewpoint and occlusion of a line of sight; the occlusion of the viewpoint is the occlusion of the CAD model in the scanning direction; judging whether the intersection point of a ray taking the viewpoint and two camera optical centers of the structured light measuring unit as starting points and a sampling point connecting line corresponding to the viewpoint as a direction and the CAD model is the sampling point, if so, no shielding occurs; if not, occlusion occurs, and adjusting a facet where a viewpoint where the occlusion is determined to occur through the occlusion detection specifically includes:
and taking a surface patch where the viewpoint which is shielded is located as a target surface patch, taking a sampling point of the target surface patch as a starting point, rotating around the scanning direction by an angle a, taking a series of points on a rotating arc according to an angle b as candidate viewpoints, and judging whether the points still have viewpoint shielding and sight shielding one by one until finding out points without shielding, wherein the points are the viewpoint after adjustment.
8. The multi-structured light measuring unit online measuring method of claim 7, wherein the reducing the viewpoint to obtain the valid viewpoint comprises:
traversing the neighborhood points of each sampling point, deleting the sampling points with the distance less than the first preset distance from the sampling point according to the first preset distance r, and deleting the viewpoint corresponding to the sampling point;
searching neighborhood viewpoints of the viewpoints according to a second preset distance D, calculating scanning areas on all the CAD models scanned by each viewpoint in the neighborhood viewpoints, deleting points with the same scanning areas or differences within a preset threshold value, recording the scanning areas corresponding to each viewpoint, continuously traversing the next viewpoint and the neighborhood viewpoints, and stopping calculation after each patch on the CAD models is scanned to obtain the effective viewpoints.
9. The multi-structured light measuring unit on-line measuring method according to claim 8, wherein the teaching scan detection of the object to be measured based on the optimal scan path comprises:
S41: adjusting the position and posture of the object to be measured according to the optimal scanning path, and scanning the object to be measured with the plurality of structured light measuring units in sequence to obtain point clouds;
S42: stitching the scanned point clouds together using the calibration result and the optimal scanning path to obtain a complete three-dimensional model of the object to be measured;
S43: meshing the three-dimensional model and precisely aligning it with the CAD model, then obtaining the corresponding point-pair distances from the adjacent point-pair relations, the set of point-pair distances being the object size-deviation detection result;
S44: saving the scanning detection parameters used in the teaching scan detection process as the scanning detection template.
10. A multi-structured light measuring unit online measuring system, characterized by comprising:
a plurality of structured light measuring units, each comprising two high-resolution industrial cameras and a blue-light grating generator, for scanning the object to be measured to obtain three-dimensional point clouds;
a frame for adjusting the position and posture of the structured light measuring units;
a rotary object stage for bearing the object to be measured and adjusting its position;
a control unit configured to perform the method according to any one of claims 1-9.
CN202210220284.XA 2022-03-08 2022-03-08 Multi-structure light measurement unit online measurement method and system Pending CN114638795A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210220284.XA CN114638795A (en) 2022-03-08 2022-03-08 Multi-structure light measurement unit online measurement method and system

Publications (1)

Publication Number Publication Date
CN114638795A true CN114638795A (en) 2022-06-17

Family

ID=81947017

Country Status (1)

Country Link
CN (1) CN114638795A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116341327A (en) * 2023-03-28 2023-06-27 北京科技大学 Automatic planning method and device for high-precision measuring field
CN117201685A (en) * 2023-11-06 2023-12-08 中国民航大学 Surface coverage scanning method, device, equipment and medium for three-dimensional object

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103995496A (en) * 2014-04-28 2014-08-20 南京航空航天大学 Aircraft part high-precision matching component processing method based on digital measurement
CN110966932A (en) * 2019-11-22 2020-04-07 杭州思看科技有限公司 Structured light three-dimensional scanning method based on known mark points
CN112577447A (en) * 2020-12-07 2021-03-30 新拓三维技术(深圳)有限公司 Three-dimensional full-automatic scanning system and method
CN112818428A (en) * 2020-12-31 2021-05-18 新拓三维技术(深圳)有限公司 Light full-automatic scanning path planning method for CAD model surface structure
CN113962932A (en) * 2021-09-10 2022-01-21 上海大学 Thread detection method based on three-dimensional modeling

Similar Documents

Publication Publication Date Title
CN106408609B (en) A kind of parallel institution end movement position and posture detection method based on binocular vision
CN112161619B (en) Pose detection method, three-dimensional scanning path planning method and detection system
US7010157B2 (en) Stereo image measuring device
CN114638795A (en) Multi-structure light measurement unit online measurement method and system
EP2551633B1 (en) Three dimensional distance measuring device and method
Gai et al. A novel dual-camera calibration method for 3D optical measurement
EP2161537A2 (en) Optical position measuring apparatus based on projection of grid patterns
WO2018201677A1 (en) Bundle adjustment-based calibration method and device for telecentric lens-containing three-dimensional imaging system
CN112733428B (en) Scanning attitude and coverage path planning method for optical measurement
US20020169586A1 (en) Automated CAD guided sensor planning process
CN114460093B (en) Aeroengine defect detection method and system
Zong et al. A high-efficiency and high-precision automatic 3D scanning system for industrial parts based on a scanning path planning algorithm
CN112381921B (en) Edge reconstruction method and system
Jokinen Self-calibration of a light striping system by matching multiple 3-d profile maps
CN105574812A (en) Multi-angle three-dimensional data registration method and device
CA3233222A1 (en) Method, apparatus and device for photogrammetry, and storage medium
CN111353997A (en) Real-time three-dimensional surface defect detection method based on fringe projection
Jin et al. A new multi-vision-based reconstruction algorithm for tube inspection
CN109186942A (en) The test parallelism detection method, apparatus and readable storage medium storing program for executing of structure light video camera head
CN115187612A (en) Plane area measuring method, device and system based on machine vision
Wu et al. A measurement method of free-form tube based on multi-view vision for industrial assembly
JP4112077B2 (en) Image measurement processing method and apparatus, and recording medium recording image measurement processing program
Jin et al. A multi-vision-based system for tube inspection
CN111583388A (en) Scanning method and device of three-dimensional scanning system
Qin et al. A novel hierarchical iterative hypothesis strategy for intrinsic parameters calibration of laser structured-light weld vision sensor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination