WO2021226716A1 - System and method for discrete point coordinate and orientation detection in 3d point clouds - Google Patents

System and method for discrete point coordinate and orientation detection in 3d point clouds

Info

Publication number
WO2021226716A1
WO2021226716A1 (PCT/CA2021/050659, CA2021050659W)
Authority
WO
WIPO (PCT)
Prior art keywords
target
point
point cloud
global coordinate
interest
Prior art date
Application number
PCT/CA2021/050659
Other languages
French (fr)
Inventor
Mohammad Mahdi SHARIF
Shang Kun Li
Carl Thomas HAAS
Mark Pecen
Original Assignee
Glove Systems Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Glove Systems Inc.
Publication of WO2021226716A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G01S17/42 Simultaneous measurement of distance and other co-ordinates
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker

Definitions

  • the disclosure is generally directed at feature detection and more specifically at a method and system for discrete point coordinate and orientation detection.
  • termination points are points of connection between assemblies, sub-assemblies, or modules. Termination points are also defined in local coordinate systems where assemblies are connected or constrained. Furthermore, termination points are identifiable parametric features on assemblies that are idealized by points. As such, the accurate mating of termination points in connected assemblies is required.
  • Existing methods to assist in the manufacture of components include the use of (1) discrete point collection devices such as laser trackers and total stations, or (2) manual hand measurement tools. These industrially accepted methods are highly manual and time-consuming to implement and operate. As a result of the sophistication required in using surveying grade tools, these procedures are only applied on projects with high tolerance demands.
  • 3D as-built point clouds offer an alternative approach for finding and verifying the compliance of termination points.
  • a challenge exists in using 3D as-built point clouds for fabrication and control of termination points.
  • the challenge is the detection of termination (critical) points in a point cloud.
  • point clouds include millions of data points, making it difficult to discern termination points from other points.
  • the first technology that is used is photogrammetry.
  • in this reconstruction method, two-dimensional (2D) images taken by a camera with known parameters are used.
  • To reconstruct a point cloud of a certain object at least two images of the object are taken and the common features between corresponding images are detected. Using the relative position of the camera to the images, a point cloud is then reconstructed.
  • A second technology that is used is terrestrial laser scanning (TLS).
  • Laser scanners may be seen as another source of obtaining 3D data.
  • phase shift based scanners use the difference between the wavelength of an outbound and inbound laser beam to calculate the distance of a point.
  • Time of Flight based scanners calculate the time that it takes for a laser beam to hit an object and come back. Using the time and knowing the speed of the laser beam, the distances between the objects and the scanner are calculated.
  • utilizing laser scanners involves several manual tasks and sophisticated knowledge for post-processing the data that is obtained which is very time consuming. Therefore, the use of laser scanners has been limited to engineering firms employing well-trained engineers using complex software due to time and value costs.
  • a third technology that is used is Simultaneous Localization and Mapping (SLAM).
  • SLAM-based scanners automate the registration process by keeping track of the location of the scanning device itself. These scanners use various sensors and different algorithms such as GraphSLAM, particle filter, and extended Kalman filter to approximately map the location of the scanning unit with respect to the object.
  • structured lighting is one of the most commonly used methods.
  • An infrared (IR) projector and one sensor within a certain distance of the projector are combined.
  • the projector projects speckle patterns on the objects, and the sensor calculates the distance of each point to the sensor itself.
  • In order to use triangulation, two separate images must be captured. However, the accuracy of point clouds obtained can vary based on the path taken for mapping and on environmental conditions such as lighting.
  • the disclosure is directed at a method and apparatus for discrete point coordinate and orientation detection.
  • the disclosure is directed at a physical fixture, or target, that is used to assist in obtaining a three-dimensional (3D) point cloud (and specific points within the 3D point cloud) in an improved manner.
  • the disclosure is directed at methods for the detection of coordinates and orientation of termination, or reference, points in a 3D point cloud representing an assembly or workspace.
  • the target is designed with a geometric configuration to allow for automatic detection of the target’s reference points’ coordinate and orientation.
  • the target is designed such that edge detection, feature detection, and/or machine learning based detection may be used to find the coordinates and orientation of the target’s reference points. The calculation of these reference points supports existing measurement approaches that rely on the accurate calculation of these points on assemblies or within workspaces.
  • Figure 1 is a schematic diagram of a system for discrete point coordinate and orientation detection in its operational environment
  • Figure 2 is a schematic diagram of a component for the system of Figure 1 ;
  • Figure 3 is a flowchart outlining a method of discrete point coordinate and orientation detection
  • Figure 4 is a set of photos showing the placement of a target atop an object of interest
  • Figures 5a to 5d are views of a first embodiment of a target
  • Figure 6 is a flowchart outlining a method of determining a position of an object of interest within a 3D point cloud using the target of Figures 5a to 5d;
  • Figure 7 is a point cloud representation of a target
  • Figure 8 is a picture of a target in a scanned point cloud
  • Figures 9a to 9c are views of a second embodiment of a target
  • Figure 10 is a flowchart outlining a method of determining a position of an object of interest within a 3D point cloud using the target of Figures 9a to 9c;
  • Figure 11 is a schematic diagram showing a relationship between the spheres and a unit vector of the target of Figures 9a to 9c;
  • Figures 12a and 12b are schematic diagrams of a third embodiment of a target.
  • Figure 13 is a flowchart outlining a method of determining a position of an object of interest within a 3D point cloud using the target of Figures 12a and 12b.
  • the disclosure is directed at a method and system for discrete point coordinate and orientation detection.
  • the disclosure is directed at a three-dimensional (3D) target for assisting in automatic, reliable, and repeatable discrete point detection in 3D as-built point clouds.
  • the disclosure is directed at a method of determining termination points within the 3D point cloud using a target.
  • the system includes a physical fixture, or target, that is used to provide a reference point to determine a 3D point cloud.
  • the physical fixture may be based on a specific geometry to calculate the coordinate and orientation of reference points.
  • the physical fixture may be based on unique patterns that can be detected by a 3D acquisition unit.
  • Termination points: the coordinate points in a coordinate space, or system, of an object of interest within an assembly that is connected to another assembly or is constrained. Along with the location of the coordinate points within the coordinate system, the orientation of the assembly associated with the termination points is required. This is because, for assemblies to fit in practice, the connection surfaces (for example, flange faces) on both ends of the connecting assemblies must meet each other at the same location with the same orientation so that they may be “flush” with each other.
  • Target: a physical fixture that is installed on or mounted to a termination point of an object of interest.
  • Reference Point: a known location within the target that is used to determine a location of the target in the global coordinate system, or space.
  • the reference point may also be used to determine an orientation of the target within the coordinate space along with a normal vector.
  • the reference point may or may not be the center point of a target and depends on how the reference point is defined with respect to the target.
  • the target is installed or positioned at the termination point such that one of the reference points on the target is superimposed on the termination point.
  • a location of a reference point within the target is found within the 3D point cloud and a transformation matrix calculated that enables specific, or other points, in the global coordinate space to be plotted or located within the 3D point cloud.
  • the system 100 includes a 3D target 102 that is placed atop an object of interest 104.
  • the target includes at least one reference point that is defined with respect to the design of the target. More specifically, the target 102 is placed atop a termination point of the object of interest 104.
  • the object of interest 104 is typically located within an assembly or workspace 106 along with other objects 108.
  • the system 100 further includes a camera 110 (or position determining device) connected to a central processing unit (CPU) 111.
  • the camera, or position determining device, or data acquisition processor may be a laser scanner, a calibrated still camera; a calibrated video camera or a SLAM (Simultaneous Localization and Mapping) based data acquisition system or any other 3D image acquisition unit.
  • the camera 110 is directed at the target 102 to determine a position of the target within the global coordinate space and to also capture a 3D point cloud of the target in the 3D point cloud. Once the location of the target is determined, the system 100 may then determine the position of the target 102 and/or the object of interest 104 within a 3D point cloud of the workspace or assembly 106.
  • FIG. 2 a schematic diagram of one embodiment of the CPU of the system is shown.
  • the CPU 111 calculates or determines the location or position of the termination points within a 3D point cloud.
  • the CPU relies on RANSAC (RANdom SAmple Consensus), spatial geometry, pattern recognition, feature detection, and/or deep learning to determine or calculate the targets’ reference point locations within the 3D point cloud to assist in generating a transformation matrix.
  • the system may provide more accurate detection of termination points for assembly and structure fabrication.
  • the CPU 111 includes a data processing module 120, a data acquisition and processing module 122 and a data convergence module 124.
  • the data acquisition and processing module 122 may include a position detector processor 126 (which may be seen as a module that performs the functionality to locate the target in the global coordinate space and the 3D point cloud) and a point of interest vector processor, or module, 128 that performs the functionality to find reference points within the target in the 3D point cloud based on the transformation matrix.
  • the point of interest vector processor 128 may also provide the functionality of determining either, or both, of the reference point and the termination point, in the global coordinate space and then determining their correspondence in the resulting 3D point cloud.
  • the data acquisition and processing module 122 may receive inputs from an external source 130 such as, but not limited to, a laser scanning system, a photogrammetry system or a simultaneous localization and mapping (SLAM) system which may provide the 3D point cloud to the system.
  • the data convergence module 124 may include a point of interest detector 132.
  • In operation, the data processing module 120 may determine a position of interest (or termination point) within, or on, the object of interest.
  • the target is then placed atop the position of interest.
  • placement of the mobile target may be automated with the data processing module 120 (or other module) providing instructions to a robot or the like to place the target on the position of interest.
  • the target may be manually placed atop the position of interest.
  • the data acquisition and processing module 122 may then generate, or receive, a 3D scan point cloud based on the inputs from the external source 130 which are then fed to the position detector processor 126 and/or the point of interest vector processor 128.
  • the data acquisition and processing module 122 receives the point of interest information from the data processing module 120 and then calculates or determines a transformation matrix or correspondence matrix to translate the position of the point of interest in the normal, or the global coordinate, space with the position of the point of interest in the 3D point cloud. This will be described in more detail below.
  • Once the transformation matrix has been calculated, it is transmitted to the data convergence module 124 which then performs the functionality of reporting, or displaying, termination points in either or both of the global coordinate system and 3D point cloud.
  • FIG. 3 a flowchart outlining a first method of discrete point coordinate orientation detection is shown.
  • the system determines a point of interest (1000). In one embodiment, this may be performed by the data processing module 120 after receiving an input from a user who has selected a termination point.
  • the point of interest may also be the termination point of the object of interest within an assembly.
  • the target is placed on the point of interest
  • An image of the target is then obtained (1004) in the global coordinate space either by a camera or retrieved from memory.
  • the obtained image may then be processed (1006) to determine the locations or coordinates of specific components of the target within the global coordinate space. For instance, the system may determine centers of spheres that are part of the target and then determine their position with respect to a reference point within the target. In another embodiment, the system may use edge detection to determine shapes or spaces within the target with respect to a reference point or reference points within the target.
  • the system may then calculate locations or positions of the target within the 3D point cloud (1010).
  • the 3D point cloud may be generated by an external source and then inputted and received by the system.
  • the 3D point cloud may be generated via known methodologies, such as, but not limited to, photogrammetry, terrestrial laser scanning or Simultaneous Localization and Mapping.
  • a transformation matrix is then calculated (1012).
  • the transformation matrix is based on a comparison of the locations of specific components of the target in the global coordinate space with the location of the specific components of the target in the 3D point cloud.
  • the transformation matrix may then be applied to the coordinates or location of the termination point in the global coordinate space to determine its corresponding position within the 3D point cloud (1014).
  • a normal vector may also be generated to indicate a direction of the object of interest.
  • Figure 4a shows an object of interest with the point of interest (or termination point) circled.
  • Figure 4b shows a 3D model of the object of interest with the point of interest circled and
  • Figure 4c shows the target placed on the point of interest so that correspondence between the point of interest and the 3D point cloud may be determined.
  • In Figures 5a to 5d, schematic diagrams of a first embodiment of a 3D target, or mobile target, 102 are provided.
  • Figure 5a is a front view
  • Figure 5b is a top view
  • Figure 5c is a bottom view
  • Figure 5d is a perspective view of the target.
  • the target 102 may be seen as one that allows for accurate detection of reference point coordinate and orientation due to its geometry. It is understood that other geometric configurations may be utilized without affecting the scope of the disclosure.
  • the 3D target 102 includes a main base 140 that may include a magnetic bottom layer 142 that enables the target 102 to be easily attached to a stainless-steel structure or metal object of interest.
  • the main base 140 may not include a magnetic bottom but the bottom of the main base 140 may be shaped to correspond with the shape of an object of interest.
  • the base 140 may include a connection point 143 that may also be seen as a reference point for the target 102 in some embodiments.
  • the target 102 includes three (3) spheres 144 (including a first sphere 144a, a second sphere 144b and a third sphere 144c) with corresponding shafts 146 (including first shaft 146a, second shaft 146b and third shaft 146c) connecting the spheres 144 to the base 140.
  • D1, D2, and D3 represent diameters of the first sphere, the second sphere and the third sphere, respectively; L1, L2, and L3 represent lengths of the first shaft, the second shaft and the third shaft, respectively; A1, A2, and A3 represent diameters of the first shaft, the second shaft and the third shaft, respectively; Db represents a diameter of the main base; and R1, R2, and R3 represent the distances of the bases of the first shaft, the second shaft and the third shaft, respectively, with respect to a reference point.
  • the reference point may be the same as the point of the interest within the object of interest if the connection point 143 of the target is placed directly on the termination point of the object of interest (and the connection point is the reference point of the target).
  • Calculation of the reference point in the global coordinate space and then applying the transformation matrix to the reference point (termination point) provides a correspondence between the position of the object of interest in the 3D scan point cloud and the object of interest in the global coordinate system. This will be described in more detail below.
  • the shafts may not be cylindrical and therefore A1 , A2 and A3 may represent a cross-section of the shaft measured perpendicular to the length of the shaft.
  • the base may be any shape and does not have to be cylindrical whereby Db may represent a cross-section of the base measured through its centre perpendicular to the lengths of the shaft.
  • FIG. 6 is a flowchart outlining a specific embodiment of the method of Figure 3 using the target of Figures 5a to 5d.
  • the methodology of Figure 6 includes three sections, seen as an input section 150, a processing section 152 and an output section 154.
  • there are two inputs in the input section 150 namely, a 3D model of the target 156 and a 3D point cloud of the target placed on the termination point 158 that are received by the system.
  • the 3D point cloud may be provided by a photogrammetry, TLS or SLAM, or any other 3D acquisition device system as discussed above.
  • the distances between the centers of the three spheres (R1, R2, R3) with respect to the connection point (or reference point or the termination point of the object in this example) are known, and the coordinates of these points are known from the 3D model 156.
  • the relationships between the center of the spheres and the reference point are known since the shape and design of the target is known. Based on this understanding, certain parameters may be calculated as follows:
  • The orientation of the reference point: N_CD = (N_XD, N_YD, N_ZD)
  • the coordinates of the spheres within the 3D point cloud are unknown, however, they may be calculated, such as discussed below.
  • the prime (') designation is used to differentiate the point cloud coordinates from the 3D model coordinates.
  • This further processing may be performed either using an automated process or a semi-automated process. It may also be manual.
  • an octree data structure may be implemented to structure the data, or coordinate, points.
  • An octree is used to structure the data in a way that the closest points to an arbitrary point are searchable.
  • Implementation of the octree data structure allows for compartmentalization of points into neighbourhoods based on their Euclidean distances.
  • the octree methodology allows for stopping the subdivision based on the number of points inside a bin. The information regarding the neighbouring bins also becomes accessible.
  • RANSAC may be used to search for spheres in each bin.
  • the input parameters for the RANSAC algorithm are the radius of the sphere that is being searched for and the level of confidence, which is a measure that controls the strictness of the acceptance threshold.
  • Determining C_A', C_B', and C_C' may require an initiation point for the RANSAC algorithm.
  • the user will be asked to provide an initiation point for Sphere 1 in the scan point cloud. Using that point or input, the best sphere will be fitted to Sphere 1 and the coordinate of C_A' will be determined.
  • the potential positions for the center of Sphere 2 can then be determined, as they are limited to the surface of a sphere centered at C_A' whose radius is the distance between C_A and C_B. Since the potential answer space has been substantially reduced, the RANSAC algorithm can find the center of Sphere 2.
  • a similar method can be applied for Sphere 3 to find, or determine, C_C'.
  • C_A', C_B', and C_C' are determined or calculated using this semi-automated method.
  • the target may be mapped or drawn in the 3D point scan.
  • a mapping between these centres in the 3D model (C_A, C_B, C_C) and in the 3D point cloud (C_A', C_B', C_C') is then performed, since they are expressed in different coordinate systems.
  • the mapping allows the coordinates of the centres of the spheres in the 3D model coordinate system to be mapped to the 3D point cloud coordinate system to determine a correspondence between the coordinates 160.
  • a sorting algorithm is applied to find a correct correspondence between spheres. In other words, to correspond the coordinates of the centers of spheres 1 , 2 and 3 in the global coordinate system to their locations in the 3D point cloud.
  • a principal component analysis may be used to find the transformation matrix (T) 162.
  • the system may then display or report these values. It is understood that if the reference point and the termination point are not the same, further calculations may be performed to adjust for the spacing between the reference point and the termination point.
  • the target 200 includes a main base 202, a magnetic layer 204 located at a bottom of the main base 202, a set of spheres 204 (which in the current embodiment is two) including a first sphere 204a and a second sphere 204b stacked on top of each other and a set of shafts 206 connecting the spheres 204 to each other and the base 202.
  • a first shaft 206a may be seen as the shaft connecting the two spheres 204a and 204b and a second shaft 206b may be seen as the shaft connecting the spheres to the base.
  • the spheres may be other shapes and the shafts may not necessarily be cylindrical.
  • the connection point, or reference point, of the target is placed on the termination point such that the reference point and the termination point are at the same location or position in the global coordinate space.
  • the dimensions in Figures 9a to 9c may be seen as D1 and D2, which represent the diameters of the first 204a and second 204b spheres, respectively; L1 and L2, which represent lengths of first shaft 206a and second shaft 206b; and A1, A2, and A3, which represent a length, width and height of the base of the current embodiment.
  • the base may also be cylindrical.
  • FIG 10 a schematic diagram of a methodology for determining correspondence between an object in a global coordinate space and its position in a 3D point cloud using the target of Figures 9a to 9c is shown.
  • the methodology includes an input section 210, a processing section 212 and an output section 214 that determine, and then report, the termination point’s coordinate in the 3D point cloud.
  • a 3D model of the target 216 is input to the system along with a 3D point cloud 218 which may be provided by an external source.
  • C_A' = (X_A', Y_A', Z_A') for sphere 1
  • C_B' = (X_B', Y_B', Z_B') for sphere 2
  • C_C = (X_C, Y_C, Z_C) for the reference point.
  • the coordinates of the centers of the spheres in the point cloud may be determined using an octree data structure to structure the data points.
  • RANSAC may be used to search for spheres in each bin.
  • a principal length of the bins may be changed.
  • an input parameter of the RANSAC algorithm is the radius of the sphere that is being searched for and a level of confidence which is a measure that controls a strictness of an acceptance threshold.
  • For the semi-automated process, an initiation point for the RANSAC method is inputted and received by the system. For instance, a user may be requested to provide a point on sphere 1 in the 3D point cloud. Using that point, the best sphere will be fitted to sphere 1 and the coordinates of C_A' may be determined. Knowing C_A' and L1, potential locations, or coordinates, for the center of sphere 2 are limited to the surface of a sphere of radius L1 centered at C_A'. With the set of possible coordinates for the center of sphere 2 so reduced, the RANSAC algorithm can find the center of sphere 2, and hence C_A' and C_B' are found. Knowing the coordinates of C_A' and C_B', a unit vector connecting the spheres and the direction of the normal vector can be calculated (220). The unit vector is obtained by normalizing the vector between the two centers, e.g. u = (C_A' - C_B') / |C_A' - C_B'| (see the illustrative sketch after this list).
  • the coordinates of the termination point in the 3D point cloud may be displayed or reported (226).
  • the coordinates of the termination point may be transmitted to a manufacturing control system. The transmission may be performed via a local area network, a wireless network or a direct connection to the manufacturing control system.
  • the target may be seen as one that uses edge detection functionality to determine the coordinates.
  • the target 180 includes a frame portion 182 that houses or holds a screen or plate 184.
  • the plate 184 may be seen as a window portion that includes a dark colored portion 185 and a set of light colored portions 186. Alternatively, the dark and light colored portions may be reversed. In another embodiment, the plate 184 may include a solid portion and a set of holes, or voids, therein. In the current embodiment, the target 180 includes a set of three reference points 187. In the current embodiment, the light colored portions, or holes, are circular; however, other shapes are contemplated.
  • the frame portion 182 further includes corner portions 188 along with pins 190 along sides of the frame portion 182 to hold parts of the frame or target together although other framing mechanisms may be contemplated.
  • FIG. 12 shows one embodiment used to calculate reference points using this target type where the reference points are represented by R1 , R2 and R3 (as shown in Figure 12).
  • the methodology includes an input section 300, a processing section 302 and an output section 304.
  • input section 300 receives a 3D model of the target 306 and a 3D point cloud of the target placed on the termination point 308. The methodology will work regardless of the line of sight, as long as the three (3) circles are visible.
  • the coordinates of the circles in the 3D point cloud are unknown but may be calculated.
  • the inputs (306, 308 and possibly others) may be processed by an automated or a semi-automated process although other processes may be contemplated.
  • the centers of the circles in the 3D point cloud are determined or calculated.
  • an edge detection algorithm is used. Once the edge image is produced (such as based on different intensity values between black and white pixels), a RANSAC algorithm may be used to best fit the circles to the edge image. Knowing the three circles' relative location and diameters, the RANSAC algorithm determines the coordinates of the three centers in the 3D point cloud space (310). Using the three centers, correspondence between the centres in the two spaces is found and sorted (312). A transformation matrix (T) is then calculated to transform the 3D model of the target within the 3D point cloud (314). The transformed location of reference points will be taken as the coordinate and orientation of reference points.
  • the output section may then report or display the position of the termination point within the 3D point cloud (318).
  • One advantage of the current disclosure is the provision of an apparatus for integrating 3D data as part of the fabrication of components and/or assemblies.
  • the incorporated or integrated data creates a feedback loop system where the data can then be leveraged for more accurate and less error prone fabrication.
  • Another advantage is that the system and method of the disclosure allow for creating more accurate and/or repeatable, one-to-one point detection between 3D as-built objects and 3D as-built point clouds. This system and method also allow for detection of specific points as part of the 3D fabrication control apparatus.
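The following is a brief illustrative sketch (in Python with NumPy, with hypothetical coordinates) of the unit-vector and orientation calculation referred to in the excerpts above for the two-sphere target. Which sphere is treated as the upper one, and the offset from the lower sphere center down to the reference point along the target axis, are assumptions made for illustration only and are not values taken from the disclosure.

    import numpy as np

    def target_axis_and_reference(c_a_prime, c_b_prime, reference_offset):
        """Given the detected centers of the upper sphere (C_A') and lower sphere (C_B')
        of the two-sphere target, return the unit vector along the target axis (which also
        gives the surface-normal direction at the termination point) and the estimated
        reference-point coordinate, assumed to lie `reference_offset` below C_B' along
        that axis."""
        c_a = np.asarray(c_a_prime, dtype=float)
        c_b = np.asarray(c_b_prime, dtype=float)
        axis = c_a - c_b
        u = axis / np.linalg.norm(axis)          # unit vector from the lower to the upper sphere
        reference_point = c_b - reference_offset * u
        return u, reference_point

    # Hypothetical detected sphere centers (metres), for illustration only.
    u, ref = target_axis_and_reference([1.00, 2.00, 0.60], [1.00, 2.00, 0.45],
                                       reference_offset=0.10)
    print(u)    # orientation (normal) direction at the termination point
    print(ref)  # estimated termination-point coordinate in the point cloud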

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A method and system of discrete point coordinate and orientation detection of an object of interest for use in determining a location of the object of interest in a 3D point cloud. The system includes a target that includes a set of 3D components. The positions of the 3D components in a global coordinate space are compared with the positions of the 3D components in the 3D point cloud to generate a transformation matrix that may then be applied to a termination point associated with the object of interest to determine a position of the termination point within the 3D point cloud.

Description

SYSTEM AND METHOD FOR DISCRETE POINT COORDINATE AND ORIENTATION
DETECTION IN 3D POINT CLOUDS
Cross-reference to other applications
[0001] The disclosure claims priority from US Provisional Application No. 63/023,497, filed May 12, 2020, which is hereby incorporated by reference.
Field
[0002] The disclosure is generally directed at feature detection and more specifically at a method and system for discrete point coordinate and orientation detection.
Background
[0003] In the context of industrial fabrication, termination points are points of connection between assemblies, sub-assemblies, or modules. Termination points are also defined in local coordinate systems where assemblies are connected or constrained. Furthermore, termination points are identifiable parametric features on assemblies that are idealized by points. As such, the accurate mating of termination points in connected assemblies is required. Existing methods to assist in the manufacture of components include the use of (1) discrete point collection devices such as laser trackers and total stations, or (2) manual hand measurement tools. These industrially accepted methods are highly manual and time-consuming to implement and operate. As a result of the sophistication required in using surveying grade tools, these procedures are only applied on projects with high tolerance demands.
[0004] Using 3D as-built point-clouds offers an alternative approach for finding and verifying the compliance of termination points. However, a challenge exists in using 3D as-built point clouds for fabrication and control of termination points. The challenge is the detection of termination (critical) points in a point cloud. Unlike the existing methods, point clouds include millions of data points, making it difficult to discern termination points from other points.
[0005] Currently, three main technologies exist for acquiring a three-dimensional (3D) point cloud of an object.
[0006] The first technology that is used is photogrammetry. In this reconstruction method, two-dimensional (2D) images taken by a camera with known parameters are used. To reconstruct a point cloud of a certain object, at least two images of the object are taken and the common features between corresponding images are detected. Using the relative position of the camera to the images, a point cloud is then reconstructed.
[0007] Currently, substantial research focus has been dedicated to the use of drones utilizing photogrammetry on construction sites, with the drones being used for quality inspection, safety inspection, and field survey. However, photogrammetry can be time-consuming and inaccurate. For more accurate photogrammetry, high-resolution cameras are utilized and multiple images of the inspected scene are captured. However, doing so requires the manipulation of massive amounts of data, which becomes substantially more challenging and results in a rapid increase in cost that may be unacceptable for certain projects.
[0008] A second technology that is currently being used is terrestrial laser scanning (TLS). Laser scanners may be seen as another source of obtaining 3D data. To capture a point cloud with a laser scanner, two main technologies exist, namely phase shift and time of flight. Phase shift based scanners use the difference between the wavelength of an outbound and inbound laser beam to calculate the distance of a point. Time of Flight based scanners calculate the time that it takes for a laser beam to hit an object and come back. Using the time and knowing the speed of the laser beam, the distances between the objects and the scanner are calculated. However, utilizing laser scanners involves several manual tasks and sophisticated knowledge for post-processing the data that is obtained, which is very time-consuming. Therefore, the use of laser scanners has been limited to engineering firms employing well-trained engineers using complex software due to time and value costs.
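As a brief illustration of the ranging principles just described (these are standard relations for such scanners, included here for context rather than taken from this disclosure): a time-of-flight scanner recovers range from the measured round-trip time as d = c·Δt/2, where c is the speed of the laser beam and Δt is the round-trip time; a phase-shift scanner recovers range from the measured phase difference of the modulated beam as d = c·Δφ/(4πf) + n·c/(2f), where f is the modulation frequency and the integer n reflects the range ambiguity that such scanners must resolve.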
[0009] A third technology that is used is Simultaneous Localization and Mapping (SLAM). SLAM-based scanners automate the registration process by keeping track of the location of the scanning device itself. These scanners use various sensors and different algorithms such as GraphSLAM, particle filter, and extended Kalman filter to approximately map the location of the scanning unit with respect to the object. For localizing, structured lighting is one of the most commonly used methods. An infrared (IR) projector and one sensor within a certain distance of the projector are combined. The projector projects speckle patterns on the objects, and the sensor calculates the distance of each point to the sensor itself. In order to use triangulation, two separate images must be captured. However, the accuracy of point clouds obtained can vary based on the path taken for mapping and on environmental conditions such as lighting.
[0010] Therefore, there is provided a novel method and system for discrete point coordinate and orientation detection.
Summary
[0011] The disclosure is directed at a method and apparatus for discrete point coordinate and orientation detection. In one embodiment, the disclosure is directed at a physical fixture, or target, that is used to assist in obtaining a three-dimensional (3D) point cloud (and specific points within the 3D point cloud) in an improved manner. In another embodiment, the disclosure is directed at methods for the detection of coordinates and orientation of termination, or reference, points in a 3D point cloud representing an assembly or workspace.
[0012] In one embodiment of a target, the target is designed with a geometric configuration to allow for automatic detection of the target’s reference points’ coordinate and orientation. In another embodiment of a target, the target is designed such that edge detection, feature detection, and/or machine learning based detection may be used to find the coordinates and orientation of the target’s reference points. The calculation of these reference points supports existing measurement approaches that rely on the accurate calculation of these points on assemblies or within workspaces.
[0013] The utilization of a target in conjunction with 3D data acquisition units provides a novel measurement method as an intermediary solution between manual tools (i.e., tape measure) and surveying grade tools, such as total stations and laser trackers.
Brief Description of the Drawings
[0014] The foregoing and other features and advantages of the disclosure will be apparent from the following description of embodiments thereof as illustrated in the accompanying drawings. The accompanying drawings, which are incorporated herein and form a part of the specification, further serve to explain the principles of the disclosure and to enable a person skilled in the pertinent art to make and use the invention. The drawings are not to scale.
[0015] Figure 1 is a schematic diagram of a system for discrete point coordinate and orientation detection in its operational environment;
[0016] Figure 2 is a schematic diagram of a component for the system of Figure 1 ;
[0017] Figure 3 is a flowchart outlining a method of discrete point coordinate and orientation detection;
[0018] Figure 4 is a set of photos showing the placement of a target atop an object of interest;
[0019] Figures 5a to 5d are views of a first embodiment of a target;
[0020] Figure 6 is a flowchart outlining a method of determining a position of an object of interest within a 3D point cloud using the target of Figures 5a to 5d;
[0021] Figure 7 is a point cloud representation of a target;
[0022] Figure 8 is a picture of a target in a scanned point cloud;
[0023] Figures 9a to 9c are views of a second embodiment of a target;
[0024] Figure 10 is a flowchart outlining a method of determining a position of an object of interest within a 3D point cloud using the target of Figures 9a to 9c;
[0025] Figure 11 is a schematic diagram showing a relationship between the spheres and a unit vector of the target of Figures 9a to 9c;
[0026] Figures 12a and 12b are schematic diagrams of a third embodiment of a target; and
[0027] Figure 13 is a flowchart outlining a method of determining a position of an object of interest within a 3D point cloud using the target of Figures 12a and 12b.
Description of the Disclosure
[0028] The disclosure is directed at a method and system for discrete point coordinate and orientation detection. In one embodiment, the disclosure is directed at a three-dimensional (3D) target for assisting in automatic, reliable, and repeatable discrete point detection in 3D as-built point clouds. In another embodiment, the disclosure is directed at a method of determining termination points within the 3D point cloud using a target.
[0029] In one embodiment, the system includes a physical fixture, or target, that is used to provide a reference point to determine a 3D point cloud. In one embodiment, the physical fixture may be based on a specific geometry to calculate the coordinate and orientation of reference points. In another embodiment, the physical fixture may be based on unique patterns that can be detected by a 3D acquisition unit.
[0030] For ease of understanding of the disclosure, certain definitions are provided.
[0031] Termination points: the coordinate points in a coordinate space, or system, of an object of interest within an assembly that is connected to another assembly or is constrained. Along with the location of the coordinate points within the coordinate system, the orientation of the assembly associated with the termination points is required. This is because, for assemblies to fit in practice, the connection surfaces (for example, flange faces) on both ends of the connecting assemblies must meet each other at the same location with the same orientation so that they may be “flush” with each other.
[0032] Target: a physical fixture that is installed on or mounted to a termination point of an object of interest.
[0033] Reference Point: a known location within the target that is used to determine a location of the target in the global coordinate system, or space. The reference point may also be used to determine an orientation of the target within the coordinate space along with a normal vector. The reference point may or may not be the center point of a target and depends on how the reference point is defined with respect to the target.
[0034] In one embodiment of the disclosure, to find correspondence between a termination point of an object of interest within an assembly in the global coordinate system and its location or position within a 3D point cloud, the target is installed or positioned at the termination point such that one of the reference points on the target is superimposed on the termination point. After positioning the target, a location of a reference point within the target is found within the 3D point cloud and a transformation matrix calculated that enables specific, or other points, in the global coordinate space to be plotted or located within the 3D point cloud.
[0035] Turning to Figure 1, a schematic diagram of a system for discrete point coordinate orientation detection in its operational environment is shown. The system 100 includes a 3D target 102 that is placed atop an object of interest 104. The target includes at least one reference point that is defined with respect to the design of the target. More specifically, the target 102 is placed atop a termination point of the object of interest 104. The object of interest 104 is typically located within an assembly or workspace 106 along with other objects 108. The system 100 further includes a camera 110 (or position determining device) connected to a central processing unit (CPU) 111. In some embodiments, the camera, or position determining device, or data acquisition processor may be a laser scanner, a calibrated still camera, a calibrated video camera, a SLAM (Simultaneous Localization and Mapping) based data acquisition system or any other 3D image acquisition unit.
[0036] In use, the camera 110 is directed at the target 102 to determine a position of the target within the global coordinate space and to also capture a 3D point cloud of the target in the 3D point cloud. Once the location of the target is determined, the system 100 may then determine the position of the target 102 and/or the object of interest 104 within a 3D point cloud of the workspace or assembly 106.
[0037] Turning to Figure 2, a schematic diagram of one embodiment of the CPU of the system is shown. Based on inputs from the camera and other external sources, the CPU 111 calculates or determines the location or position of the termination points within a 3D point cloud. In some embodiments, the CPU relies on RANSAC (RANdom SAmple Consensus), spatial geometry, pattern recognition, feature detection, and/or deep learning to determine or calculate the targets’ reference point locations within the 3D point cloud to assist in generating a transformation matrix. In another embodiment, the system may provide more accurate detection of termination points for assembly and structure fabrication.
[0038] In the current embodiment, the CPU 111 includes a data processing module 120, a data acquisition and processing module 122 and a data convergence module 124. The data acquisition and processing module 122 may include a position detector processor 126 (which may be seen as a module that performs the functionality to locate the target in the global coordinate space and the 3D point cloud) and a point of interest vector processor, or module, 128 that performs the functionality to find reference points within the target in the 3D point cloud based on the transformation matrix. The point of interest vector processor 128 may also provide the functionality of determining either, or both, of the reference point and the termination point, in the global coordinate space and then determining their correspondence in the resulting 3D point cloud. Correspondence between the position of the reference point within the target in the global coordinate space and the position of the target in the 3D point cloud is determined or calculated, such that the reference point within the 3D point cloud along with the surface normal can be obtained or generated by the data convergence module 124. The data acquisition and processing module 122 may receive inputs from an external source 130 such as, but not limited to, a laser scanning system, a photogrammetry system or a simultaneous localization and mapping (SLAM) system which may provide the 3D point cloud to the system. The data convergence module 124 may include a point of interest detector 132.
[0039] In operation, the data processing module 120 may determine a position of interest (or termination point) within, or on, the object of interest. Once determined, the target is then placed atop the position of interest. In some embodiments, placement of the mobile target may be automated with the data processing module 120 (or other module) providing instructions to a robot or the like to place the target on the position of interest. In other embodiments, once the position of interest is determined, the target may be manually placed atop the position of interest.
[0040] The data acquisition and processing module 122 may then generate, or receive, a 3D scan point cloud based on the inputs from the external source 130 which are then fed to the position detector processor 126 and/or the point of interest vector processor 128. The data acquisition and processing module 122 receives the point of interest information from the data processing module 120 and then calculates or determines a transformation matrix or correspondence matrix to translate the position of the point of interest in the normal, or the global coordinate, space with the position of the point of interest in the 3D point cloud. This will be described in more detail below. Once the transformation matrix has been calculated, it is transmitted to the data convergence module 124 which then performs the functionality of reporting, or displaying, termination points in either or both of the global coordinate system and 3D point cloud.
[0041] Turning to Figure 3, a flowchart outlining a first method of discrete point coordinate orientation detection is shown. Initially, the system determines a point of interest (1000). In one embodiment, this may be performed by the data processing module 120 after receiving an input from a user who has selected a termination point. The point of interest may also be the termination point of the object of interest within an assembly.
[0042] After the point of interest is determined, the target is placed on the point of interest (1002). This may be performed manually or may be automated by the system. An image of the target is then obtained (1004) in the global coordinate space either by a camera or retrieved from memory. The obtained image may then be processed (1006) to determine the locations or coordinates of specific components of the target within the global coordinate space. For instance, the system may determine centers of spheres that are part of the target and then determine their position with respect to a reference point within the target. In another embodiment, the system may use edge detection to determine shapes or spaces within the target with respect to a reference point or reference points within the target.
[0043] After receiving a 3D point cloud (1008), the system may then calculate locations or positions of the target within the 3D point cloud (1010). The 3D point cloud may be generated by an external source and then inputted and received by the system. For instance, the 3D point cloud may be generated via known methodologies, such as, but not limited to, photogrammetry, terrestrial laser scanning or Simultaneous Localization and Mapping.
[0044] A transformation matrix is then calculated (1012). In one embodiment, the transformation matrix is based on a comparison of the locations of specific components of the target in the global coordinate space with the location of the specific components of the target in the 3D point cloud. The transformation matrix may then be applied to the coordinates or location of the termination point in the global coordinate space to determine its corresponding position within the 3D point cloud (1014). A normal vector may also be generated to indicate a direction of the object of interest.
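By way of illustration only, the following is a minimal sketch, in Python with NumPy, of one common way such a transformation matrix could be computed from corresponding target components and then applied to a termination point. It assumes that at least three corresponding centers are available in both coordinate systems and uses a standard least-squares rigid-body fit (the SVD-based Kabsch/Horn method); the function names and coordinates are hypothetical, and the disclosure does not prescribe this particular implementation.

    import numpy as np

    def rigid_transform(model_pts, cloud_pts):
        """Least-squares rotation R and translation t mapping model_pts onto cloud_pts
        (Kabsch/Horn method via SVD). Inputs are N x 3 arrays of corresponding points,
        e.g. sphere centers C_A, C_B, C_C and their point cloud counterparts."""
        P = np.asarray(model_pts, dtype=float)
        Q = np.asarray(cloud_pts, dtype=float)
        p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
        H = (P - p_mean).T @ (Q - q_mean)        # 3 x 3 cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection solution
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = q_mean - R @ p_mean
        return R, t

    def apply_transform(R, t, point):
        """Map a point expressed in the global (model) frame into the point cloud frame."""
        return R @ np.asarray(point, dtype=float) + t

    # Hypothetical sphere-center coordinates (metres), for illustration only.
    model_centers = [[0.00, 0.00, 0.10], [0.05, 0.00, 0.14], [0.00, 0.05, 0.18]]
    cloud_centers = [[1.20, 0.80, 0.35], [1.25, 0.80, 0.39], [1.20, 0.85, 0.43]]
    R, t = rigid_transform(model_centers, cloud_centers)
    termination_point_model = [0.0, 0.0, 0.0]    # e.g. the target's reference/connection point
    print(apply_transform(R, t, termination_point_model))

A normal vector defined for the reference point in the model frame can likewise be carried into the point cloud frame by multiplying it by the rotation R alone, since a direction is unaffected by the translation t.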
[0045] One example of how the target may be placed on the point of interest is shown with respect to Figures 4a to 4c. Figure 4a shows an object of interest with the point of interest (or termination point) circled. Figure 4b shows a 3D model of the object of interest with the point of interest circled and Figure 4c shows the target placed on the point of interest so that correspondence between the point of interest and the 3D point cloud may be determined.
[0046] Turning to Figures 5a to 5d, schematic diagrams of a first embodiment of a 3D target, or mobile target, 102 are provided. Figure 5a is a front view, Figure 5b is a top view, Figure 5c is a bottom view and Figure 5d is a perspective view of the target. With respect to the current embodiment, the target 102 may be seen as one that allows for accurate detection of reference point coordinate and orientation due to its geometry. It is understood that other geometric configurations may be utilized without affecting the scope of the disclosure.
[0047] As shown, the 3D target 102 includes a main base 140 that may include a magnetic bottom layer 142 that enables the target 102 to be easily attached to a stainless-steel structure or metal object of interest. In other embodiments, the main base 140 may not include a magnetic bottom but the bottom of the main base 140 may be shaped to correspond with the shape of an object of interest. As shown in Figure 5c, the base 140 may include a connection point 143 that may also be seen as a reference point for the target 102 in some embodiments.
[0048] In the current embodiment, the target 102 includes three (3) spheres 144 (including a first sphere 144a, a second sphere 144b and a third sphere 144c) with corresponding shafts 146 (including first shaft 146a, second shaft 146b and third shaft 146c) connecting the spheres 144 to the base 140. In other embodiments, there may be a different number of spheres and corresponding shafts with each embodiment including at least two spheres and two corresponding shafts.
[0049] As shown in Figures 5a to 5d, the dimensions of the components (spheres, shafts and base) of the target 102 may be represented with the following nomenclature:
[0050] D1, D2, and D3 represent diameters of the first sphere, the second sphere and the third sphere, respectively; L1, L2, and L3 represent lengths of the first shaft, the second shaft and the third shaft, respectively; A1, A2, and A3 represent diameters of the first shaft, the second shaft and the third shaft, respectively; Db represents a diameter of the main base; and R1, R2, and R3 represent the distances of the bases of the first shaft, the second shaft and the third shaft, respectively, with respect to a reference point.
[0051] In one embodiment, the reference point may be the same as the point of the interest within the object of interest if the connection point 143 of the target is placed directly on the termination point of the object of interest (and the connection point is the reference point of the target). Calculation of the reference point in the global coordinate space and then applying the transformation matrix to the reference point (termination point) provides a correspondence between the position of the object of interest in the 3D scan point cloud and the object of interest in the global coordinate system. This will be described in more detail below.
[0052] In other embodiments, the shafts may not be cylindrical and therefore A1 , A2 and A3 may represent a cross-section of the shaft measured perpendicular to the length of the shaft. Also, in other embodiments, the base may be any shape and does not have to be cylindrical whereby Db may represent a cross-section of the base measured through its centre perpendicular to the lengths of the shaft.
[0053] The problem of finding the termination point on an assembly within the 3D point cloud can be facilitated by finding the reference point on the target using a 3D point cloud. It will be understood that the reference point and the termination point may not always be at the same location or position, and that each target may have multiple reference points. Figure 6 is a flowchart outlining a specific embodiment of the method of Figure 3 using the target of Figures 5a to 5d.
[0054] The methodology of Figure 6 includes three sections, seen as an input section 150, a processing section 152 and an output section 154. In the current embodiment, there are two inputs in the input section 150, namely a 3D model of the target 156 and a 3D point cloud of the target placed on the termination point 158, that are received by the system. The 3D point cloud may be provided by a photogrammetry, TLS or SLAM system, or any other 3D data acquisition device, as discussed above.
[0055] In the processing section 152, certain information regarding the target in the 3D model
156 is calculated, generated, or obtained. This is done with respect to the target in the global coordinate space, such as by using the camera as a point of reference. As understood, the distances between the centers of the three spheres (R1, R2, R3) and the connection point (or reference point, or the termination point of the object in this example) are known, and the coordinates of these points are known from the 3D model 156. The relationships between the centers of the spheres and the reference point are known since the shape and design of the target are known. Based on this understanding, certain parameters may be calculated as follows:
• The coordinate of the center of Sphere 1 (R1 = (D1)/2): CA = (XA, YA, ZA)
• The coordinate of the center of Sphere 2 (R2 = (D2)/2): CB = (XB, YB, ZB)
• The coordinate of the center of Sphere 3 (R3 = (D3)/2): CC = (XC, YC, ZC)
• The coordinate of the reference point: CD = (XD, YD, ZD)
• The orientation of the reference point: NCD = (NXD, NYD, NZD)
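Purely for illustration, the model-side knowns listed above might be collected in code as follows. This is a minimal sketch in Python/NumPy; every numeric value and variable name below is a hypothetical placeholder rather than a dimension or identifier taken from the disclosure, and the later sketches in this description reuse the same conventions.

```python
import numpy as np

# Model-space (global coordinate) knowns read from the 3D model of the target.
# All numeric values are hypothetical placeholders.
sphere_radii = np.array([0.020, 0.020, 0.020])   # R1, R2, R3 (metres)
model_centers = np.array([                        # CA, CB, CC
    [0.000,  0.060, 0.150],
    [0.052, -0.030, 0.150],
    [-0.052, -0.030, 0.150],
])
ref_point = np.array([0.0, 0.0, 0.0])             # CD, the connection/reference point
ref_normal = np.array([0.0, 0.0, 1.0])            # NCD, orientation at the reference point
```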
[0056] With respect to the 3D point cloud of the target placed on the reference point input
158, an example of this input is schematically shown in Figure 7. Depending on the line of sight between the camera and the target and the geometry of the reference point, a different point cloud representation may be obtained. The disclosure will work regardless of the line of sight, as long as the spheres are visible to the camera in at least one of the viewpoints.
[0057] As understood, the coordinates of the spheres within the 3D point cloud are unknown; however, they may be calculated, as discussed below. For the positions of the centers of the spheres in the 3D point cloud, the prime (′) designation is used to differentiate the nomenclature.
• The coordinate of the center of Sphere 1 (R1 = (D1)/2): CA′ = (XA′, YA′, ZA′)
• The coordinate of the center of Sphere 2 (R2 = (D2)/2): CB′ = (XB′, YB′, ZB′)
• The coordinate of the center of Sphere 3 (R3 = (D3)/2): CC′ = (XC′, YC′, ZC′)
[0058] To determine the CA′, CB′ and CC′ coordinate values, further processing is performed. This further processing may be performed using an automated process or a semi-automated process; it may also be performed manually.
[0059] For the automated process, an octree data structure may be implemented to structure the data, or coordinate, points. An octree is used to structure the data in a way that the closest points to an arbitrary point are searchable. Implementation of the octree data structure allows for compartmentalization of points into neighbourhoods based on their Euclidean distances. The octree methodology allows for stopping the subdivision based on the number of points inside a bin. The information regarding the neighbouring bins also becomes accessible.
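By way of non-limiting illustration, an octree-style subdivision of the scan may be sketched as follows. The sketch is in Python/NumPy; the recursion-depth limit, the points-per-bin stopping criterion and all function names are illustrative assumptions (the description above also contemplates stopping once a bin reaches a principal length of about 2·Db, which this simplified sketch does not enforce).

```python
import numpy as np

def octree_bins(points, min_points=50, max_depth=8, _depth=0):
    """Recursively subdivide a point set into octants until a bin holds no more
    than `min_points` points or `max_depth` is reached.
    Returns a list of index arrays (one per bin) that partition the input cloud."""
    idx = np.arange(len(points))
    if len(points) <= min_points or _depth >= max_depth:
        return [idx]
    center = points.mean(axis=0)                      # split planes through the centroid
    octant = ((points[:, 0] > center[0]).astype(int)
              + 2 * (points[:, 1] > center[1]).astype(int)
              + 4 * (points[:, 2] > center[2]).astype(int))
    bins = []
    for o in range(8):
        sel = idx[octant == o]
        if len(sel) == 0:
            continue
        # recurse on each non-empty octant, mapping child indices back to this level
        for child in octree_bins(points[sel], min_points, max_depth, _depth + 1):
            bins.append(sel[child])
    return bins
```

Neighbouring bins can then be queried by comparing bin centroids, which is the sense in which "information regarding the neighbouring bins also becomes accessible."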
[0060] After dividing the space into bins where a principal length of each bin is 2*Db,
RANSAC may be used to search for spheres in each bin. The input parameters for the RANSAC algorithm are the radius of the sphere being searched for and a level of confidence, which is a measure that controls the strictness of the acceptance threshold.
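A corresponding RANSAC sphere search, run per bin, might look like the sketch below. The four-point sphere solve, the inlier tolerance, the iteration count and the `min_inlier_ratio` acceptance control (standing in for the "level of confidence" above) are illustrative assumptions; only the use of the known sphere radius follows directly from the description.

```python
import numpy as np

def fit_sphere_4pts(p):
    """Sphere through 4 points: solve x^2+y^2+z^2 + D x + E y + F z + G = 0."""
    A = np.hstack([p, np.ones((4, 1))])
    b = -(p ** 2).sum(axis=1)
    try:
        D, E, F, G = np.linalg.solve(A, b)
    except np.linalg.LinAlgError:                 # degenerate (coplanar/duplicate) sample
        return None, None
    center = -0.5 * np.array([D, E, F])
    r2 = center @ center - G
    return (center, np.sqrt(r2)) if r2 > 0 else (None, None)

def ransac_sphere(points, radius, tol=0.002, min_inlier_ratio=0.3, iters=500, rng=None):
    """Search one bin for a sphere of (approximately) the known target radius.
    `min_inlier_ratio` plays the role of the confidence / acceptance-strictness control."""
    rng = np.random.default_rng() if rng is None else rng
    best_center, best_ratio = None, 0.0
    if len(points) < 4:
        return None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 4, replace=False)]
        center, r = fit_sphere_4pts(sample)
        if center is None or abs(r - radius) > tol:
            continue                              # reject candidates far from the known radius
        d = np.linalg.norm(points - center, axis=1)
        ratio = np.mean(np.abs(d - radius) < tol)
        if ratio > best_ratio:
            best_center, best_ratio = center, ratio
    return best_center if best_ratio >= min_inlier_ratio else None
```

In use, `ransac_sphere(points[bin_idx], radius=D1/2)` would be evaluated for each bin returned by the octree step, and bins yielding an accepted fit give candidate sphere centers.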
[0061] Applying the octree data structure and RANSAC algorithm determines the center of the sphere, or spheres, in the 3D point cloud space; hence the coordinates of CA′, CB′, CC′ may be calculated in the coordinate system of the 3D point cloud. Once the coordinates CA′, CB′, CC′ are found, the target may be mapped or drawn in the 3D point scan. A schematic diagram of the spheres found in a scanned point cloud using an octree data structure and RANSAC algorithm is shown in Figure 8.
[0062] The semi-automated process for finding CA′, CB′ and CC′ may require or include an initiation point for the RANSAC algorithm. In other words, the user will be asked to provide an initiation point for sphere 1 in the scan point cloud. Using that point or input, the best sphere will be fitted to Sphere 1 and the coordinate of CA′ will be determined. Knowing CA′, D1, D2, D3, R1, R2 and R3, the potential positions for the center of sphere 2 can be determined, as they are limited to the surface of a sphere whose radius is the known distance between CA and CB. Since the potential answer space has been substantially reduced, the RANSAC algorithm can find the center of sphere 2. A similar method can be applied for sphere 3 to find, or determine, CC′. In the end, CA′, CB′ and CC′ are determined or calculated using this semi-automated method. Once the coordinates CA′, CB′, CC′ are found, the target may be mapped or drawn in the 3D point scan.

[0063] After determining the centres of the spheres, both in the 3D model (global coordinate space) and the 3D point cloud, a mapping between these centres (CA, CB, CC and CA′, CB′, CC′) is then performed, since they are expressed in different coordinate systems. The mapping allows the coordinates of the centres of the spheres in the 3D model coordinate system to be mapped to the 3D point cloud coordinate system to determine a correspondence between the coordinates 160. In one embodiment, a sorting algorithm is applied to find a correct correspondence between spheres; in other words, to correspond the coordinates of the centers of spheres 1, 2 and 3 in the global coordinate system to their locations in the 3D point cloud.
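One way to realize the sorting step just described is a brute-force match over the small set of label permutations, comparing pairwise inter-center distances. The sketch below assumes three spheres; the function name and the simple sum-of-errors criterion are illustrative assumptions.

```python
import numpy as np
from itertools import permutations

def sort_correspondence(model_centers, scan_centers):
    """Assign the labels of the model sphere centers (CA, CB, CC) to the unlabeled
    centers found in the scan (CA', CB', CC') by matching pairwise distances.
    Both inputs are (3, 3) arrays of XYZ coordinates; returns scan_centers reordered
    so that row i corresponds to model_centers[i]."""
    def pairwise(c):
        return np.array([np.linalg.norm(c[i] - c[j]) for i, j in [(0, 1), (0, 2), (1, 2)]])
    target = pairwise(model_centers)
    best_perm, best_err = None, np.inf
    for perm in permutations(range(3)):
        err = np.abs(pairwise(scan_centers[list(perm)]) - target).sum()
        if err < best_err:
            best_perm, best_err = perm, err
    return scan_centers[list(best_perm)]
```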
[0064] After sorting, a principal component analysis (PCA) may be used to find the transformation matrix (T) 162. In another embodiment, after the centres are mapped, a transformation matrix (T) is determined that overlays the set of points related to the 3D model (Pm = {CA, CB, CC}) onto the set of points related to the 3D point cloud (Ps = {CA′, CB′, CC′}). The transformation matrix may then be applied to the reference point of the 3D model 164 to determine its position within the 3D point cloud, along with its normal vector. In one embodiment, the coordinates of the reference point can be calculated as: coordinate of the reference point = T × CD. Additionally, the normal vector at the reference point can be calculated as: normal vector at the termination point = T × nD, where nD is the normal vector at the reference point in the 3D model of the target.
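The paragraph above names PCA as one way to recover T; the closely related SVD (Kabsch) least-squares alignment of the three matched centers is another, and is sketched below. The homogeneous 4×4 form of T and all function names are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

def rigid_transform(model_pts, scan_pts):
    """Least-squares rigid transform (rotation + translation) that maps the matched
    model sphere centers onto the scan centers, via SVD (Kabsch).
    Returns a 4x4 homogeneous matrix T such that scan ≈ T @ [model; 1]."""
    cm, cs = model_pts.mean(axis=0), scan_pts.mean(axis=0)
    H = (model_pts - cm).T @ (scan_pts - cs)      # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = cs - R @ cm
    return T

def map_reference_point(T, ref_point, ref_normal):
    """Apply T to the model-space reference point CD and its normal nD:
    the point uses the full transform, the normal only the rotation part."""
    p = T @ np.append(ref_point, 1.0)
    n = T[:3, :3] @ ref_normal
    return p[:3], n / np.linalg.norm(n)
```

Here `map_reference_point` corresponds to the T × CD and T × nD steps described above.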
[0065] For the output section 154, once the coordinates and normal vector of the reference point (or termination point since they are the same in this embodiment) within the 3D point cloud are determined, the system may then display or report these values. It is understood that if the reference point and the termination point are not the same, further calculations may be performed to adjust for the spacing between the reference point and the termination point.
[0066] Turning to Figures 9a to 9c, bottom, front and perspective views of a second embodiment of a target are provided. The figures show the parametric relationships and dimensions of this embodiment of the target.
[0067] In this embodiment, the target 200 includes a main base 202, a magnetic layer 204 located at a bottom of the main base 202, a set of spheres 204 (which in the current embodiment includes two spheres), namely a first sphere 204a and a second sphere 204b stacked on top of each other, and a set of shafts 206 connecting the spheres 204 to each other and to the base 202. In the current embodiment, a first shaft 206a may be seen as the shaft connecting the two spheres 204a and 204b, and a second shaft 206b may be seen as the shaft connecting the spheres to the base. As with the previous embodiment, the spheres may be other shapes and the shafts may not necessarily be cylindrical. As with the embodiment of Figures 5a to 5d, the connection point, or reference point, of the target is placed on the termination point such that the reference point and the termination point are at the same location or position in the global coordinate space.
[0068] For the current embodiment, the dimensions in Figures 9a to 9c may be seen as D1 and D2, which represent the diameters of the first 204a and second 204b spheres, respectively; L1 and L2, which represent lengths of the first shaft 206a and second shaft 206b, respectively; and A1, A2, and A3, which represent a length, width and height of the base of the current embodiment. As outlined above, the base may also be cylindrical.
[0069] Turning to Figure 10, a schematic diagram of a methodology for determining correspondence between an object in a global coordinate space and its position in a 3D point cloud using the target of Figures 9a to 9c is shown. As with the methodology of Figure 6, the methodology includes an input section 210, a processing section 212 and an output section 214 that determine, and then report, the termination point’s coordinate in the 3D point cloud.
[0070] In the input section, a 3D model of the target 216 is input to the system along with a
3D point cloud 218, which may be provided by an external source. The coordinates of the spheres in the 3D target model may be represented as CA = (XA, YA, ZA) for sphere 1 and CB = (XB, YB, ZB) for sphere 2. Within the 3D model, the coordinates of the reference point may be represented as CC = (XC, YC, ZC).
[0071] The coordinates of the centres of the spheres and the reference point in the 3D point cloud may be represented as CA′ = (XA′, YA′, ZA′) for sphere 1, CB′ = (XB′, YB′, ZB′) for sphere 2 and CC′ = (XC′, YC′, ZC′) for the reference point. The problem of finding correspondence between the object of interest in the global coordinate space and the 3D point cloud can now be expressed as finding the coordinates of CC′.
[0072] For the processing section 212, similar to Figure 6, two approaches are contemplated for finding the centres of the spheres in each space: the automated approach and the semi-automated approach.
[0073] For the automated approach, the coordinates of the centers of the spheres in the point cloud may be determined using an octree data structure to structure the data points. After dividing the space into bins where the principal length of each bin may equal 2·(A1² + A2² + A3²)^0.5, RANSAC may be used to search for spheres in each bin. In some embodiments, the principal length of the bins may be changed. In some embodiments, the input parameters of the RANSAC algorithm are the radius of the sphere being searched for and a level of confidence, which is a measure that controls the strictness of an acceptance threshold. Applying the octree data structure and RANSAC algorithm enables a determination of the centers of the spheres in the 3D point cloud; hence the coordinates of CA′ and CB′ are known in the coordinate system of the point cloud.
[0074] For the semi-automated approach, to find CA′ and CB′, an initiation point for the RANSAC method is inputted and received by the system. For instance, a user may be requested to provide a point on sphere 1 in the 3D point cloud. Using that point, the best sphere will be fitted to sphere 1 and the coordinates of CA′ may be determined. Knowing CA′ and L1, potential locations, or coordinates, for the center of sphere 2 are limited to the surface of a sphere whose radius is L1. With the possible coordinates for the center of sphere 2 so constrained, the RANSAC algorithm can find the center of sphere 2; hence CA′ and CB′ are found.

[0075] Knowing the coordinates of CA′ and CB′, a unit vector connecting the spheres and the direction of the normal vector can be calculated (220). The unit vector can be calculated as follows:
U = (CA′ − CB′) / ‖CA′ − CB′‖
[0076] The relationship between the unit vector, the normal vector at the connection point and the centres of the spheres is schematically shown in Figure 11. Knowing the unit vector and the two sphere centers in the 3D point cloud, a transformation matrix (T) may be determined (222). After determining T, the matrix may be applied to the coordinates of the reference point (termination point) in the 3D model to determine its coordinates within the 3D point cloud (224). In one embodiment, once U and CB′ are known, the coordinate of CC′ can be calculated by offsetting CB′ along U by the known distance between the center of sphere 2 and the reference point (a distance fixed by the target dimensions, such as L2 and the base height), and the normal vector at the reference point can be calculated as nC = U. In the output section 214, the coordinates of the termination point in the 3D point cloud may be displayed or reported (226). In some embodiments, the coordinates of the termination point may be transmitted to a manufacturing control system. The transmission may be performed via a local area network, a wireless network or a direct connection to the manufacturing control system.
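A minimal sketch of this two-sphere recovery step is shown below. The `offset_along_axis` parameter is a hypothetical placeholder for the known distance from the center of sphere 2 down the target axis to the reference point (derived from the target's design dimensions); its exact composition, the sign convention and the function name are assumptions rather than the disclosed formula.

```python
import numpy as np

def reference_from_two_spheres(c_a, c_b, offset_along_axis):
    """Given the fitted centers CA' and CB' of the two stacked spheres, recover the
    reference (termination) point and its normal for the two-sphere target.
    `offset_along_axis` is the assumed known distance from the center of sphere 2
    to the reference point, taken from the target's design dimensions."""
    u = (c_a - c_b) / np.linalg.norm(c_a - c_b)   # unit vector along the target axis
    c_ref = c_b - offset_along_axis * u           # step from sphere 2 toward the base
    n_ref = u                                     # normal at the reference point, nC = U
    return c_ref, n_ref
```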
[0077] Turning to Figure 12, a further embodiment of a target is shown. In the current embodiment, the target may be seen as one that uses edge detection functionality to determine the coordinates.
[0078] The target 180 includes a frame portion 182 that houses or holds a screen or plate
184. In one embodiment, the plate 184 may be seen as a window portion that includes a dark colored portion 185 and a set of light colored portions 186. Alternatively, the dark and light colored portions may be reversed. In another embodiment, the plate 184 may include a solid portion and a set of holes, or voids, therein. In the current embodiment, the target 180 includes a set of three reference points 187. In the current embodiment, the holes or light colored portions are circular; however, other shapes are contemplated. The frame portion 182 further includes corner portions 188 along with pins 190 along sides of the frame portion 182 to hold parts of the frame or target together, although other framing mechanisms may be contemplated. With the target of Figure 12, various pattern recognition algorithms can be used to determine the relationship between features (light colored portions) of the target and the reference points. For example, once the target is scanned, an edge detection algorithm may be used to detect the three centers of the light colored portions and their distances to at least one of the reference points. Figure 13 shows one embodiment used to calculate reference points using this target type, where the reference points are represented by R1, R2 and R3 (as shown in Figure 12).
[0079] Turning to Figure 13, the methodology includes an input section 300, a processing section 302 and an output section 304.

[0080] As with previous examples, the input section 300 receives a 3D model of the target 306 and a 3D point cloud of the target placed on the termination point 308. The methodology will work regardless of the line of sight, as long as the three (3) circles are visible.
[0081] The coordinates of different aspects of the target may be represented as follows: the coordinate of the center of Circle 1 (R1 = (D1)/2): C1 = (X1, Y1, Z1); the coordinate of the center of Circle 2 (R2 = (D2)/2): C2 = (X2, Y2, Z2); the coordinate of the center of Circle 3 (R3 = (D3)/2): C3 = (X3, Y3, Z3); the coordinate of reference point 1: R1 = (XR1, YR1, ZR1); the orientation of reference point 1: NR1 = (NXR1, NYR1, NZR1); the coordinate of reference point 2: R2 = (XR2, YR2, ZR2); the orientation of reference point 2: NR2 = (NXR2, NYR2, NZR2); the coordinate of reference point 3: R3 = (XR3, YR3, ZR3); and the orientation of reference point 3: NR3 = (NXR3, NYR3, NZR3). Also, the relationships between the centers of the circles and the reference points are known since the characteristics or design of the target are known.
[0082] The coordinates of the circles in the 3D point cloud are unknown but may be calculated. For ease of understanding, the coordinate of the center of Circle 1 in the 3D point cloud space may be seen as (R1 = (D1)/2): CA′ = (XA′, YA′, ZA′), the coordinate of the center of Circle 2 in the 3D point cloud may be seen as (R2 = (D2)/2): CB′ = (XB′, YB′, ZB′), and the coordinate of the center of Circle 3 in the 3D point cloud may be seen as (R3 = (D3)/2): CC′ = (XC′, YC′, ZC′).
[0083] For the processing section 302, the inputs (306, 308 and possibly others) may be processed by an automated or a semi-automated process although other processes may be contemplated.
[0084] Firstly, the centers of the circles in the 3D point cloud are determined or calculated. In one embodiment, an edge detection algorithm is used. Once the edge image is produced (such as based on different intensity values between black and white pixels), a RANSAC algorithm may be used to best fit the circles to the edge image. Knowing the three circles' relative locations and diameters, the RANSAC algorithm determines the coordinates of the three centers in the 3D point cloud space 310. Using the three centers, correspondence between the centres in the two spaces is found and sorted 312. A transformation matrix (T) is then calculated to transform the 3D model of the target into the 3D point cloud 314. The transformed locations of the reference points are taken as the coordinates and orientations of the reference points. Once T is found, the coordinate of a termination point of an assembly can be calculated as: coordinate of the termination point = T × CD, and the normal vector at the termination point can be calculated as: normal vector at the termination point = T × nD, where nD is the normal vector at the connection point in the design model of the target 316. The output section may then report or display the position of the termination point within the 3D point cloud 318.
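For the planar target, the circle-fitting half of this step might be sketched as follows. The three-point circle construction, inlier tolerance and iteration count are illustrative assumptions, and the edge points are assumed to have already been extracted (for example with any standard gradient-based edge detector) and expressed as 2D coordinates in the plane of the target.

```python
import numpy as np

def circle_from_3pts(p):
    """Circle through three 2D points: solve x^2+y^2 + D x + E y + F = 0."""
    A = np.hstack([p, np.ones((3, 1))])
    b = -(p ** 2).sum(axis=1)
    try:
        D, E, F = np.linalg.solve(A, b)
    except np.linalg.LinAlgError:                  # collinear sample
        return None, None
    center = -0.5 * np.array([D, E])
    r2 = center @ center - F
    return (center, np.sqrt(r2)) if r2 > 0 else (None, None)

def ransac_circle(edge_pts, radius, tol=1.5, iters=400, rng=None):
    """Fit a circle of (approximately) the known radius to 2D edge points;
    returns the best center found, or None."""
    rng = np.random.default_rng() if rng is None else rng
    best_center, best_inliers = None, 0
    for _ in range(iters):
        sample = edge_pts[rng.choice(len(edge_pts), 3, replace=False)]
        center, r = circle_from_3pts(sample)
        if center is None or abs(r - radius) > tol:
            continue                               # reject candidates far from the known radius
        d = np.linalg.norm(edge_pts - center, axis=1)
        inliers = int(np.sum(np.abs(d - radius) < tol))
        if inliers > best_inliers:
            best_center, best_inliers = center, inliers
    return best_center
```

Running this once per known circle radius yields the three centers, after which the correspondence sorting and transformation steps proceed as described above.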
[0085] One advantage of the current disclosure is the provision of an apparatus for integrating 3D data as part of the fabrication of components and/or assemblies. The incorporated or integrated data creates a feedback loop whereby the data can be leveraged for more accurate and less error-prone fabrication. Another advantage is that the system and method of the disclosure allow for more accurate and/or repeatable, one-to-one point detection between 3D as-built objects and 3D as-built point clouds. The system and method also allow for detection of specific points as part of a 3D fabrication control apparatus.
[0086] In the preceding description, for purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the embodiments. However, it will be apparent to one skilled in the art that these specific details may not be required. In other instances, well-known structures are shown in block diagram form in order not to obscure the understanding.
[0087] The above-described embodiments of the disclosure are intended to be examples of the present disclosure and alterations and modifications may be effected thereto, by those of skill in the art, without departing from the scope of the disclosure.

Claims

What is Claimed is:
1. A system for discrete point coordinate and orientation detection of an object of interest comprising: a target for placing on the object of interest with a termination point; an image capturing device for capturing a global coordinate system image of the target and a three-dimensional (3D) point cloud image of the target; and a processor for determining a transformation matrix between a global coordinate system and a 3D point cloud based on the global coordinate system image of the target and the 3D point cloud image of the target and to determine coordinates of the termination point within the 3D point cloud.
2. The system of Claim 1 wherein the processor comprises: a data processing module; a data acquisition and processing module; and a data convergence module.
3. The system of Claim 1 wherein the target comprises: a base; a set of 3D objects, each of the 3D objects including a center coordinate; and a set of shafts connecting the set of 3D objects to the base.
4. The system of Claim 3 wherein the set of 3D objects comprises a set of spheres.
5. The system of Claim 4 wherein the set of spheres comprises at least two spheres.
6. The system of Claim 3 wherein the base comprises a magnetic base layer.
7. The system of Claim 1 wherein the target comprises: a frame portion; and a window portion housed within the frame portion, the window portion including a light and dark colored pattern, the light and dark colored pattern including a set of light colored portions.
8. The system of Claim 1 wherein the image capturing device comprises a camera, a laser scanner, a calibrated still camera, a calibrated video camera, or a SLAM (Simultaneous Localization and Mapping) based data acquisition system.
9. A method of discrete point coordinate and orientation detection of an object of interest comprising: obtaining an image of a target in a global coordinate space; determining global coordinate space coordinates of predetermined components of the target in the global coordinate space; obtaining a three-dimensional (3D) point cloud image of the target in the 3D point cloud space; determining 3D point cloud space coordinates of the predetermined components of the target in the 3D point cloud space; calculating a transformation matrix based on the global coordinate space coordinates and the 3D point cloud space coordinates; and applying the transformation matrix to a termination point of the object of interest.
10. The method of Claim 9 further comprising: calculating a normal vector of the termination point in the 3D point cloud.
11. The method of Claim 9 wherein obtaining an image of a target in a global coordinate space comprises: placing the target on a termination point of the object of interest; directing an image capturing device at the target; and obtaining the image of the target.
12. The method of Claim 9 wherein obtaining a 3D point cloud image of the target in the 3D point cloud space comprises: receiving the 3D point cloud image from a data acquisition system.
13. The method of Claim 12 wherein the data acquisition system is based on photogrammetry, terrestrial laser scanning or simultaneous localization and mapping.
14. The method of Claim 9 wherein the calculating a transformation matrix comprises: corresponding the global coordinate space coordinates of predetermined components of the target in the global coordinate space with the 3D point cloud space coordinates of the predetermined components of the target in the 3D point cloud space.
15. The method of Claim 14 wherein the corresponding the global coordinate space coordinates of predetermined components of the target in the global coordinate space with the 3D point cloud space coordinates of the predetermined components of the target in the 3D point cloud space comprises: applying an octree structure and a RANSAC algorithm.
16. The method of Claim 9 further comprising determining a position of the termination point within the 3D point cloud.
17. The method of Claim 16 further comprising reporting or displaying the position of the termination point within the 3D point cloud.
18. The method of Claim 16 further comprising transmitting the position of the termination point within the 3D point cloud to a manufacturing control system.
PCT/CA2021/050659 2020-05-12 2021-05-12 System and method for discrete point coordinate and orientation detection in 3d point clouds WO2021226716A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063023497P 2020-05-12 2020-05-12
US63/023,497 2020-05-12

Publications (1)

Publication Number Publication Date
WO2021226716A1 true WO2021226716A1 (en) 2021-11-18

Family

ID=78525919

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2021/050659 WO2021226716A1 (en) 2020-05-12 2021-05-12 System and method for discrete point coordinate and orientation detection in 3d point clouds

Country Status (1)

Country Link
WO (1) WO2021226716A1 (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080030498A1 (en) * 2006-08-04 2008-02-07 Hong Fu Jin Precision Industry (Shenzhen) Co., Ltd System and method for integrating dispersed point-clouds of multiple scans of an object
US20190258225A1 (en) * 2017-11-17 2019-08-22 Kodak Alaris Inc. Automated 360-degree dense point object inspection

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113689471A (en) * 2021-09-09 2021-11-23 中国联合网络通信集团有限公司 Target tracking method and device, computer equipment and storage medium
CN114310892A (en) * 2021-12-31 2022-04-12 梅卡曼德(北京)机器人科技有限公司 Object grabbing method, device and equipment based on point cloud data collision detection
CN114310892B (en) * 2021-12-31 2024-05-03 梅卡曼德(北京)机器人科技有限公司 Object grabbing method, device and equipment based on point cloud data collision detection
CN114842039A (en) * 2022-04-11 2022-08-02 中国工程物理研究院机械制造工艺研究所 Coaxiality error calculation method for diamond anvil containing revolving body microstructure
CN116045851A (en) * 2023-03-31 2023-05-02 第六镜科技(北京)集团有限责任公司 Line laser profiler calibration method and device, electronic equipment and storage medium
CN118570078A (en) * 2024-07-31 2024-08-30 北京远见知行科技有限公司 SLAM precision enhancement method fusing discontinuous TLS

Similar Documents

Publication Publication Date Title
CN111815716B (en) Parameter calibration method and related device
WO2021226716A1 (en) System and method for discrete point coordinate and orientation detection in 3d point clouds
US8107722B2 (en) System and method for automatic stereo measurement of a point of interest in a scene
AU2013379669B2 (en) Apparatus and method for three dimensional surface measurement
CN105388478B (en) For detect acoustics and optical information method and apparatus and corresponding computer readable storage medium
US20090268214A1 (en) Photogrammetric system and techniques for 3d acquisition
JP7300948B2 (en) Survey data processing device, survey data processing method, program for survey data processing
US20060055943A1 (en) Three-dimensional shape measuring method and its device
JP7486740B2 (en) System and method for efficient 3D reconstruction of an object using a telecentric line scan camera - Patents.com
ES2894935T3 (en) Three-dimensional distance measuring apparatus and method therefor
US20180087910A1 (en) Methods and systems for geometrical optics positioning using spatial color coded leds
KR20020035652A (en) Strapdown system for three-dimensional reconstruction
CN112254670A (en) 3D information acquisition equipment based on optical scanning and intelligent vision integration
Aliakbarpour et al. An efficient algorithm for extrinsic calibration between a 3d laser range finder and a stereo camera for surveillance
JP2005077385A (en) Image correlation method, survey method and measuring system using them
Bergström et al. Automatic in-line inspection of shape based on photogrammetry
Yamauchi et al. Calibration of a structured light system by observing planar object from unknown viewpoints
Holdener et al. Design and implementation of a novel portable 360 stereo camera system with low-cost action cameras
CN112254677B (en) Multi-position combined 3D acquisition system and method based on handheld device
Zhou et al. A new algorithm for computing the projection matrix between a LIDAR and a camera based on line correspondences
CN112304250B (en) Three-dimensional matching equipment and method between moving objects
CN112254671B (en) Multi-time combined 3D acquisition system and method
Zhang et al. A novel global calibration method for vision measurement system based on mirror-image stereo target
Ahrnbom et al. Calibration and absolute pose estimation of trinocular linear camera array for smart city applications
Xu et al. A real-time ranging method based on parallel binocular vision

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21803374

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21803374

Country of ref document: EP

Kind code of ref document: A1