US11893744B2 - Methods and apparatus for extracting profiles from three-dimensional images - Google Patents
Methods and apparatus for extracting profiles from three-dimensional images
- Publication number
- US11893744B2 (application US17/316,417)
- Authority
- US
- United States
- Prior art keywords
- points
- axis
- determining
- representative
- bins
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2134—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on separation criteria, e.g. independent component analysis
- G06F18/21345—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on separation criteria, e.g. independent component analysis enforcing sparsity or involving a domain transformation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Definitions
- the techniques described herein relate generally to methods and apparatus for machine vision, including techniques for extracting profiles from three-dimensional images, and in particular to extracting profiles from three-dimensional point clouds at arbitrary poses.
- Machine vision systems can include robust imaging capabilities, including three-dimensional (3D) imaging devices.
- 3D sensors can image a scene to generate a set of 3D points that each include an (x, y, z) location within a 3D coordinate system (e.g., where the z axis of the coordinate system represents a distance from the 3D imaging device).
- 3D imaging devices can generate a 3D point cloud, which includes a set of 3D points captured during a 3D imaging process.
- the sheer number of 3D points in a 3D point cloud can be massive (e.g., compared to 2D data of a scene).
- 3D point clouds may include only pure 3D data points, and therefore may not include data indicative of relations between/among the 3D points or other information, such as surface normal information. It can be complicated to process 3D points with no data relating them to other points. Therefore, while 3D point clouds can provide a large amount of 3D data, performing machine vision tasks on 3D point cloud data can be complicated, time consuming, require significant processing resources, and/or the like.
- apparatus, systems, and methods are provided for improved machine vision techniques, and in particular for improved machine vision techniques that generate profiles from points in a 3D point cloud (e.g., of object surfaces).
- the techniques can generate the profiles at arbitrary poses.
- a plane and/or a region of interest, which can be user-specified, is used to generate each profile.
- the region of interest can be a 3D region of interest, such that 3D points within the region of interest can be collapsed into 2D points, binned, and used to generate the resulting profile.
- Some aspects relate to a computerized method for determining a two-dimensional (2D) profile of a portion of a three-dimensional (3D) point cloud.
- the method comprises receiving data indicative of a 3D point cloud comprising a plurality of 3D points, and determining a 3D region of interest in the 3D point cloud, wherein the 3D region of interest comprises a width along a first axis, a height along a second axis, and a depth along a third axis.
- the method comprises determining a set of 3D points of the plurality of 3D points that each comprises a 3D location within the 3D region of interest, representing the set of 3D points as a set of 2D points based on coordinate values of the first and second axes of the set of 3D points, and grouping the set of 2D points into a plurality of 2D bins arranged along the first axis, wherein each 2D bin comprises a bin width.
- the method comprises determining, for each of the plurality of 2D bins, a representative 2D position based on the associated set of 2D points, and connecting each of the representative 2D positions to neighboring representative 2D positions to generate the 2D profile.
- the method further includes creating a 3D region coordinate system comprising the first axis, the second axis, and the third axis, wherein an origin of the 3D region coordinate system is disposed at a middle of the width and depth of the 3D region, and the height of the 3D region starts at the origin.
- the method can further include mapping points from a coordinate system of the 3D point cloud to the 3D region coordinate system.
- representing the set of 3D points as the set of 2D points comprises representing the 3D points in a 2D plane comprising a first dimension equal to the width and a second dimension equal to the height, wherein the first dimension extends along the first axis and the second dimension extends along the second axis.
- Representing the set of 3D points as the set of 2D points can include setting each value of the third axis of the set of 3D points to zero.
- the plurality of 2D bins can be arranged side-by-side along the first axis within the first dimension of the 2D plane.
- determining a representative 2D position for each of the plurality of 2D bins comprises determining that the number of 2D points of one or more 2D bins of the plurality of 2D bins is less than a threshold, and setting the set of 2D points of the one or more 2D bins to an empty set.
- determining the representative 2D position for each of the plurality of 2D bins comprises determining an average of the set of 2D points of each bin.
- determining the representative 2D position for each of the plurality of 2D bins comprises selecting a 2D point of the associated set of 2D points with a maximum value of the second axis as the representative 2D position.
- determining the representative 2D position for each of the plurality of 2D bins comprises, for each 2D bin, grouping the set of 2D points into one or more clusters of 2D points with distances between values of the second axis of the 2D points of each cluster within a separation threshold, wherein distances between the values of the second axis of the 2D points of different clusters are greater than the separation threshold, removing any clusters with less than a threshold minimum number of 2D points to generate a remaining set of one or more clusters, determining a maximum cluster of the one or more remaining clusters comprising determining which of the one or more remaining clusters comprises a 2D point with a maximum coordinate along the second axis, and averaging the 2D points of the maximum cluster to determine the representative 2D position.
- determining the representative 2D position for each of the plurality of 2D bins comprises determining the representative 2D position only for 2D bins of the plurality of 2D bins with non-empty sets of 2D points.
- Some embodiments relate to a non-transitory computer-readable media comprising instructions that, when executed by one or more processors on a computing device, are operable to cause the one or more processors to execute the method of any of the techniques described herein.
- Some embodiments relate to a system comprising a memory storing instructions, and a processor configured to execute the instructions to perform the method of any of the techniques described herein.
- FIG. 1 is a diagram showing an exemplary machine vision system, according to some embodiments.
- FIG. 2 is a flowchart of an exemplary computerized method for determining a 2D profile of a portion of a 3D point cloud, according to some embodiments.
- FIG. 3 is a diagram showing an example of a 3D point cloud, according to some embodiments.
- FIG. 4 A is a diagram showing an example of a rectangular-shaped 3D region of interest, according to some embodiments.
- FIG. 4 B is a diagram showing the 3D point cloud from FIG. 3 overlaid with the 3D region of interest in FIG. 4 A , according to some embodiments.
- FIG. 5 is a diagram showing 3D points projected onto a 2D plane, according to some embodiments.
- FIG. 6 is a diagram showing the 2D plane of FIG. 5 with in-plane points quantized into bins, according to some embodiments.
- FIG. 7 is a diagram showing an exemplary set of representative 2D positions determined by averaging the values of the 2D points of each bin from FIG. 6 , according to some embodiments.
- FIG. 8 is a diagram showing an exemplary set of representative 2D positions determined by selecting the 2D point of each bin of FIG. 6 with a maximum Z value, according to some embodiments.
- FIG. 9 is a diagram showing an exemplary set of representative 2D positions determined by clustering the 2D points from FIG. 6 , according to some embodiments.
- FIG. 10 shows an example of profile results extracted from a series of cutting rectangles that are evenly distributed along the Y direction of a box region, according to some embodiments.
- FIG. 11 shows an example of profile results extracted from a series of cutting rectangles that are evenly distributed along the angular direction of a frustum, according to some embodiments.
- FIG. 12 is a diagram showing exemplary variant features derived from a profile, according to some embodiments.
- FIG. 13 is a diagram showing an example of a 3D line extracted using corner features that are detected from a sequence of extracted profiles, according to some embodiments.
- FIG. 14 is a diagram showing an example of a surface defect detected by monitoring the variation of a successive sequence of extracted profiles, according to some embodiments.
- 3D point clouds provide popular representations of object surfaces under inspection using 3D point positions.
- 3D point clouds often include hundreds of thousands or millions of (x, y, z) points. The inventors have therefore appreciated that directly interpreting such a massive number of 3D points in space can be quite time consuming and resource intensive.
- Because 3D point clouds include such massive numbers of 3D points and typically do not include information about structural or spatial relationships among the 3D points, trying to interpret a pure 3D point cloud can be infeasible for many machine vision applications, which may have limited time to perform such interpretations, limited hardware resources, and/or the like.
- conventional techniques typically first mesh the 3D points to generate surfaces along the 3D points, and then perform geometrical operations based on the meshed surfaces.
- meshing the 3D points can require performing complex operations.
- the resulting mesh surfaces can include noise artifacts (e.g., due to noisy 3D points that do not lie along the actual surface of the imaged objects).
- each profile is effectively a 1D signal that can be represented as an ordered sequence of 2D points in a 2D plane.
- a 3D shape, such as a 3D rectangular shape, that is dimensioned based on the 2D plane can specify how neighboring 3D points in the 3D point cloud are used to produce the profile.
- the 3D points within the 3D shape can be binned and processed as clusters of 3D points in order to smooth and reduce noise in the 3D points.
- the techniques can include projecting the 3D points near or within the 3D shape as 2D points on the 2D plane, and quantizing the projected 2D points into bins. Representative positions can be determined for each bin, and the representative positions of adjacent bins can be connected to generate polylines. Since the traditionally massive amount of data in a 3D point cloud can be reduced to one or two dimensions, the techniques described herein can significantly improve performance.
- the techniques can be used in various types of point cloud-based applications, including feature extraction, metrology measurements, defect inspection, vision guided robots, and/or the like.
- FIG. 1 shows an exemplary machine vision system 100 , according to some embodiments.
- the exemplary machine vision system 100 includes a camera 102 (or other imaging acquisition device) and a computer 104 . While only one camera 102 is shown in FIG. 1 , it should be appreciated that a plurality of cameras can be used in the machine vision system (e.g., where a point cloud is merged from that of multiple cameras).
- the computer 104 includes one or more processors and a human-machine interface in the form of a computer display and optionally one or more input devices (e.g., a keyboard, a mouse, a track ball, etc.).
- Camera 102 includes, among other components, a lens 106 and a camera sensor element (not illustrated).
- the lens 106 includes a field of view 108 , and the lens 106 focuses light from the field of view 108 onto the sensor element.
- the sensor element generates a digital image of the camera field of view 108 and provides that image to a processor that forms part of computer 104 .
- object 112 travels along a conveyor 110 into the field of view 108 of the camera 102 .
- the camera 102 can generate one or more digital images of the object 112 while it is in the field of view 108 for processing, as discussed further herein.
- the conveyor can contain a plurality of objects. These objects can pass, in turn, within the field of view 108 of the camera 102 , such as during an inspection process. As such, the camera 102 can acquire at least one image of each observed object 112 .
- the camera 102 is a three-dimensional (3D) imaging device.
- the camera 102 can be a 3D sensor that scans a scene line-by-line, such as the DS-line of laser profiler 3D displacement sensors available from Cognex Corp., the assignee of the present application.
- the 3D imaging device can generate a set of (x, y, z) points (e.g., where the z axis adds a third dimension, such as a distance from the 3D imaging device).
- the 3D imaging device can use various 3D image generation techniques, such as shape-from-shading, stereo imaging, time of flight techniques, projector-based techniques, and/or other 3D generation technologies.
- the machine vision system 100 includes a two-dimensional imaging device, such as a two-dimensional (2D) CCD or CMOS imaging array. In some embodiments, two-dimensional imaging devices generate a 2D array of brightness values.
- the machine vision system processes the 3D data from the camera 102 .
- the 3D data received from the camera 102 can include, for example, a point cloud and/or a range image.
- a point cloud can include a group of 3D points that are on or near the surface of a solid object.
- the points may be presented in terms of their coordinates in a rectilinear or other coordinate system.
- other information, such as a mesh or grid structure indicating which points are neighbors on the object's surface, may optionally also be present.
- information about surface features, including curvatures, surface normals, edges, and/or color and albedo information, either derived from sensor measurements or computed previously, may be included in the input point clouds.
- the 2D and/or 3D data may be obtained from a 2D and/or 3D sensor, from a CAD or other solid model, and/or by preprocessing range images, 2D images, and/or other images.
- the group of 3D points can be a portion of a 3D point cloud within user-specified regions of interest and/or include data specifying the region of interest in the 3D point cloud.
- Because a 3D point cloud can include so many points, it can be desirable to specify and/or define one or more regions of interest (e.g., to limit the space to which the techniques described herein are applied).
- Examples of computer 104 can include, but are not limited to, a single server computer, a series of server computers, a single personal computer, a series of personal computers, a mini computer, a mainframe computer, and/or a computing cloud.
- the various components of computer 104 can execute one or more operating systems, examples of which can include but are not limited to: Microsoft Windows Server™, Novell NetWare™, Red Hat Linux™, Unix, and/or a custom operating system, for example.
- the one or more processors of the computer 104 can be configured to process operations stored in memory connected to the one or more processors.
- the memory can include, but is not limited to: a hard disk drive; a flash drive; a tape drive; an optical drive; a RAID array; a random access memory (RAM); and a read-only memory (ROM).
- 3D vision systems capture images of scenes by generating 3D point clouds of the scene.
- When a 3D vision system views an object surface from a given direction, only the visible surfaces are captured in the 3D point cloud, since other surfaces (e.g., side and/or bottom surfaces) are often occluded.
- the techniques provide for estimating 1D manifolds, or profiles, of surfaces in 3D point clouds.
- a profile can represent, for example, one or more surface curves of the surface of an object in a region of interest of the point cloud.
- a 3D box is used to identify a set of 3D points in the 3D point cloud.
- the identified 3D points are mapped to 2D points along a 2D plane associated with the 3D box.
- the 2D points can be grouped into bins, and a representative point can be computed for each bin.
- Various techniques can be used to determine the representative points of the bins, including by averaging the 2D positions, identifying maximum 2D points, and/or averaging positions from qualified clusters.
- the representative points of adjacent bins can be connected to form polylines.
- each polyline can be made by connecting 2D vertices that increase monotonically along a direction perpendicular to the viewing direction.
- FIG. 2 is a flowchart of an exemplary computerized method 200 for determining a 2D profile of a portion of a 3D point cloud, according to some embodiments.
- the machine vision system (e.g., the machine vision system 100 of FIG. 1 ) receives data indicative of a 3D point cloud comprising a plurality of 3D points.
- the 3D point cloud can be a voxel grid, a 3D lattice, and/or the like.
- FIG. 3 is a diagram showing an example of a 3D point cloud 300 , according to some embodiments.
- the 3D point cloud 300 shows only a small number of 3D points 302 A, 302 B, through 302 N, collectively referred to as 3D points 302 .
- the 3D point cloud includes a point cloud coordinate system 304 with associated X, Y and Z axes.
- the machine vision system determines a 3D region of interest in the 3D point cloud.
- the 3D region can be any 3D shape, such as a 3D box, a sphere, etc.
- the techniques include specifying a cutting plane (e.g., including a projection/sampling direction, a viewing direction in the plane, and/or a region of interest to constrain neighboring points).
- the cutting plane can be specified in conjunction with a depth that can be used to determine the 3D region of interest.
- the 3D region can be specified with respect to a 3D region coordinate system.
- the 3D region of interest can have a width along a first axis of the coordinate system (e.g., along the X axis), a height along a second axis of the coordinate system (e.g., along the Z axis), and a depth along a third axis of the coordinate system (e.g., along the Y axis).
- the techniques can include constructing a rectilinear coordinate space whose X and Z axes correspond to the width and height of a rectangular region of interest, respectively, and whose Y axis corresponds to a depth of the rectangular region of interest.
- the region of interest can be specified based on the origin of the 3D region coordinate system. For example, continuing with the example of the rectangular region of interest, the midpoints of the width and depth of the 3D region can be disposed at the origin, and the height of the 3D region can start at the origin (e.g., such that the origin is located at a center of a short side of the 3D region).
- FIG. 4 A is a diagram showing an example of a rectangular-shaped 3D region of interest 400 , according to some embodiments.
- the region of interest 400 can be specified as a 2D rectangle 402 , which can be referred to as a cutting rectangle for the region of interest.
- the width 404 of the rectangle 402 is aligned on the X direction of the 3D region coordinate system, and the height 406 of the rectangle 402 is aligned on the Z direction of the coordinate system.
- a box 408 is constructed to define the 3D region of interest for identifying the neighboring 3D points from the 3D point cloud.
- the box 408 includes a front face 410 and back face 412 , both of which are parallel copies of the cutting rectangle 402 ; the faces are separated from each other by the thickness 414 along the normal direction of the rectangle 402 , and each is separated from the rectangle 402 by half of the thickness 414 .
- the 3D region coordinate system 416 is defined such that its origin is at the center of the bottom side of the rectangle 402 , and its X and Z axes are parallel to the width and height of the rectangle 402 , respectively, while the Y direction extends along the thickness 414 .
- FIG. 4 B shows the 3D point cloud 300 from FIG. 3 overlaid with the 3D region of interest 400 of FIG. 4 A , according to some embodiments.
- the 3D region of interest can be associated with its own region coordinate system, which may be different than the coordinate system of the 3D point cloud 304 .
- the techniques can map the 3D points from the 3D point cloud coordinate system to the region coordinate system of the 3D region of interest. Performing such a mapping yields each 3D point's projection in the plane where the region of interest lies (e.g., the rectangle 402 ) while excluding points that are outside of the region of interest.
- a rigid transform can be used to relate the 3D point cloud coordinate space to the 3D region coordinate space.
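- As an illustration of this mapping step, the following minimal Python/NumPy sketch applies a rigid transform (rotation plus translation) to express point cloud points in the region coordinate system; the function and parameter names are illustrative assumptions, not taken from the patent.
```python
import numpy as np

def map_to_region_coords(points_cloud: np.ndarray,
                         rotation: np.ndarray,
                         translation: np.ndarray) -> np.ndarray:
    """Express 3D points, given in the point cloud coordinate system,
    in the 3D region coordinate system via a rigid transform.

    points_cloud: (N, 3) array of (x, y, z) points in cloud coordinates.
    rotation:     (3, 3) rotation matrix from cloud to region coordinates.
    translation:  (3,) translation from cloud to region coordinates.
    """
    # Equivalent to applying R * pointInCloud + t to every point at once.
    return points_cloud @ rotation.T + translation

# Example: a region frame that is simply shifted from the cloud frame.
points = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
R = np.eye(3)                     # no rotation in this toy example
t = np.array([-1.0, 0.0, -2.0])   # move the region origin
print(map_to_region_coords(points, R, t))
```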
- the machine vision system determines the set of 3D points that are located within the 3D region of interest. According to some embodiments, the machine vision system determines the 3D points within the 3D region of interest based on one or more aspects of the 3D region of interest. For example, referring to FIG. 4 B , the machine vision system can determine the 3D points of the 3D point cloud shown within the rectangular 3D region of interest 400 by determining the points within the thickness 414 (e.g., which can be user-specified). Therefore, in this example, a 3D point within the region of interest is a point whose distance from the plane of the rectangle 402 is less than half of the thickness 414 .
- the machine vision system represents the set of 3D points as a set of 2D points based on coordinate values of the designated first and second axes of the set of 3D points (e.g., for a coordinate form represented by (X,Y,Z), the designated first and second axes can be X and Z, Y and Z, and/or the like).
- each neighboring point is projected to a 2D aspect of the 3D region of interest.
- the 3D points can be represented in a 2D plane.
- the 3D points can be represented as 2D points by representing the 3D points using only two of the three coordinate axes values and/or by setting each value of an axis of the set of 3D points (e.g., the third axis) to zero.
- the 3D points can be projected to the plane 402 by setting the Y component of each 3D point to zero to obtain in-plane points as shown in FIG. 5 , according to some embodiments.
- the first dimension of the plane 402 is maintained equal to the width 404 , and the second dimension of the plane 402 is maintained equal to the height 406 , such that the plane 402 maintains the same dimensions as originally configured and shown in FIGS. 4 A- 4 B .
- the dimensions of the plane 402 can also be maintained with respect to the 3D region coordinate system. Accordingly, as shown in FIG. 5 , the width 404 extends along the X axis of the 3D region coordinate system 416 , and the height 406 extends along the Z axis of the 3D region coordinate system 416 .
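- A minimal sketch of this projection step, assuming the points are already expressed in the 3D region coordinate system: zeroing the Y (depth) component amounts to keeping only the X and Z coordinates as in-plane 2D points.
```python
import numpy as np

def project_to_cutting_plane(points_roi) -> np.ndarray:
    """Collapse in-region 3D points onto the cutting plane by discarding
    the Y (depth) component, keeping (x, z) as in-plane 2D points."""
    points_roi = np.asarray(points_roi, dtype=float)
    return points_roi[:, [0, 2]]   # columns: X (width), Z (height)
```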
- the machine vision system groups the set of 2D points into a plurality of 2D bins arranged along the first axis.
- Each of the 2D bins has a bin width, and in some embodiments each of the 2D bins has the same bin width.
- Each of the plurality of 2D bins can be arranged side-by-side along the first axis (e.g., along the X axis) within the first dimension of the 2D plane.
- an occupied bin may contain one or more in-plane points. Therefore, in some embodiments, the in-plane position of each 2D point can be assigned to the corresponding bin based on the maximum integer not larger than the ratio pointInROI.x()/BinSize.
- FIG. 6 is a diagram showing the plane 402 with in-plane points quantized into bins, according to some embodiments.
- the plane 402 includes bins 602 A, 602 B through 602 N, collectively referred to as bins 602 , which share a common bin size 604 .
- FIGS. 4 A- 4 B also show the bin size 604 to conceptually illustrate a bin in 3D space (and the corresponding 3D points that are projected to 2D and mapped to the bin). As shown, some bins 602 may not include any points, such as bin 602 B.
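- The quantization step can be sketched as follows, where each in-plane point is assigned to the bin indexed by floor(x / binSize) as described above; the helper name and data layout are assumptions for illustration.
```python
import numpy as np
from collections import defaultdict

def bin_in_plane_points(points_2d, bin_size):
    """Group in-plane (x, z) points into bins along the X axis, where a
    point's bin index is the maximum integer not larger than x / bin_size."""
    bins = defaultdict(list)
    for x, z in points_2d:
        bins[int(np.floor(x / bin_size))].append((x, z))
    return bins   # dict: bin index -> list of (x, z); empty bins absent

# Example: points landing in bins 0, 0, and 2 with a bin size of 1.0.
print(bin_in_plane_points([(0.2, 5.0), (0.9, 5.1), (2.4, 4.8)], 1.0))
```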
- the machine vision system determines a representative 2D position (e.g., (x,z) position) for each of the 2D bins based on the 2D points in each bin.
- the techniques can use a threshold to determine whether to include the 2D points of a particular bin (e.g., which can further influence whether the bin is used to determine a portion of the profile). For example, if the machine vision system determines that the number of 2D points of one or more 2D bins is less than a threshold, then the machine vision system can zero-out the 2D points of those bins so that the bins are not associated with any 2D points. For example, referring to FIG. 6 , if the minimum number of points is set to two (2), bin 602 N can be filtered out from subsequent processing since it contains only one 2D point.
- the representative 2D position of each bin can be determined based on an average, mean, standard deviation, and/or the like, of the 2D points of the bin.
- the representative 2D position can be determined based on one or more maximum point values, minimum point values, and/or the like, of the 2D points of the bin.
- the representative 2D position can be determined by clustering the points of a 2D bin and determining the representative 2D position based on the clusters.
- FIGS. 7 - 9 are diagrams that illustrate different techniques that can be used to determine the representative 2D positions, according to some embodiments.
- the representative 2D point can be determined by averaging the coordinate values of the 2D points in a bin.
- FIG. 7 is a diagram showing an exemplary set of representative 2D positions 702 determined by averaging the values of the 2D points of FIG. 6 , according to some embodiments.
- the machine vision system can be configured to run an average mode that averages the X and Z values of the 2D points in each of bins 0 through 9 to determine the representative 2D points 702 , where each representative point has an X value equal to the average of the X values and a Z value equal to the average of the Z values.
- some embodiments can select the 2D point with a maximum value of an axis (e.g., the second axis) as the representative 2D position.
- FIG. 8 is a diagram showing an exemplary set of representative 2D positions 802 determined by selecting the 2D point of each bin of FIG. 6 with a maximum Z value, according to some embodiments.
- the representative 2D positions 802 of each bin are set equal to the X and Z values of the 2D point with the maximum Z value in each bin.
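- The average and maximum modes of FIGS. 7 and 8 can be sketched as below; the function names and the min_points filtering parameter are illustrative assumptions, with under-populated bins treated as empty per the thresholding described for FIG. 6 .
```python
import numpy as np

def representative_average(bin_points, min_points=1):
    """Average mode: the representative position is the mean X and mean Z
    of the bin's 2D points."""
    if len(bin_points) < min_points:
        return None   # treat under-populated bins as empty
    pts = np.asarray(bin_points, dtype=float)
    return tuple(pts.mean(axis=0))

def representative_max_z(bin_points, min_points=1):
    """Maximum mode: the representative position is the bin's 2D point
    with the largest Z value."""
    if len(bin_points) < min_points:
        return None
    pts = np.asarray(bin_points, dtype=float)
    return tuple(pts[np.argmax(pts[:, 1])])
```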
- some embodiments can cluster the 2D points of each bin to determine the representative 2D position.
- the machine vision system can group the set of 2D points into one or more clusters of 2D points with distances between values of an axis of the 2D points (e.g., the Z axis) of each cluster within a separation threshold, while distances between the values of the 2D points of different clusters are greater than the separation threshold.
- the machine vision system can remove any clusters with less than a threshold minimum number of 2D points to generate a remaining set of one or more clusters.
- the machine vision system can determine a maximum cluster of the one or more remaining clusters by determining which of the remaining clusters has the 2D point with a maximum coordinate value (e.g., along the second axis).
- the machine vision system can average the 2D points of that maximum cluster to determine the representative 2D position.
- FIG. 9 is a diagram showing an exemplary set of representative 2D positions 902 determined by clustering the 2D points, according to some embodiments.
- the separation threshold can be set based on the bin size.
- the 2D points of a bin are grouped into clusters based on their Z components such that: 1) different clusters have their Z-distances larger than the separation threshold; and 2) for each cluster the Z-distance between any pair of Z-adjacent points (after Z-based sorting) is not larger than the separation threshold.
- the separation threshold is determined as the maximum of the thickness of the 3D region of interest and a connectivity distance used when generating the profile line (e.g., discussed further in conjunction with step 214 ).
- a cluster can be considered a qualified cluster if it contains at least a threshold minimum number of points in the bin.
- the machine vision system can select the qualified cluster with the largest Z component.
- the machine vision system can determine the average of the points of the selected qualified cluster as the representative point of the bin.
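- A hedged Python reading of this clustering mode, under the rules above: sort a bin's points by Z, split wherever Z-adjacent points are farther apart than the separation threshold, drop unqualified clusters, pick the qualified cluster containing the maximum Z, and average its points. Names are illustrative, not the patented implementation itself.
```python
import numpy as np

def representative_clustered(bin_points, separation, min_cluster_points=1):
    """Cluster a bin's (x, z) points by Z and return the average of the
    qualified cluster that contains the maximum-Z point, or None."""
    if not bin_points:
        return None
    pts = sorted(bin_points, key=lambda p: p[1])   # Z-based sorting
    clusters, current = [], [pts[0]]
    for p in pts[1:]:
        # Start a new cluster when Z-adjacent points exceed the threshold.
        if p[1] - current[-1][1] > separation:
            clusters.append(current)
            current = [p]
        else:
            current.append(p)
    clusters.append(current)
    # Keep only qualified clusters (enough points), then take the cluster
    # whose top point has the largest Z value (clusters are Z-sorted).
    qualified = [c for c in clusters if len(c) >= min_cluster_points]
    if not qualified:
        return None
    max_cluster = max(qualified, key=lambda c: c[-1][1])
    return tuple(np.asarray(max_cluster, dtype=float).mean(axis=0))
```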
- the rectangular box examples are provided for illustrative purposes but are not intended to limit the techniques described herein.
- Other embodiments may modify one or more aspects of these examples, such as by using different shapes, different region coordinate systems, and/or different dispositions of the 3D region with respect to the region coordinate system.
- the techniques described herein can therefore be modified accordingly.
- the maximum mode and clustering mode can extract minimum Z values/profiles instead of maximum Z values/profiles.
- the machine vision system connects each of the representative 2D positions to neighboring representative 2D positions to generate the 2D profile.
- the techniques can connect the adjacent representative 2D positions of occupied bins disposed along an axis (e.g., the X axis) to form one or more polylines.
- Two occupied bins can be considered adjacent if they are next to each other and/or there exist only unoccupied bins between them.
- Two adjacent bins can be considered to belong to the same polyline if the distance along the first axis (e.g., the X axis) between their representative 2D positions is not greater than a specified connectivity distance threshold.
- the connectivity distance threshold can be set larger than twice the bin size (e.g., so that bin 3 and bin 5 are connected in each of the examples shown in FIGS. 7 - 9 ).
- some bins may not be occupied by any 2D points and/or the 2D points in the bin may be removed and/or ignored (e.g., if there are not a sufficient number of 2D points in the bin). Therefore, in some embodiments there can be empty bins, or gaps, between occupied bins.
- the machine vision system can be configured to connect occupied bins across gaps. For example, the machine vision system connects bins 3 and 5 through empty bin 4 as shown in FIGS. 7 - 9 . In some embodiments, the machine vision system can use a threshold gap distance to determine whether to connect occupied bins through empty bins.
- the threshold can be two empty bins, three empty bins, five empty bins, etc. If the machine vision system determines the gap is above the threshold gap distance, then the machine vision system can be configured to not connect the occupied bins (e.g., and to create two separate 2D profiles), whereas if the gap is below the threshold gap distance, then the machine vision system can be configured to connect the occupied bins (e.g., to continue the 2D profile across the gap).
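- The connection step can be sketched as follows (names are illustrative): occupied bins are walked in X order, and a new polyline is started whenever the X distance between adjacent representative positions exceeds the connectivity distance threshold, so small gaps across empty bins are bridged while wide gaps split the profile into separate polylines.
```python
def connect_representatives(reps_by_bin, connectivity_distance):
    """Connect representative (x, z) positions of occupied bins into one
    or more polylines (2D profiles).

    reps_by_bin: dict mapping bin index -> (x, z); filtered and empty
    bins are simply absent from the dict.
    """
    polylines, current = [], []
    for index in sorted(reps_by_bin):
        rep = reps_by_bin[index]
        # Too wide a gap along X: close this profile and start another.
        if current and rep[0] - current[-1][0] > connectivity_distance:
            polylines.append(current)
            current = []
        current.append(rep)
    if current:
        polylines.append(current)
    return polylines
```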
- the techniques can be used to generate profiles along objects. For example, for a box, the techniques can generate evenly distributed cutting rectangles along the box's Y direction, with each rectangle used to generate an associated profile being a parallel copy of the box's front XZ face.
- FIG. 10 shows an example of profile results 1000 extracted from a series of cutting rectangles that are evenly distributed along the Y direction of a box region, according to some embodiments. As shown, for each cutting rectangle, an associated profile 1002 A through 1002 N is generated along the Y direction.
- the techniques can generate evenly distributed cutting planes or rectangles along an object's angular direction, such that the profiles are arranged radially from the center of the object (e.g., such that the profile regions are arranged in a cylindrical fashion).
- each rectangle can be parallel to the axis of the object, with its interior Y side aligned with the axis and its outer Y side on the surface.
- the techniques can be used to generate profiles using a set of arbitrary rectangles.
- FIG. 11 shows an example of profile results 1100 extracted from a series of cutting rectangles that are evenly distributed along the angular direction of a frustum, according to some embodiments. As shown, for each cutting rectangle, an associated profile 1102 A through 1102 N is generated along the angular direction.
- the profiles described herein can be used for various machine vision tools. Each profile can be used as a representation that provides a summarized view of the surface points of an object, such as the surface points that are on or near a cutting plane.
- the profiles can be used to measure, for example, features, dimensions and/or other aspects of a captured object, such as corner points, the perimeter of the object, an area of the object, and/or the like.
- a series of profiles can be used to determine information about the object, such as 3D features (e.g., 3D perimeter lines) of the object, the volume of all and/or a portion of the object, defects of the object, and/or the like.
- As shown in the diagram 1200 of FIG. 12 , variant features can be derived from a profile 1202 , including in this example the corner positions 1206 A through 1206 N (e.g., determined by large slope changes on both sides), extreme positions 1204 A and 1204 B (e.g., with maximum or minimum X or Z positions), linear segments 1208 A through 1208 N, and circular sections 1210 .
- Such derived features can be useful for 3D vision applications, such as inspection applications.
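- As one hedged illustration of deriving such features, the sketch below flags profile vertices where the polyline direction changes sharply, a simple reading of corner positions "determined by large slope changes on both sides"; the angle threshold and names are assumptions for illustration.
```python
import numpy as np

def corner_candidates(polyline, angle_threshold=np.pi / 6):
    """Return indices of profile vertices where the polyline direction
    changes by more than angle_threshold (radians)."""
    pts = np.asarray(polyline, dtype=float)
    corners = []
    for i in range(1, len(pts) - 1):
        dx1, dz1 = pts[i] - pts[i - 1]      # incoming segment
        dx2, dz2 = pts[i + 1] - pts[i]      # outgoing segment
        diff = np.arctan2(dz2, dx2) - np.arctan2(dz1, dx1)
        # Wrap the angle difference into (-pi, pi] before thresholding.
        diff = np.arctan2(np.sin(diff), np.cos(diff))
        if abs(diff) > angle_threshold:
            corners.append(i)
    return corners
```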
- FIG. 13 is a diagram 1300 of an example of a 3D line 1306 that is extracted by line fitting on the corner features 1304 A through 1304 N that are detected from a sequence of associated extracted profiles 1302 A through 1302 N.
- FIG. 14 is a diagram showing an application example of detecting surface defects 1404 by monitoring the variation of a successive sequence of profiles 1402 A through 1402 N.
- the techniques described herein may be embodied in computer-executable instructions implemented as software, including as application software, system software, firmware, middleware, embedded code, or any other suitable type of computer code.
- Such computer-executable instructions may be written using any of a number of suitable programming languages and/or programming or scripting tools, and also may be compiled as executable machine language code or intermediate code that is executed on a framework or virtual machine.
- these computer-executable instructions may be implemented in any suitable manner, including as a number of functional facilities, each providing one or more operations to complete execution of algorithms operating according to these techniques.
- a “functional facility,” however instantiated, is a structural component of a computer system that, when integrated with and executed by one or more computers, causes the one or more computers to perform a specific operational role.
- a functional facility may be a portion of or an entire software element.
- a functional facility may be implemented as a function of a process, or as a discrete process, or as any other suitable unit of processing.
- each functional facility may be implemented in its own way; all need not be implemented the same way.
- these functional facilities may be executed in parallel and/or serially, as appropriate, and may pass information between one another using a shared memory on the computer(s) on which they are executing, using a message passing protocol, or in any other suitable way.
- functional facilities include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
- functionality of the functional facilities may be combined or distributed as desired in the systems in which they operate.
- one or more functional facilities carrying out techniques herein may together form a complete software package.
- These functional facilities may, in alternative embodiments, be adapted to interact with other, unrelated functional facilities and/or processes, to implement a software program application.
- Some exemplary functional facilities have been described herein for carrying out one or more tasks. It should be appreciated, though, that the functional facilities and division of tasks described is merely illustrative of the type of functional facilities that may implement the exemplary techniques described herein, and that embodiments are not limited to being implemented in any specific number, division, or type of functional facilities. In some implementations, all functionality may be implemented in a single functional facility. It should also be appreciated that, in some implementations, some of the functional facilities described herein may be implemented together with or separately from others (i.e., as a single unit or separate units), or some of these functional facilities may not be implemented.
- Computer-executable instructions implementing the techniques described herein may, in some embodiments, be encoded on one or more computer-readable media to provide functionality to the media.
- Computer-readable media include magnetic media such as a hard disk drive, optical media such as a Compact Disk (CD) or a Digital Versatile Disk (DVD), a persistent or non-persistent solid-state memory (e.g., Flash memory, Magnetic RAM, etc.), or any other suitable storage media.
- Such a computer-readable medium may be implemented in any suitable manner.
- As used herein, "computer-readable media" (also called "computer-readable storage media") refers to tangible storage media. Tangible storage media are non-transitory and have at least one physical, structural component.
- At least one physical, structural component has at least one physical property that may be altered in some way during a process of creating the medium with embedded information, a process of recording information thereon, or any other process of encoding the medium with information. For example, a magnetization state of a portion of a physical structure of a computer-readable medium may be altered during a recording process.
- some techniques described above comprise acts of storing information (e.g., data and/or instructions) in certain ways for use by these techniques.
- the information may be encoded on a computer-readable storage media.
- advantageous structures may be used to impart a physical organization of the information when encoded on the storage medium. These advantageous structures may then provide functionality to the storage medium by affecting operations of one or more processors interacting with the information; for example, by increasing the efficiency of computer operations performed by the processor(s).
- these instructions may be executed on one or more suitable computing device(s) operating in any suitable computer system, or one or more computing devices (or one or more processors of one or more computing devices) may be programmed to execute the computer-executable instructions.
- a computing device or processor may be programmed to execute instructions when the instructions are stored in a manner accessible to the computing device or processor, such as in a data store (e.g., an on-chip cache or instruction register, a computer-readable storage medium accessible via a bus, a computer-readable storage medium accessible via one or more networks and accessible by the device/processor, etc.).
- Functional facilities comprising these computer-executable instructions may be integrated with and direct the operation of a single multi-purpose programmable digital computing device, a coordinated system of two or more multi-purpose computing devices sharing processing power and jointly carrying out the techniques described herein, a single computing device or coordinated system of computing devices (co-located or geographically distributed) dedicated to executing the techniques described herein, one or more Field-Programmable Gate Arrays (FPGAs) for carrying out the techniques described herein, or any other suitable system.
- a computing device may comprise at least one processor, a network adapter, and computer-readable storage media.
- a computing device may be, for example, a desktop or laptop personal computer, a personal digital assistant (PDA), a smart mobile phone, a server, or any other suitable computing device.
- a network adapter may be any suitable hardware and/or software to enable the computing device to communicate wired and/or wirelessly with any other suitable computing device over any suitable computing network.
- the computing network may include wireless access points, switches, routers, gateways, and/or other networking equipment as well as any suitable wired and/or wireless communication medium or media for exchanging data between two or more computers, including the Internet.
- Computer-readable media may be adapted to store data to be processed and/or instructions to be executed by processor. The processor enables processing of data and execution of instructions. The data and instructions may be stored on the computer-readable storage media.
- a computing device may additionally have one or more components and peripherals, including input and output devices. These devices can be used, among other things, to present a user interface. Examples of output devices that can be used to provide a user interface include printers or display screens for visual presentation of output and speakers or other sound generating devices for audible presentation of output. Examples of input devices that can be used for a user interface include keyboards, and pointing devices, such as mice, touch pads, and digitizing tablets. As another example, a computing device may receive input information through speech recognition or in other audible format.
- Embodiments have been described where the techniques are implemented in circuitry and/or computer-executable instructions. It should be appreciated that some embodiments may be in the form of a method, of which at least one example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
- "Exemplary" is used herein to mean serving as an example, instance, or illustration. Any embodiment, implementation, process, feature, etc. described herein as exemplary should therefore be understood to be an illustrative example and should not be understood to be a preferred or advantageous example unless otherwise indicated.
- a computerized method for determining a two-dimensional (2D) profile of a portion of a three-dimensional (3D) point cloud comprising:
- creating a 3D region coordinate system comprising the first axis, the second axis, and the third axis, wherein an origin of the 3D region coordinate system is disposed at a middle of the width and depth of the 3D region, and the height of the 3D region starts at the origin.
- representing the set of 3D points as the set of 2D points comprises representing the 3D points in a 2D plane comprising a first dimension equal to the width and a second dimension equal to the height, wherein the first dimension extends along the first axis and the second dimension extends along the second axis.
- representing the set of 3D points as the set of 2D points comprises setting each value of the third axis of the set of 3D points to zero.
- the plurality of 2D bins are arranged side-by-side along the first axis within the first dimension of the 2D plane.
Abstract
Description
pointInROI = transformProfileFromPointCloud * pointInCloud   (Equation 1)
- where:
- * indicates the compose operator;
- pointInCloud is a 3D point in the point cloud coordinate system; and
- pointInROI is a 3D point within the rectangular 3D region, and meets the following three conditions shown in Equations 2-4:
−rectSizeX*0.5 <= pointInROI.x <= rectSizeX*0.5   (Equation 2)
0 <= pointInROI.z <= rectSizeY   (Equation 3)
−Thickness*0.5 <= pointInROI.y <= Thickness*0.5   (Equation 4)
- where:
- rectSizeX is the width of the rectangle (e.g., width 404 in FIG. 4 A );
- rectSizeY is the height of the rectangle (e.g., height 406 in FIG. 4 A ); and
- Thickness is the depth of the rectangle (e.g., thickness 414 in FIG. 4 A ).
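- Expressed as a short Python check for illustration (variable names mirror Equations 2-4, with the point assumed to already be in the region coordinate system):
```python
def in_region_of_interest(point, rect_size_x, rect_size_y, thickness):
    """Apply Equations 2-4 to a point in the 3D region coordinate system:
    X within half the width on either side, Z within [0, height], and
    Y within half the thickness on either side."""
    x, y, z = point
    return (-rect_size_x * 0.5 <= x <= rect_size_x * 0.5
            and 0.0 <= z <= rect_size_y
            and -thickness * 0.5 <= y <= thickness * 0.5)
```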
quantizedPointInROI = floor(pointInROI.x() / BinSize)
- where:
- pointInROI is a 3D point within the rectangular region; and
- BinSize is the bin size.
1. A computerized method for determining a two-dimensional (2D) profile of a portion of a three-dimensional (3D) point cloud, the method comprising:
- receiving data indicative of a 3D point cloud comprising a plurality of 3D points;
- determining a 3D region of interest in the 3D point cloud, wherein the 3D region of interest comprises a width along a first axis, a height along a second axis, and a depth along a third axis;
- determining a set of 3D points of the plurality of 3D points that each comprises a 3D location within the 3D region of interest;
- representing the set of 3D points as a set of 2D points based on coordinate values of the first and second axes of the set of 3D points;
- grouping the set of 2D points into a plurality of 2D bins arranged along the first axis, wherein each 2D bin comprises a bin width;
- determining, for each of the plurality of 2D bins, a representative 2D position based on the associated set of 2D points; and
- connecting each of the representative 2D positions to neighboring representative 2D positions to generate the 2D profile.
2. The method of 1, further comprising:
- creating a 3D region coordinate system comprising the first axis, the second axis, and the third axis, wherein an origin of the 3D region coordinate system is disposed at a middle of the width and depth of the 3D region, and the height of the 3D region starts at the origin.
3. The method of 2, further comprising mapping points from a coordinate system of the 3D point cloud to the 3D region coordinate system.
4. The method of 1-3, wherein representing the set of 3D points as the set of 2D points comprises representing the 3D points in a 2D plane comprising a first dimension equal to the width and a second dimension equal to the height, wherein the first dimension extends along the first axis and the second dimension extends along the second axis.
5. The method of 4, wherein representing the set of 3D points as the set of 2D points comprises setting each value of the third axis of the set of 3D points to zero.
6. The method of 4, wherein the plurality of 2D bins are arranged side-by-side along the first axis within the first dimension of the 2D plane.
7. The method of 1-6, wherein determining a representative 2D position for each of the plurality of 2D bins comprises:
-
- determining that the number of 2D points of one or more 2D bins of the plurality of 2D bins is less than a threshold; and
- setting the set of 2D points of the one or more 2D bins to an empty set.
8. The method of 1-7, wherein determining the representative 2D position for each of the plurality of 2D bins comprises determining an average of the set of 2D points of each bin.
9. The method of 1-8, wherein determining the representative 2D position for each of the plurality of 2D bins comprises selecting a 2D point of the associated set of 2D points with a maximum value of the second axis as the representative 2D position.
10. The method of 1-9, wherein determining the representative 2D position for each of the plurality of 2D bins comprises, for each 2D bin:
- grouping the set of 2D points into one or more clusters of 2D points with distances between values of the second axis of the 2D points of each cluster within a separation threshold, wherein distances between the values of the second axis of the 2D points of different clusters are greater than the separation threshold;
- removing any clusters with less than a threshold minimum number of 2D points to generate a remaining set of one or more clusters;
- determining a maximum cluster of the one or more remaining clusters comprising determining which of the one or more remaining clusters comprises a 2D point with a maximum coordinate along the second axis; and
- averaging the 2D points of the maximum cluster to determine the representative 2D position.
11. The method of 1-10, wherein determining the representative 2D position for each of the plurality of 2D bins comprises determining the representative 2D position only for 2D bins of the plurality of 2D bins with non-empty sets of 2D points.
12. A non-transitory computer-readable media comprising instructions that, when executed by one or more processors on a computing device, are operable to cause the one or more processors to determine a two-dimensional (2D) profile of a portion of a three-dimensional (3D) point cloud, comprising:
- receiving data indicative of a 3D point cloud comprising a plurality of 3D points;
- determining a 3D region of interest in the 3D point cloud, wherein the 3D region of interest comprises a width along a first axis, a height along a second axis, and a depth along a third axis;
- determining a set of 3D points of the plurality of 3D points that each comprises a 3D location within the 3D region of interest;
- representing the set of 3D points as a set of 2D points based on coordinate values of the first and second axes of the set of 3D points;
- grouping the set of 2D points into a plurality of 2D bins arranged along the first axis, wherein each 2D bin comprises a bin width;
- determining, for each of the plurality of 2D bins, a representative 2D position based on the associated set of 2D points; and
- connecting each of the representative 2D positions to neighboring representative 2D positions to generate the 2D profile.
13. The non-transitory computer-readable media of 12, wherein representing the set of 3D points as the set of 2D points comprises representing the 3D points in a 2D plane comprising a first dimension equal to the width and a second dimension equal to the height, wherein the first dimension extends along the first axis and the second dimension extends along the second axis.
14. The non-transitory computer-readable media of 12-13, wherein determining the representative 2D position for each of the plurality of 2D bins comprises determining an average of the set of 2D points of each bin.
15. The non-transitory computer-readable media of 12-14, wherein determining the representative 2D position for each of the plurality of 2D bins comprises selecting a 2D point of the associated set of 2D points with a maximum value of the second axis as the representative 2D position.
16. The non-transitory computer-readable media of 12-15, wherein determining the representative 2D position for each of the plurality of 2D bins comprises, for each 2D bin:
- grouping the set of 2D points into one or more clusters of 2D points with distances between values of the second axis of the 2D points of each cluster within a separation threshold, wherein distances between the values of the second axis of the 2D points of different clusters are greater than the separation threshold;
- removing any clusters with less than a threshold minimum number of 2D points to generate a remaining set of one or more clusters;
- determining a maximum cluster of the one or more remaining clusters comprising determining which of the one or more remaining clusters comprises a 2D point with a maximum coordinate along the second axis; and
- averaging the 2D points of the maximum cluster to determine the representative 2D position.
17. A system comprising a memory storing instructions, and at least one processor configured to execute the instructions to determine a two-dimensional (2D) profile of a portion of a three-dimensional (3D) point cloud, comprising: - receiving data indicative of a 3D point cloud comprising a plurality of 3D points;
- determining a 3D region of interest in the 3D point cloud, wherein the 3D region of interest comprises a width along a first axis, a height along a second axis, and a depth along a third axis;
- determining a set of 3D points of the plurality of 3D points that each comprises a 3D location within the 3D region of interest;
- representing the set of 3D points as a set of 2D points based on coordinate values of the first and second axes of the set of 3D points;
- grouping the set of 2D points into a plurality of 2D bins arranged along the first axis, wherein each 2D bin comprises a bin width;
- determining, for each of the plurality of 2D bins, a representative 2D position based on the associated set of 2D points; and
- connecting each of the representative 2D positions to neighboring representative 2D positions to generate the 2D profile.
18. The system of claim 17, wherein representing the set of 3D points as the set of 2D points comprises representing the 3D points in a 2D plane comprising a first dimension equal to the width and a second dimension equal to the height, wherein the first dimension extends along the first axis and the second dimension extends along the second axis.
19. The system of any of claims 17-18, wherein determining the representative 2D position for each of the plurality of 2D bins comprises determining an average of the set of 2D points of each bin.
20. The system of any of claims 17-19, wherein determining the representative 2D position for each of the plurality of 2D bins comprises, for each 2D bin: - grouping the set of 2D points into one or more clusters of 2D points with distances between values of the second axis of the 2D points of each cluster within a separation threshold, wherein distances between the values of the second axis of the 2D points of different clusters are greater than the separation threshold;
- removing any clusters with less than a threshold minimum number of 2D points to generate a remaining set of one or more clusters;
- determining a maximum cluster of the one or more remaining clusters comprising determining which of the one or more remaining clusters comprises a 2D point with a maximum coordinate along the second axis; and
- averaging the 2D points of the maximum cluster to determine the representative 2D position.
Claims (23)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/316,417 (US11893744B2) | 2020-05-11 | 2021-05-10 | Methods and apparatus for extracting profiles from three-dimensional images |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202063023179P | 2020-05-11 | 2020-05-11 | |
US17/316,417 (US11893744B2) | 2020-05-11 | 2021-05-10 | Methods and apparatus for extracting profiles from three-dimensional images |
Publications (2)
Publication Number | Publication Date |
---|---|
US20210350615A1 (en) | 2021-11-11 |
US11893744B2 (en) | 2024-02-06 |
Family
ID=76217920
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/316,417 (US11893744B2, Active, expires 2041-06-25) | Methods and apparatus for extracting profiles from three-dimensional images | 2020-05-11 | 2021-05-10 |
Country Status (6)
Country | Link |
---|---|
US (1) | US11893744B2 (en) |
JP (1) | JP2023525534A (en) |
KR (1) | KR20230050268A (en) |
CN (1) | CN116830155A (en) |
DE (1) | DE112021002696T5 (en) |
WO (1) | WO2021231254A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116704125B (en) * | 2023-06-02 | 2024-05-17 | 深圳市宗匠科技有限公司 | Mapping method, device, chip and module equipment based on three-dimensional point cloud |
Worldwide Applications
2021
- 2021-05-10 DE DE112021002696.8T patent/DE112021002696T5/en active Pending
- 2021-05-10 WO PCT/US2021/031503 patent/WO2021231254A1/en active Application Filing
- 2021-05-10 US US17/316,417 patent/US11893744B2/en active Active
- 2021-05-10 JP JP2022568617A patent/JP2023525534A/en active Pending
- 2021-05-10 KR KR1020227043349A patent/KR20230050268A/en unknown
- 2021-05-10 CN CN202180061446.3A patent/CN116830155A/en active Pending
Patent Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5114236A (en) * | 1989-08-04 | 1992-05-19 | Canon Kabushiki Kaisha | Position detection method and apparatus |
US6992773B1 (en) * | 1999-08-30 | 2006-01-31 | Advanced Micro Devices, Inc. | Dual-differential interferometry for silicon device damage detection |
US20040153671A1 (en) * | 2002-07-29 | 2004-08-05 | Schuyler Marc P. | Automated physical access control systems and methods |
US20090232388A1 (en) * | 2008-03-12 | 2009-09-17 | Harris Corporation | Registration of 3d point cloud data by creation of filtered density images |
US20110310088A1 (en) * | 2010-06-17 | 2011-12-22 | Microsoft Corporation | Personalized navigation through virtual 3d environments |
US20130108143A1 (en) * | 2011-11-01 | 2013-05-02 | Hon Hai Precision Industry Co., Ltd. | Computing device and method for analyzing profile tolerances of products |
US20130230206A1 (en) * | 2012-03-01 | 2013-09-05 | Exelis, Inc. | Foliage penetration based on 4d lidar datasets |
US20140334670A1 (en) * | 2012-06-14 | 2014-11-13 | Softkinetic Software | Three-Dimensional Object Modelling Fitting & Tracking |
US20150199839A1 (en) * | 2012-08-02 | 2015-07-16 | Earthmine, Inc. | Three-Dimentional Plane Panorama Creation Through Hough-Based Line Detection |
US9536339B1 (en) * | 2013-06-13 | 2017-01-03 | Amazon Technologies, Inc. | Processing unordered point cloud |
US20160234475A1 (en) * | 2013-09-17 | 2016-08-11 | Société Des Arts Technologiques | Method, system and apparatus for capture-based immersive telepresence in virtual environment |
US20200191943A1 (en) * | 2015-07-17 | 2020-06-18 | Origin Wireless, Inc. | Method, apparatus, and system for wireless object tracking |
US10699444B2 (en) * | 2017-11-22 | 2020-06-30 | Apple Inc | Point cloud occupancy map compression |
US20190163968A1 (en) * | 2017-11-30 | 2019-05-30 | National Chung-Shan Institute Of Science And Technology | Method for performing pedestrian detection with aid of light detection and ranging |
US20200333462A1 (en) * | 2017-12-22 | 2020-10-22 | Sportlight Technology Ltd. | Object tracking |
US20200090357A1 (en) * | 2018-09-14 | 2020-03-19 | Lucas PAGÉ-CACCIA | Method and system for generating synthetic point cloud data using a generative model |
US20200191971A1 (en) * | 2018-12-17 | 2020-06-18 | National Chung-Shan Institute Of Science And Technology | Method and System for Vehicle Detection Using LIDAR |
US20210150230A1 (en) * | 2019-11-15 | 2021-05-20 | Nvidia Corporation | Multi-view deep neural network for lidar perception |
US11030763B1 (en) * | 2019-12-06 | 2021-06-08 | Mashgin Inc. | System and method for identifying items |
US20210183093A1 (en) * | 2019-12-11 | 2021-06-17 | Nvidia Corporation | Surface profile estimation and bump detection for autonomous machine applications |
Non-Patent Citations (6)
Title |
---|
[No Author Listed], Customizable Vision Series XG-X2000 Series, User's Manual. Keyence. Mar. 2018:7 pages. |
[No Author Listed], Matrox Imaging Library (MIL) Tools. Matrox Imaging. https://www.matrox.com/en/imaging/products/software/sdk/mil/tools/3D-imaging [last accessed Jun. 25, 2021]. 5 pages. |
International Search Report and Written Opinion dated Aug. 13, 2021 in connection with International Application No. PCT/US2021/031503. |
Mah et al., "3D laser imaging for surface roughness analysis," International Journal of Rock Mechanics and Mining Sciences 58 (2013): 111-117 (Year: 2013). *
PCT/US2021/031503, Aug. 13, 2021, International Search Report and Written Opinion. |
Petras et al., "Processing UAV and lidar point clouds in GRASS GIS," The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 41 (2016): 945 (Year: 2016). *
Also Published As
Publication number | Publication date |
---|---|
CN116830155A (en) | 2023-09-29 |
KR20230050268A (en) | 2023-04-14 |
DE112021002696T5 (en) | 2023-02-23 |
JP2023525534A (en) | 2023-06-16 |
US20210350615A1 (en) | 2021-11-11 |
WO2021231254A1 (en) | 2021-11-18 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
US10880541B2 (en) | Stereo correspondence and depth sensors | |
KR102171231B1 (en) | Mixing infrared and color component data point clouds | |
US10510148B2 (en) | Systems and methods for block based edgel detection with false edge elimination | |
US9135710B2 (en) | Depth map stereo correspondence techniques | |
JP2016502704A (en) | Image processing method and apparatus for removing depth artifacts | |
US9208547B2 (en) | Stereo correspondence smoothness tool | |
WO2019160032A1 (en) | Three-dimensional measuring system and three-dimensional measuring method | |
US20210350115A1 (en) | Methods and apparatus for identifying surface features in three-dimensional images | |
WO2019167453A1 (en) | Image processing device, image processing method, and program | |
US20220405878A1 (en) | Image processing apparatus, image processing method, and image processing program | |
WO2019070703A1 (en) | Method and device for up-sampling a point cloud | |
US11893744B2 (en) | Methods and apparatus for extracting profiles from three-dimensional images | |
US11816857B2 (en) | Methods and apparatus for generating point cloud histograms | |
CN113947630A (en) | Method and device for estimating volume of object and storage medium | |
CN112233139A (en) | System and method for detecting motion during 3D data reconstruction | |
CN113379826A (en) | Method and device for measuring volume of logistics piece | |
CN111295694A (en) | Method and apparatus for filling holes in a point cloud | |
US11763462B2 (en) | Mesh hole area detection | |
CN113939852A (en) | Object recognition device and object recognition method | |
US20230306684A1 (en) | Patch generation for dynamic mesh coding | |
JP2023016500A (en) | Image processing apparatus, image processing method, and program | |
JP2023104131A (en) | Information processing device, information processing method, and program | |
WO2023180843A1 (en) | Patch generation for dynamic mesh coding | |
Peyrot et al. | Stereo reconstruction of semiregular meshes, and multiresolution analysis for automatic detection of dents on surfaces | |
CN112308896A (en) | Image processing method, chip circuit, device, electronic apparatus, and storage medium |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| FEPP | Fee payment procedure | ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION |
| AS | Assignment | Owner name: COGNEX CORPORATION, MASSACHUSETTS. Assignment of assignors interest; assignors: ZHU, HONGWEI; BOGAN, NATHANIEL; signing dates from 2021-12-01 to 2022-06-03; reel/frame: 060367/0190. Owner name: COGNEX CORPORATION, MASSACHUSETTS. Assignment of assignors interest; assignor: MICHAEL, DAVID J.; effective date: 2022-03-15; reel/frame: 060367/0248 |
| STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| STPP | Information on status: patent application and granting procedure in general | PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
| STPP | Information on status: patent application and granting procedure in general | PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| STCF | Information on status: patent grant | PATENTED CASE |
| CC | Certificate of correction | |