GB2528669A - Image Analysis Method - Google Patents


Publication number
GB2528669A
Authority
GB
United Kingdom
Prior art keywords
points
nodes
point
region
point cloud
Prior art date
Legal status
Granted
Application number
GB1413245.0A
Other versions
GB201413245D0 (en)
GB2528669B (en)
Inventor
Minh-Tri Pham
Riccardo Gherardi
Frank Perbet
Bjorn Stenger
Sam Johnson
Oliver Woodford
Pablo Alcantarilla
Roberto Cipolla
Current Assignee
Toshiba Europe Ltd
Original Assignee
Toshiba Research Europe Ltd
Priority date
Filing date
Publication date
Application filed by Toshiba Research Europe Ltd
Priority to GB1413245.0A
Publication of GB201413245D0
Priority to US14/807,248
Priority to JP2015146739A
Publication of GB2528669A
Application granted
Publication of GB2528669B
Expired - Fee Related

Classifications

    • G06T7/60 Analysis of geometric attributes
    • G06T7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G06T17/005 Three dimensional [3D] modelling; Tree description, e.g. octree, quadtree
    • G06T5/00 Image enhancement or restoration
    • G06T7/35 Determination of transform parameters for the alignment of images using statistical methods
    • G06T7/40 Analysis of texture
    • G06V10/758 Image or video pattern matching involving statistics of pixels or of feature values, e.g. histogram matching
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Abstract

Analysing a point cloud (a) comprising a plurality of points C4-C8, each point representing a spatial point in an image, the analysis involving arranging the points into a hierarchical search tree (b), with a lowest level comprising a plurality of leaf nodes C4-C8 which each corresponds to a point of the point cloud. The search tree comprises a plurality of hierarchical levels with tree nodes C1-C8 in each of the levels, wherein at least one moment of a property of the descendant nodes is stored in each tree node. Geometric information of the points located within a region Q is determined by identifying the highest level tree nodes C3, C5 of the largest subtrees entirely contained within the region and performing statistical operations on the identified nodes. The statistical measures are determined from the moments of the properties stored within the identified tree nodes. The property may be the position, normal vector, colour, curvature, intensity or transparency at the point. The geometric information may be the number of points, or the mean or covariance of the property. The analysis may be used to produce feature descriptors for the region for object recognition or registration, or to filter and compress the point cloud.

Description

Image Analysis Method
FIELD
Embodiments of the present invention as described herein are generally concerned with image analysis methods and systems.
BACKGROUND
Many computer vision and image processing applications require the ability to calculate statistical measures of a group of points from a 3D point cloud. One example is the detection of features of a 3D point cloud.
Features provide a compact representation of the content of a scene represented by a 3D point cloud. They describe only local parts of the scene and, hence, offer robustness to clutter, occlusions, and intra-class variation. For such properties, features are a favourable choice for problems like object recognition and registration, scene understanding, camera calibration and pose estimation. When the data is large (i.e. captured from a large-scale reconstruction method or a laser scanner), feature-based approaches have an advantage in efficiency over point-based approaches because the number of features is much smaller than the number of points.
Applications of feature-based approaches for 3D computer vision are ubiquitous. These include, but are not limited to: object recognition and registration, mobile robots, navigation and mapping, augmented reality, automatic car driving, and scene understanding and modelling.
One of the goals of 3D object recognition is the real-time detection of objects in a scene.
This technology is one of the key technologies for building CAD models from 3D data.
Other application examples are in navigation, mobile robotics, augmented reality, and automatic manufacturing. Detection from a 3D point cloud obtained from the camera is a difficult task, since only a partial view of the scene is available. Humans recognize objects with little effort. However, this is still a challenge in computer vision, especially with limited time resources.
One important step in 3D object recognition is to find and extract features, i.e. salient regions in the scene which can be used to estimate the location and the orientation of an object. Feature extraction is also useful for other tasks, e.g. point cloud registration and object tracking.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a schematic of a computer vision system adapted to perform a method in accordance with an embodiment of the present invention;
Figure 2 is a representation of a feature;
Figure 3 is a flow diagram illustrating a method of feature detection in accordance with an embodiment;
Figure 4(a) is a diagram showing a query overlapping with tree nodes and leaf nodes, and figure 4(b) is a schematic of a search tree;
Figure 5 is a plot of the time taken to perform 10,000 radius searches against the radius of the search region;
Figure 6 is a plot of the time taken to perform 100,000 radius searches against the radius of the search region using a method in accordance with the present invention running on both a CPU and a GPU;
Figure 7 is a flow diagram illustrating a method of generating a feature descriptor in accordance with an embodiment;
Figure 8 is a flow diagram illustrating a method of extracting a feature and generating a feature descriptor in accordance with an embodiment;
Figure 9 is a flow diagram illustrating a method of point cloud filtering in accordance with an embodiment;
Figure 10 is a flow diagram illustrating a method of point cloud sub-sampling in accordance with an embodiment;
Figure 11 is a flow diagram illustrating density estimation of a point cloud in accordance with an embodiment;
Figure 12 is a flow diagram illustrating a method of estimating the normals from a point cloud in accordance with an embodiment; and
Figure 13 is a flow diagram illustrating a method of estimating the orientation of a surface of a point cloud in accordance with an embodiment.
DETAILED DESCRIPTION OF THE DRAWINGS
According to an embodiment, a method for analysing a point cloud is provided, the method comprising: receiving a point cloud comprising a plurality of points, each point representing a spatial point in an image; arranging the points into a hierarchical search tree, with a lowest level comprising a plurality of leaf nodes, where each leaf node corresponds to a point of the point cloud, the search tree comprising a plurality of hierarchical levels with tree nodes in each of the hierarchical levels, the nodes being vertically connected to each other through the hierarchy by branches, wherein at least one moment of a property of the descendant nodes is stored in each tree node; and determining geometric information of the points located within a region, by identifying the highest level tree nodes where all of the descendant leaf nodes are contained within the region and selecting the leaf nodes for the points where no sub-tree is entirely contained within the region, such that the points falling within the region are represented by the smallest number of nodes, and performing statistical operations on the nodes representing the points in the region, the statistical measures being determined from the moments of the properties stored within the identified tree nodes.
The above method represents a set of points as a set of tree nodes, where the tree is constructed from the 3D point cloud. The region may be a 3D ball region; in this example, the method finds the set of all points of the point cloud that lie inside a ball region and returns the set as a set of tree nodes. The set of points found by the method can be exact or approximate.
To calculate, for example, the mean of positions, it is necessary to compute the mean of all of the points which fall within the region. In the above method, if it is desired to calculate the mean of positions, then when the search tree is built, the mean of the positions of all descendant nodes of a tree node is stored in each tree node, the mean of the positions being the 1st order moment of the positions, where the positions are the property. When the geometric information is computed, it is, where possible, calculated using the values of the means stored in the tree nodes. Thus, the method does not calculate the mean for each of the points every time. The values from the leaf nodes themselves are only used when a tree node cannot be identified where all of its descendant nodes lie within the region. A group of points will be described by the highest level tree node from which the points descend. If some of the points that descend from a tree node are outside the region then that tree node cannot be used, and a lower level tree node is sought.
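As a minimal illustration (the class and field names here are hypothetical, not taken from the patent), a tree node might cache the zeroth order moment (the number of descendant points) and the first order moment (the sum of their positions), so the mean of any node's descendants is available without visiting the leaves:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class MomentNode:
    """Hypothetical tree node caching moments of its descendant points."""
    n: int                                   # 0th-order moment: point count
    m: np.ndarray                            # 1st-order moment: sum of positions
    children: list = field(default_factory=list)

    @property
    def mean(self) -> np.ndarray:
        """Mean of all descendant points, without touching the leaves."""
        return self.m / self.n
```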
The property may be at least one selected from position, normal vector, colour, curvature, intensity or transparency.
The geometric information may be one or more selected from: number of points; mean of positions; mean of colour; mean of normal vectors; mean of intensity; covariance of positions; covariance of normal vectors; covariance of colour; variance of curvature; and variance of intensity.
The moments may be selected from 0th order, 1st order, 2nd order, or any higher order moments.
In an embodiment, the method is configured to filter the point cloud by replacing the position of every point in the point cloud with the mean of its neighbouring points, the neighbouring points falling within a distance defined by the said region, wherein the geometric information is the mean of the neighbouring points.
In one embodiment, the method is used for filtering a point cloud; here, the zeroth order and first order moments of the position of the points are stored in the said nodes, the method being configured to filter the point cloud by replacing the position of every point in the point cloud with the mean of its neighbouring points, the neighbouring points falling within a distance defined by the said region, wherein the geometric information is the mean of the neighbouring points calculated from the stored zeroth order and first order moments.
In a further embodiment, the geometric information is the number of points within a region, the method further comprising estimating the density of the point cloud using the number of points in a region divided by the size of the region.
In a further embodiment, the method is used for determining the number of points within a region. Here, the zeroth order moments of the position of the points are stored in the said nodes and the geometric information is the number of points within a region, the method further comprising estimating the density of the point cloud using the number of points in a region from the stored zeroth order moments divided by the size of the region.
The method may also be used when determining the normal to the surface at a point or points. Here, the zeroth order, first order and second order moments of the position of the points may be stored in the said nodes and the geometric information is the normal vector of a selected point on the point cloud, the normal vector being determined by calculating the covariance matrix of the points within a region around the selected point from the moments stored within the nodes, the method further comprising determining the normal from the 3rd eigenvector of the covariance matrix.
The orientation of the surface at a point or a subset of points may be determined by storing the zeroth order, first order and second order moments of the position of the points in the said nodes, where the geometric information is the orientation of the surface of a point cloud at a selected point, the orientation being determined by calculating the covariance matrix of the points within a region around the selected point from the moments stored within the nodes and deriving the orientation from the three eigenvectors of the covariance matrix.
In an embodiment, the above method is used for feature detection. During feature detection, it is necessary to compute geometric information from a number of points in a region. Many types of feature detectors use the covariance of either the positions or the surface normals. The covariance can be computed from the 0th, 1st and 2nd order moments, and these moments can be stored in the tree nodes when the tree is constructed. These stored moments can be retrieved from the tree nodes during feature detection to avoid the need to calculate the moments using each of the descendant nodes when determining the covariances.
The ability to calculate the covariance matrix has many applications in determining feature locations and also feature descriptors. Here, the zeroth order, first order and second order moments of the position of the points may be stored in the said nodes and the geometric information is the location of a feature in the point cloud, the method further comprising calculating the covariance matrix from the moments stored within the nodes for the points in a region defined around a selected point and determining a score from an eigenvalue of said covariance matrix, wherein features are deemed to be located at selected points on the basis of their score.
In an embodiment, the covariance matrix has three eigenvalues and the lowest eigenvalue is assigned as the score. In a further embodiment, said region is a first ball having a first radius, and a selected point is analysed by constructing a first ball and a second ball having a second radius around said point, where the second radius is larger than the first radius, wherein each of the points in the point cloud within the second ball for a selected point is analysed by constructing first balls around these points and calculating the score for each point. Here, the feature location may be determined to be at the point with the largest score calculated for said first ball.
In the above, the descriptor may be derived as the first ball and the 3D orientation for the first ball determined for the point where the feature is located.
However, the covariance matrix may also be used to determine other descriptors known in the art, and these can be computed using the covariance from the above analysis method.
This can then be used for object recognition/registration, where the extracted features from an unknown scene are compared with a database of features from known, previously analysed objects.
Although the above techniques can be used to efficiently extract features and descriptors, there are also other uses. For example, point cloud subsampling, where the above filtered point cloud is sampled. This can be used for point cloud compression and visualisation.
In a further embodiment, a system configured to analyse a point cloud is provided, the system comprising: a point cloud receiving unit adapted to receive a point cloud comprising a plurality of points, each point representing a spatial point in an image; a processor adapted to arrange the points into a hierarchical search tree, with a lowest level comprising a plurality of leaf nodes, where each leaf node corresponds to a point of the point cloud, the search tree comprising a plurality of hierarchical levels with tree nodes in each of the hierarchical levels, the nodes being vertically connected to each other through the hierarchy by branches, wherein at least one moment of the property of the descendant nodes is stored in each tree node, the processor being further adapted to determine and output geometric information of the points located within a region, by identifying the highest level tree nodes where all of the descendant leaf nodes are contained within the region and selecting the leaf nodes for the points where no sub-tree is entirely contained within the region, such that the points falling within the region are represented by the smallest number of nodes, and performing statistical operations on the nodes representing the points in the region, the statistical measures being determined from the moments of the properties stored within the identified tree nodes.
In an embodiment the processor is a Graphics Processing Unit (GPU).
In a further embodiment, the method is implemented on a mobile device such as a mobile telephone, tablet etc. The method can be used to provide object recognition or registration for use with augmented reality applications, etc. The feature detection, description, filtering, point cloud subsampling, orientation determination embodiments and the other embodiments can all be implemented within a mobile device.
Since the embodiments of the present invention can be implemented by software, embodiments of the present invention encompass computer code provided to a general purpose computer, mobile processing device etc., which may, in an embodiment, comprise a CPU, on any suitable carrier medium. The carrier medium can comprise any storage medium such as a floppy disk, a CD ROM, a magnetic device or a programmable memory device, or any transient medium such as any signal, e.g. an electrical, optical or microwave signal.
Figure 1 shows a possible system which can be used to capture 3-D data in the form of a point cloud. The system basically comprises a camera 35, an analysis unit 21 and possibly a display (not shown).
In an embodiment, the camera 35 is a standard video camera and can be moved by a user. In operation, the camera 35 is freely moved around an object which is to be imaged. The camera may be simply handheld. However, in further embodiments, the camera is mounted on a tripod or other mechanical support device. A 3D point cloud may then be constructed using the 2D images collected at various camera poses. In other embodiments a 3D camera or other depth sensor may be used, for example a stereo camera comprising a plurality of spaced-apart apertures, a camera which is capable of projecting a pattern onto said object, LIDAR sensors and time-of-flight sensors. Medical scanners such as CAT scanners and MRI scanners may be used to provide the data. Methods for generating a 3D point cloud from these types of cameras and scanners are known and will not be discussed further here.
The analysis unit 21 comprises a section for receiving camera data from camera 35.
The analysis unit 21 comprises a processor 23 which executes a program 25. Analysis unit 21 further comprises storage 27. The storage 27 stores data which is used by program 25 to analyse the data received from the camera 35. The analysis unit 21 further comprises an input module 31 and an output module 33. The input module 31 is connected to camera 35. The input module 31 may simply receive data directly from the camera 35 or, alternatively, the input module 31 may receive camera data from an external storage medium or a network.
In use, the analysis unit 21 receives camera data through input module 31. The program executed on processor 23 analyses the camera data using data stored in the storage 27 to produce 3D data and recognise the objects and their poses. The data is output via the output module 33, which may be connected to a display (not shown) or other output device, either local or networked.
Figure 1 is purely schematic; the system could be provided within a mobile device such as a mobile telephone, tablet etc., where the camera is the inbuilt camera, for example the camera can be moved around to obtain 3D data. Alternatively, the data can be sent to the mobile device. In a further embodiment, the system can be provided in wearable technology such as watches and/or headwear, visors etc.
The analysis unit can be configured to handle a number of tasks. The task of detecting features in an image will be described with reference to figures 2 to 3. A feature is an area of interest of an object, usually where there is a change in the profile of the object. Detecting features is a crucial step in many 3D applications such as object recognition and registration, navigation and mapping, and scene understanding and modelling. Figure 2 shows a schematic of a corner of an object that has been identified as a feature.
Many feature detectors exist for 3D point clouds. Some of these detectors rely on extracting some statistics of points or surface normals located in vicinities of points observed in the scene.
In an embodiment, to identify suitable points for analysis, a query is provided as a ball region. To identify the points for analysis, a search tree structure is built to index the points. In the flow diagram of figure 3, the point cloud is received in step S101 from an apparatus of the type described with reference to figure 1.
In step S103, a search tree is built in order to index the points of the point cloud. Many different methodologies can be used for building the search tree, for example, octree or KD-tree. In some embodiments, a tree structure will be implemented where, at each node, the tree divides into two branches, selected so that there is the maximum distance between the values of the two groups produced at the node.
In this embodiment, in step S105, values are stored in the nodes of the search tree. The exact nature of the values will depend on the application and will be explained in more detail later. However, in each node, moments of a property associated with all points of the point cloud that descend from a tree node are stored within the tree node. For example, if the analysis to be performed requires the covariance of a property of the points, for example, the surface normal, colour etc., then when the search tree is built, the moments of that property calculated over all descendant nodes will be stored in the tree node. Each tree node will form the root node for its own sub-tree, where the sub-tree comprises all descendant points.
In step S107, a query ball region R is set. In one embodiment, the size of the ball will be fixed. In other embodiments, the size of the ball will vary. In this particular example, for simplicity, a single ball size will be assumed.
Next, in an embodiment, a branch-and-bound method is used to search the tree to find points located inside the ball. In this embodiment, the sub-trees which fall completely within the region R are identified, such that the highest level sub-trees are identified where all of the descendant leaf nodes are contained within the region.
Where it is not possible to identify a sub-tree because one is not entirely contained within the region, the leaf nodes representing the points will be selected. Thus, the points falling within the region are represented by the smallest number of nodes.
The determining of nodes in the region R is performed in step S109. This will be explained in more detail with reference to figure 4. In figure 4(b) a search tree is shown with a root node C1 at the highest level which splits into two branches. Each of the two branches terminates in a tree node, C2 and C3 respectively. Tree node C2 subdivides further and has descendant leaf nodes C4 and C5. Tree node C3 subdivides further to terminate in three leaf nodes C6, C7 and C8.
Figure 4(a) shows a schematic of the search query Q with the leaf nodes and tree nodes C1 to C8 marked. It can be seen that the entire tree from root node C1 is not entirely contained within search query Q, as point C4 lies outside Q. However, the sub-tree from tree node C3 is entirely contained within search query Q and, therefore, the tree node C3 can be used to represent the three descendant nodes C6, C7 and C8.
Leaf node C5 does not belong to a sub-tree that is entirely contained within the search query Q and hence this point can only be represented by the leaf node C5.
As explained above, the tree nodes store moments for their descendant leaf nodes.
The embodiment of figure 3 relates to a keypoint detector. The Harris method is a corner and edge based method; such methods are characterized by high-intensity changes in the horizontal and vertical directions. When adapted to 3D, image gradients are replaced by surface normals. A covariance matrix C(x) is computed from the surface normals of points located in a ball region centred at each point x of the point cloud, with a predefined radius r. The keypoint response is then defined by:

R(x) = det(C(x)) − k (trace(C(x)))², (1)

where k is a positive real-valued parameter. This parameter serves roughly as a lower bound for the ratio between the magnitude of the weaker edge and that of the stronger edge. In addition, there are two other variants of the Harris 3D keypoint detector. The Lowe method uses the following response:

R(x) = det(C(x)) / (trace(C(x)))². (2)

The Noble method uses the following response:

R(x) = det(C(x)) / trace(C(x)). (3)

In the Kanade-Lucas-Tomasi (KLT) detector, to adapt it to 3D, the covariance matrix is calculated directly from the input 3D positions instead of the surface normals. For the keypoint response, the first eigenvalue of the covariance matrix is used.
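As a minimal numpy sketch of these responses (assuming the standard determinant/trace forms given above; the default k = 0.04 is a common Harris choice and is an assumption, the text only requiring k to be positive):

```python
import numpy as np

def keypoint_responses(C, k=0.04):
    """Harris (1), Lowe (2) and Noble (3) responses from a 3x3 covariance
    matrix C of the surface normals inside a ball region."""
    det = np.linalg.det(C)
    tr = np.trace(C)
    harris = det - k * tr ** 2    # eq. (1)
    lowe = det / tr ** 2          # eq. (2)
    noble = det / tr              # eq. (3)
    return harris, lowe, noble
```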
The so-called SIFT keypoint detector uses a 3D version of the Hessian to select keypoints. A density function f(x) is approximated by sampling the data regularly in space. A scale space is built over the density function, and a search is made for local maxima of the Hessian determinant. The input point cloud is convolved with a number of Gaussian filters with increasing scales. The adjacent smoothed volumes are then subtracted to yield a few Difference-of-Gaussian (DoG) clouds. Keypoints are identified as local minima/maxima of the DoG clouds across scales.
Intrinsic Shape Signatures (ISS) relies on region-wise quality measurements. This method uses the magnitude of the smallest eigenvalue (to include only points located inside a query ball with large variations along each principal direction) and the ratio between two successive eigenvalues (to exclude points having similar spread along principal directions). Like the above methods, it requires an algorithm to search for points located inside a given ball region.
The above are examples of keypoint or feature detection methods. Each of them requires the calculation of the covariance matrix for the points that fall within the region R. In step S109, the nodes that fall within the region R were identified.
First, consider an algorithm that finds points located inside a given query ball region as a radius search, which takes as input a 3D point p and a radius r > 0, and returns as an output a list of points of the point cloud, X = {x1, ..., xn}, where n is the number of points returned.
In the method discussed in relation to step S109, each node of the underlying tree defines a subset of the point cloud, which is the set of all points of leaf nodes which are descendants of the given node.
Let c1, ..., cm represent the tree nodes (for some m < n) and denote by P(c) the set of points of leaf nodes of a node c. The subset X can be equivalently represented as a union P(ci1) ∪ ... ∪ P(cik) for some set of tree nodes C = {ci1, ..., cik}.
Since in practice the number of elements in C, i.e. k, is substantially smaller than the number of elements in X, i.e. n, if the radius search algorithm returns C instead of X and if the statistics based on C are computed instead of X, efficiency will be improved.
It is possible to compute the statistics based on C, but a further step is needed. For ease of explanation, it will be assumed that the goal is to compute the statistics of the points, i.e. X. Once this example is explained, the extension of the idea to other kinds of geometric information, for example surface normals, will be explained.
The number of points, the mean of points, and the covariance matrix of points of X will be denoted by n, u and C, respectively. Defining the following quantities,

m(X) = Σx∈X x, (4)

M(X) = Σx∈X x xᵀ, (5)

the mean and the covariance can be computed indirectly via the following formulae:

u = m(X)/n, (6)

C = M(X)/n − m(X)m(X)ᵀ/n². (7)

Hence, all that is left is to compute m(X) and M(X) from C. Since these two quantities are sums of some quantities of the points, and sum is an associative operator, they can be computed by the following formulae:

m(X) = Σj m(P(cij)), (8)

M(X) = Σj M(P(cij)). (9)

Therefore, if the quantities m(P(c)) and M(P(c)) are computed for every tree node c, we can compute the point statistics of X from C for every query by summing these quantities.
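A short numpy check of formulae (4)-(9) (a sketch on random data, not from the patent; np.cov with bias=True gives the population covariance that (7) defines):

```python
import numpy as np

X = np.random.rand(500, 3)                 # points inside some query ball
n = len(X)
mX = X.sum(axis=0)                         # m(X), eq. (4)
MX = X.T @ X                               # M(X), eq. (5)

u = mX / n                                 # mean, eq. (6)
C = MX / n - np.outer(mX, mX) / n ** 2     # covariance, eq. (7)
assert np.allclose(u, X.mean(axis=0))
assert np.allclose(C, np.cov(X, rowvar=False, bias=True))

# Because sums are associative, the moments of X equal the sums of the
# moments of any partition of X -- here an arbitrary 7-way split standing
# in for the sub-trees, eqs. (8) and (9).
parts = np.array_split(X, 7)
assert np.allclose(mX, sum(p.sum(axis=0) for p in parts))
assert np.allclose(MX, sum(p.T @ p for p in parts))
```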
The following can be implemented by the following algorithm:

Algorithm 1: Fast Statistics Extraction
1: Offline phase: for each tree node c, pre-compute a bounding ball B(c) and the quantities |P(c)|, m(P(c)) and M(P(c)).
2: Online phase: input is a query ball Q = (p, r).
3: Create an empty stack of tree nodes H and push the root node c0 to H.
4: Set n = 0, m = 0 and M = 0.
5: while H is not empty do
6:   Pop c from H.
7:   if B(c) ⊂ Q then
8:     Set n = n + |P(c)|.
9:     Set m = m + m(P(c)).
10:    Set M = M + M(P(c)).
11:  else if B(c) ∩ Q ≠ ∅ then
12:    Push every child node of c to H.
13:  end if
14: end while
15: return n as the number of points, m/n as the mean of points, and M/n − m mᵀ/n² as the covariance of points.

It should be noted that since each search query does not modify the tree structure, multiple queries can be served in parallel. This allows tasks to be further accelerated by off-loading them to a GPU.
The above example has used the statistics of the points to be stored in the tree nodes.
However, as noted above, any geometric property of the points could be used in the same way, with the required moments of this property stored in the nodes in order to reduce the computation as explained above. For example, the 0th order moments (i.e. the numbers of points) may be stored in the tree nodes, or the 1st order moments of any property, for example, to calculate the mean values. In a further embodiment, 2nd order moments can be stored in order to calculate the covariance etc. Higher order moments may also be calculated and pre-stored in the tree nodes.
The flow chart of figure 3 relates to feature detection, and in step S111 the statistics necessary for feature detection are calculated. The exact statistics used depend on the actual method used; possible examples have been given above. If, for example, the ISS method is used, the covariance matrix is constructed. This is achieved by pre-computing the second moment of the position of the points or the surface normals for the tree nodes as described above.
Once this has been done for one region, the method progresses to step S113, where it is determined whether or not the whole image has been analysed. If there are more regions, a new region R is set and the procedure is completed again from where the nodes to be evaluated are determined. In an embodiment, each region is defined as a ball centred at a point of the point cloud/image with a given radius. It is determined that the whole image has been analysed when every point and every radius has been analysed.
In some embodiments, the size of the region is fixed. In other embodiments, the radius of the region R is optimised. Here, the process from step S109 will be repeated for regions centred about the same point but with different radii. In an embodiment, this is achieved by computing a score, for example, the "cornerness" of points in a region, determined by the 3rd eigenvalue of the covariance matrix of points located in that region. In feature detection, regions are selected that score higher than all their neighbouring regions.
In practice, two ball regions are considered neighbours to each other if both of the following conditions are met:
- the ratio between the two radii is between 1/alpha and alpha, where alpha is often set to 1.5;
- the distance between the two ball centres is less than beta times the radius of the larger ball, where beta is often set to 0.8.
This applies both to the case in which the same radius is fixed for every ball region and to the case in which the radius is varied for each ball region. In both cases, a ball region is only retained if its score is higher than the scores of all of its neighbouring ball regions.
Note that in the case that the radius is varied, in an embodiment the radius per point is not optimized. Instead, all ball regions at different radii and different points are computed. Then the ball regions with the highest scores compared to their neighbours are retained.
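A minimal sketch of the neighbour test described above (the function name is illustrative; the defaults are the alpha and beta values quoted):

```python
import numpy as np

def are_neighbours(c1, r1, c2, r2, alpha=1.5, beta=0.8):
    """Return True if two ball regions (centre, radius) are neighbours
    under the two conditions above."""
    ratio_ok = 1.0 / alpha <= r1 / r2 <= alpha
    dist_ok = np.linalg.norm(np.asarray(c1) - np.asarray(c2)) < beta * max(r1, r2)
    return ratio_ok and dist_ok
```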
The above method was tested both on a CPU and on a GPU using the OpenGL Shading Language (GLSL). Ten input point clouds for testing were taken from the Toshiba Object Recognition dataset, each representing a different object. The number of points per point cloud ranged from 80,000 to 150,000.
Query balls were generated at random with centres being uniformly sampled from the point cloud's points and radii varying from 0 to 25 cm. The above method requires a low processing time. Illustrated in figure 5 is the plot of the average processing time over the query ball's radius, obtained from the above method. Interestingly, as the radius increases, the processing time of the above method starts to decrease, because the number of tree nodes visited actually decreases.
At the extremum, when the radius is so large that every point is included in a query ball, the above method visits only a single tree node, i.e. the root node.
Comparing the CPU version with the GPU version of the above method, illustrated in figure 6, it is shown that the GPU version improves the speed by a factor of 8-13.
It can be seen from the above example that it provides a method for fast extraction of statistics for feature detection. The method improves the efficiency of feature detection, which is very useful for large-scale data.
In the above method, the search tree is built and points within a given ball region are located. However, in the above method, for each tree node, all points located at the descendant leaf nodes of the tree node are additionally identified. Next, a bounding ball and a set of moments from these points are pre-computed for each tree node. When extracting geometric information from a given query ball region, the bounding balls are used to quickly identify a minimal set of tree nodes such that their points are the same set of points located inside the query ball region. Geometric information is then extracted indirectly from the moments of the identified tree nodes, rather than from the set of points inside the query ball region.
The speed of processing of the above method is not dependent on the number of points found per query, but on the number of tree nodes found. However, the number of tree nodes is in practice much smaller than the number of points, leading to a substantial gain in efficiency.
The larger the point cloud, the more gain in speed is seen with the above method. For example, when a query ball region is so large that it covers every point of the point cloud, the above method returns a single tree node as opposed to every point in the query ball region. The gain in this case can be N times, where N is the number of points of the point cloud.
The above description has related to feature detection, but the method can also be applied to the construction of feature descriptors.
Figure 7 is a flow diagram illustrating a method of generating a feature descriptor and/or extracting a feature in accordance with an embodiment. To avoid any unnecessary repetition, like reference numerals will be used to denote like features.
As for the flow diagram explained with reference to figure 3, a point cloud is obtained in step S101. In step S103, a search tree is built in order to index the points of the point cloud.
In step S205, the moments of a property are stored in each node, where the moment is calculated based on all nodes descending from the node where the moment of the property is stored. The process is explained in more detail with reference to step S105 of figure 3. In this embodiment, for determining a feature descriptor, the zeroth order, first order and second order moments of the positions are stored in the nodes.
In step S207, a region R of the image is set to be analysed. When constructing a descriptor, this may be a set size around a point x.
In step S209 the nodes to be evaluated are determined. This process is the same as that described with reference to step S109 of figure 3.
In an embodiment, the descriptor may be determined by calculating the covariance matrix in step S211 using the moments stored in the nodes as explained with reference to step S205. The descriptor is determined in step S213.
In step S215, a check is performed to see if the analysis has been completed. If it has not been completed, then a new region is selected in step S217 and the nodes within this region are evaluated in step S209 to repeat the process.
For example, if the position of the features has already been extracted as points x, then the process can go through the points x in sequence.
Figure 8 is a flow diagram of a method for both feature extraction and detection. To avoid any unnecessary repetition, like reference numerals will be used to denote like features.
As for the flow diagram explained with reference to figure 3, a point cloud is obtained in step S101. In step S103, a search tree is built in order to index the points of the point cloud.
In step S255, the moments of a property are stored in each node, where the moment is calculated based on all nodes descending from the node where the moment of the property is stored. The process is explained in more detail with reference to step S105 of figure 3. In this embodiment, for feature extraction and generation of descriptors, the zeroth order moment, first order moments and second order moments of the positions are stored.
In step S257, a point x is selected from the point cloud. Eventually, the method will be performed for all points in the point cloud.
In step S259, for a point x, a first ball B1(x) centred at x with a fixed radius r1 and a second ball B2(x), also centred at x but with a fixed radius r2, are constructed. In step S261, for each ball B1(x), a 3D orientation R(x) and a score f(x) are computed.
In an embodiment, this feature extraction method is used to recognize objects of relatively the same size, i.e. their appearance can be bounded by a 3D ball of a known radius r. Here, r1 is determined to be gamma times r, where gamma is often set to 0.3, and r2 is determined to be beta times r, where beta is often set to 0.8. The values for gamma and beta can be varied as necessary.
The 3D orientation R(x) and the score f(x) for a given point x are computed by computing the number of points n(x) located inside B1(x), and the mean m(x) and the covariance matrix C(x) of the positions of all points located inside B1(x). The number of points can be determined from the stored zeroth order moments.
The three eigenvalues v1(x), v2(x) and v3(x) of the covariance matrix C(x) are calculated and sorted into descending order, together with their associated eigenvectors e1(x), e2(x) and e3(x). For each eigenvector ei(x), where i = 1..3, its direction is flipped by setting ei(x) = -ei(x) if the dot product of ei(x) and m(x) - x is negative. The three eigenvectors are then assigned as the 3D orientation: R(x) = (e1(x), e2(x), e3(x)).
Next, the third eigenvalue v3(x) is assigned as the score f(x). Intuitively, the score f(x) represents how thick the surface of points in B1(x) is. Features detected by the method often correspond to corners, edges, and intersected regions of two or more objects.
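Assuming C(x) and m(x) come from the radius search described earlier, the orientation and score computation is, as a sketch (np.linalg.eigh returns eigenvalues in ascending order, so they are reversed here):

```python
import numpy as np

def orientation_and_score(C, m, x):
    """3D orientation R(x) = (e1, e2, e3) and score f(x) = v3 for a point x,
    from the covariance C and mean m of the points inside B1(x)."""
    w, V = np.linalg.eigh(C)            # ascending eigenvalues
    w, V = w[::-1], V[:, ::-1]          # v1 >= v2 >= v3, columns e1, e2, e3
    for i in range(3):                  # flip ei if it points away from m - x
        if np.dot(V[:, i], m - x) < 0:
            V[:, i] = -V[:, i]
    return V, w[2]                      # R(x) and f(x)
```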
In step S263, the point x is selected whose score f(x) is greater than the scores of all neighbouring points located inside B2(x). In step S265, it is checked whether the selected feature is robust. In an embodiment, features for which any of the following conditions hold are removed (see the sketch after this list):
* n(x) < θ1 (default value for θ1 is 5): the number of points in B1(x) is too few.
* v2(x)/v1(x) > θ2 (default value for θ2 is 0.95): v2(x) is too close to v1(x), making the computation of e1(x) and e2(x) unstable.
* v3(x)/v2(x) > θ3 (default value for θ3 is 0.95): v3(x) is too close to v2(x), making the computation of e2(x) and e3(x) unstable.
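A sketch of this robustness test (the function name is illustrative; the thresholds default to the values quoted above, with v1 ≥ v2 ≥ v3 the sorted eigenvalues):

```python
def is_robust(n, v1, v2, v3, theta1=5, theta2=0.95, theta3=0.95):
    """Return False if any of the three removal conditions above holds."""
    if n < theta1:              # too few points in B1(x)
        return False
    if v2 / v1 > theta2:        # v2 too close to v1: e1 and e2 unstable
        return False
    if v3 / v2 > theta3:        # v3 too close to v2: e2 and e3 unstable
        return False
    return True
```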
In step S267, a check will be performed to see if all points in the point cloud have been analysed. If not, then a new point x is selected in step S273 and the process loops back to step S259.
The method of the above embodiment requires a very low processing time. It currently runs at 15 frames per second, suitable for real-time applications. Another advantage is that it does not require colour information. Hence, it is suitable for situations where colour is not a distinctive feature, e.g. for texture-less objects or objects captured in poor light conditions.
Figure 9 is a flow diagram illustrating a method of point cloud filtering in accordance with an embodiment. To avoid any unnecessary repetition, like reference numerals will be used to denote like features.
As for the flow diagram explained with reference to figure 3, a point cloud is obtained in step S101. In step S103, a search tree is built in order to index the points of the point cloud.
In step S305, the moments of a property are stored in each node, where the moment is calculated based on all nodes descending from the node where the moment of the property is stored. The process is explained in more detail with reference to step S105 of figure 3. In this embodiment, for filtering, the first order moment is stored in the nodes such that the mean of the position of the descendant nodes is stored.
In this filtering embodiment, each point is selected and the position of the point is replaced with the mean of the region around the point. In step S307 the point to be filtered is selected. Each point in the point cloud to be analysed will be filtered during the process. Therefore, in this embodiment, the points to be filtered are selected in sequence.
A region around the point is set in step S309, which is a ball centred on the point. In this embodiment, the ball will have a fixed radius. The radius will be set dependent on the level of filtering required. The radius will be fixed for the whole point cloud that is to be imaged. In an embodiment, the radius is determined as follows. First, the shortest non-zero distance between any two points of the point cloud is established by exhaustively going through every pair of points. Let this shortest distance be d0. Then, the radius is set to be lambda times d0, where lambda is often set to 5.
In step S311 the nodes to be evaluated are determined. This process is the same as that described with reference to step S109 of figure 3. The mean is calculated in step S313 using the first order moments stored in the nodes as explained with reference to step S305. The position value at each point is replaced with the calculated mean in step S315. In step S317, a check is made to see if the process has been completed for all points. If not, a new point is selected in step S319 and a region is constructed around that point in step S309. The process then repeats until it has been completed for all points.
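A sketch of this filter, reusing build_tree and fast_statistics from the earlier sketch (the tree is built once and queried per point; since each point is its own neighbour, the returned mean is always defined):

```python
import numpy as np

def filter_point_cloud(points, radius):
    """Replace every point with the mean of its neighbours inside a ball
    of the given radius."""
    root = build_tree(points)
    filtered = np.empty_like(points)
    for i, p in enumerate(points):
        _, mean, _ = fast_statistics(root, p, radius)
        filtered[i] = mean          # n >= 1: the point itself is inside the ball
    return filtered
```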
In the above example, the property is the position of the points. However, other properties could be treated in the same way. For example, the colour of the points could be filtered as described above, with the first order moment of the colour being stored as the property. It should be noted that the moments of multiple properties may be stored in the nodes so that the means, or other statistical measures, of multiple properties may be calculated.
Other methods of filtering may also be used. For example, the mean may be calculated as described with reference to figure 9 up to step S313. However, then at step S315, the difference between the property at the point and the mean may be calculated. If the difference is greater than a pre-determined value, the point may be discarded or replaced by the mean value.
Figure 10 is a flow diagram illustrating a method of point cloud sub-sampling. The method is based on the filtering method described with reference to figure 9. To avoid any unnecessary repetition, like reference numerals will be used to denote like features.
After the filtered point cloud has been generated in step S321, points are randomly sampled from the filtered point cloud to produce a sub-sampled point cloud.
The above method described with reference to figure 10 is also a method of point cloud compression, as the subsampled point cloud has a lower number of points than the point cloud received in step S101.
Although the above examples have referred to random sampling of the points, other sampling methods could be used. Also, the method of filtering described with reference to figure 9 could be modified by, for example, only replacing the value of the point with the mean if the point is over a predetermined distance from the mean, or points that are more than a predetermined distance from the mean can be discarded.
Figure 11 is a flow diagram illustrating density estimation of a point cloud. To avoid any unnecessary repetition, like reference numerals will be used to denote like features.
As for the flow diagram explained with reference to figure 3, a point cloud is obtained in step S101. In step S103, a search tree is built in order to index the points of the point cloud.
In step S405, the moments of a property are stored in each node, where the moment is calculated based on all nodes descending from the node where the moment of the property is stored. The process is explained in more detail with reference to step S105 of figure 3. In this embodiment, for density estimation, the zeroth order moment is stored in the nodes; thus, the number of descendant nodes is stored in each node.
In step S407, the region for which the density is to be estimated is selected. In step S409, the nodes to be evaluated are determined in the same manner as described with reference to step S109 of figure 3. The total number of points in the region is then calculated using the zeroth order moments stored in the nodes in step S411. To estimate the density, the number of points is then divided by the volume of the region in step S413, and the density is output in step S415.
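For a ball region, this amounts to dividing the stored count by the ball's volume; as a sketch reusing fast_statistics from the earlier sketch:

```python
import numpy as np

def estimate_density(root, p, r):
    """Density at p: the point count inside the ball (p, r), taken from the
    stored zeroth order moments, divided by the ball's volume."""
    n, _, _ = fast_statistics(root, p, r)
    return n / (4.0 / 3.0 * np.pi * r ** 3)
```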
Figure 12 is a flow diagram illustrating a method of estimating the normals from a point cloud. To avoid any unnecessary repetition, like reference numerals will be used to denote like features.
As for the flow diagram explained with reference to figure 3, a point cloud is obtained in step S101. In step S103, a search tree is built in order to index the points of the point cloud. In step S505, the moments of a property are stored in each node, where the moment is calculated based on all nodes descending from the node where the moment of the property is stored. The process is explained in more detail with reference to step S105 of figure 3. In this embodiment, for normal estimation, the zeroth, first and second order moments of the point positions of the descendant nodes are stored in each tree node.
In step S507, the point for which the normal is to be calculated is selected. In step S509, a region is constructed around the point. In an embodiment this is a ball region with a fixed radius. In an embodiment, the radius is determined as follows. First, the shortest non-zero distance between any two points of the point cloud is established by exhaustively going through every pair of points. Let this shortest distance be d0. Then, the radius is set to be lambda times d0, where lambda is often set to 5.
In step S511, the nodes to be evaluated are determined in the same manner as described with reference to step S109 of figure 3. In step S513, the covariance matrix of all points within the region is computed using the stored zeroth, first and second order moments.
In step S515, the normal is then estimated as the third eigenvector of the covariance matrix.
In step S517, a check is performed to see if the process has been performed for all points. If not, and there are more points for which an estimate of the normal is required, a new point is selected in step S519 and the process loops back to step S509 where a new region is constructed. In step S521, the estimated normals are outputted.
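As a sketch of the per-point normal estimate, again reusing fast_statistics from the earlier sketch (the third eigenvector in descending order is the eigenvector with the smallest eigenvalue):

```python
import numpy as np

def estimate_normal(root, p, r):
    """Normal at p: the eigenvector of the local covariance matrix with
    the smallest eigenvalue."""
    _, _, C = fast_statistics(root, p, r)
    w, V = np.linalg.eigh(C)    # ascending: column 0 has the smallest value
    return V[:, 0]
```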
In a further embodiment, a mean of the normal vectors is computed by storing the zeroth and first order moments of the normals in the tree nodes.
Figure 13 is a flow diagram illustrating a method of estimating the orientation of a surface of a point cloud. To avoid any unnecessary repetition, like reference numerals will be used to denote like features.
As for the flow diagram explained with reference to figure 3, a point cloud is obtained in step S101. In step S103, a search tree is built in order to index the points of the point cloud.
In step S605, the moments of a property are stored in each node, where the moment is calculated based on all nodes descending from the node where the moment of the property is stored. The process is explained in more detail with reference to step S105 of figure 3. In this embodiment, for orientation estimation, the zeroth, first and second order moments of the point positions of the descendant nodes are stored in each tree node.
In step S607, the point for which the orientation is to be calculated is selected. In step S609, a region is constructed around the point. In an embodiment this is a ball region with a fixed radius. In an embodiment, when the orientation at each point is required, the radius may be determined as follows. First, the shortest non-zero distance between any two points of the point cloud is established by exhaustively going through every pair of points. Let this shortest distance be d0. Then, the radius is set to be lambda times d0, where lambda is often set to 5.
When it is desired to determine the orientation of a feature, the radius can be determined from the ball regions determined in the feature detection method of the type described, for example, with reference to figure 8.
In step S611, the nodes to be evaluated are determined in the same manner as described with reference to step S109 of figure 3. In step S613, the covariance matrix of all points within the region is computed using the stored zeroth, first and second order moments.
In step S615, the orientation is then estimated as the three eigenvectors of the covariance matrix.
In step S617, a check is performed to see if the process has been performed for all points. If not, and there are more points for which an estimate of the orientation is required, a new point is selected in step S619 and the process loops back to step S609 where a new region is constructed. In step S621, the estimated orientations are outputted.
In an embodiment, any of the above methods taught in figures 3 to 13 can be implemented in a mobile device, for example, a mobile telephone, tablet, wearable technology etc. Such methods can form part of other applications, for example, augmented reality, gesture recognition, object recognition, object registration, pose estimation of objects or the mobile device, navigation and/or games.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms of modifications as would fall within the scope and spirit of the inventions.

Claims (20)

  1. CLAIMS: I. A method for analysing a point cloud, the method comprising: receiving a point cloud, comprising a plurality of points, each point representing a spatial point in an image; arranging the points into a hierarchical search tree, with a lowest level comprising a plurality of leaf nodes, where each leaf node con-esponds to a point of the point cloud, the search tree comprising a plurality of hierarchical levels with tree nodes in each of the hierarchical levels, the nodes being vertically connected to each other though the hierarchy by branches, wherein at least one moment of the property of the descendant nodes is stored in each tree node; and determining geometric infommtion of the points located within a region, by identif'ing the highest level tree nodes where all of the descendent leaf nodes are contained within the region and selecting the leaf nodes for the points where no sub-tree is entirely contained within the region, such that such that the points falling within the region are represented by the smallest number of nodes and performing statistical operations on the nodes representing the points in the region, the statistical measures being detennined from the moments of the properties stored within the identified tree nodes.
  2. 2. A method according to claim I, wherein the property is at least one selected from position, normal vector, colour, curvature, intensity or transparency.
  3. 3. A niethod according to claim 1, wherein the geometric information is at least one selected from: number of points; mean of positions; mean of colour; mean of normal vectors; mean of intensity; eovariance of positions; covariance of normal vectors; covariance of colour; variance of curvature; and variance of intensity.
  4. 4. A method according to claim I, wherein the moments are selected from 0th order, l order, 2mfh order, or any higher order moments.
  5. 5. A method according to claim 1, wherein the zeroth order and first order moments of the position of the points are stored in the said nodes, the method being configured to filter the point cloud by replacing the position of every point in the point cloud with the mean of its neighbouring points, the neighbouring points falling within a distance defined by the said region, wherein the geometric information is the mean of the neighbouring points calculated from the stored zeroth order and first order moments.
6. A method according to claim 1, wherein the zeroth order moments of the position of the points are stored in the said nodes and the geometric information is the number of points within a region, the method further comprising estimating the density of the point cloud using the number of points in a region from the stored zeroth order moments divided by the size of the region. (See the density sketch after the claims.)
7. A method according to claim 1, wherein the zeroth order, first order and second order moments of the position of the points are stored in the said nodes and the geometric information is the normal vector of a selected point on the point cloud, the normal vector being determined by calculating the covariance matrix of the points within a region around the selected point from the moments stored within the nodes and the method further comprising determining the normal from the 3rd eigenvector of the covariance matrix. (See the normal-estimation sketch after the claims.)
8. A method according to claim 1, wherein the zeroth order, first order and second order moments of the position of the points are stored in the said nodes and the geometric information is the orientation of the surface of a point cloud at a selected point, the orientation being determined by calculating the covariance matrix of the points within a region around the selected point from the moments stored within the nodes and deriving the orientation from the 3 eigenvectors of the covariance matrix.
9. A method according to claim 1, wherein the zeroth order, first order and second order moments of the position of the points are stored in the said nodes and the geometric information is the location of a feature in the point cloud, the method further comprising calculating the covariance matrix from the moments stored within the nodes for the points in a region defined around a selected point and determining a score from an eigenvalue of said covariance matrix, wherein features are deemed to be located at selected points on the basis of their score. (See the feature-scoring sketch after the claims.)
10. A method according to claim 9, wherein the covariance matrix has three eigenvalues and the lowest eigenvalue is assigned as the score.
11. A method according to claim 9, wherein said region is a first ball having a first radius, and a selected point is analysed by constructing a first ball and a second ball having a second radius around said point, where the second radius is larger than the first radius, wherein each of the points in the point cloud within the second ball for a selected point is analysed by constructing first balls around these points and calculating the score for each point.
12. A method according to claim 11, wherein the feature location is determined to be at the point with the largest score calculated for said first ball.
13. A method according to claim 9, wherein a descriptor is derived for an identified feature, said descriptor being the first ball and the 3D orientation for the first ball determined for the point where the feature is located.
14. A method according to claim 1, wherein the zeroth order, first order and second order moments of the position of the points are stored in the said nodes and the geometric information is the descriptor of a feature in the point cloud, the method further comprising calculating the covariance matrix from the moments stored within the nodes for the points in a region defined around a selected point and determining a descriptor from said covariance matrix.
15. A method according to claim 1, configured for analysing the point map to produce feature descriptors for the region, the feature descriptors comprising the said geometric information.
16. A method according to claim 5, further comprising sampling the filtered point cloud.
17. A method of object recognition and/or registration, the method comprising analysing a point cloud according to the method of claim 14, the method further comprising comparing the feature descriptors with a database of feature descriptors for a plurality of objects. (See the matching sketch after the claims.)
18. A method of compressing a point cloud, the method comprising sampling a filtered point cloud, wherein said filtered point cloud is determined according to the method of claim 5.
19. A system configured to analyse a point cloud, the system comprising: a point cloud receiving unit adapted to receive a point cloud comprising a plurality of points, each point representing a spatial point in an image; a processor adapted to arrange the points into a hierarchical search tree, with a lowest level comprising a plurality of leaf nodes, where each leaf node corresponds to a point of the point cloud, the search tree comprising a plurality of hierarchical levels with tree nodes in each of the hierarchical levels, the nodes being vertically connected to each other through the hierarchy by branches, wherein at least one moment of the property of the descendant nodes is stored in each tree node, the processor being further adapted to determine and output geometric information of the points located within a region, by identifying the highest level tree nodes where all of the descendant leaf nodes are contained within the region and selecting the leaf nodes for the points where no sub-tree is entirely contained within the region, such that the points falling within the region are represented by the smallest number of nodes, and performing statistical operations on the nodes representing the points in the region, the statistical measures being determined from the moments of the properties stored within the identified tree nodes. (See the first sketch after the claims.)
20. A carrier medium carrying computer readable instructions for controlling the computer to perform the method of claim 1.
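By way of illustration only, the node selection of claims 1 and 19 can be expressed in code. The following is a minimal sketch in Python/NumPy, assuming a median-split binary tree over 3D points, a spherical region, and only the zeroth and first order moments of position; the names Node and query_ball are invented for this example and do not appear in the patent.

```python
import numpy as np

class Node:
    """One tree node caching the moments of all its descendant points."""
    def __init__(self, pts):
        self.m0 = len(pts)            # zeroth order moment: point count
        self.m1 = pts.sum(axis=0)     # first order moment: sum of positions
        self.lo = pts.min(axis=0)     # bounding box of this sub-tree
        self.hi = pts.max(axis=0)
        if len(pts) <= 1:
            self.children = []        # leaf: a single point of the cloud
        else:
            axis = int(np.argmax(self.hi - self.lo))
            order = pts[:, axis].argsort()
            half = len(pts) // 2
            self.children = [Node(pts[order[:half]]),
                             Node(pts[order[half:]])]

def query_ball(node, centre, radius, out):
    """Cover the points inside the ball with the fewest nodes: keep a
    node whole when its bounding box lies entirely inside the ball,
    otherwise recurse towards the leaves."""
    far = np.where(np.abs(node.lo - centre) > np.abs(node.hi - centre),
                   node.lo, node.hi)  # box corner farthest from the centre
    if np.linalg.norm(far - centre) <= radius:
        out.append(node)              # whole sub-tree inside the region
        return out
    near = np.clip(centre, node.lo, node.hi)
    if np.linalg.norm(near - centre) <= radius:  # box intersects the ball
        for child in node.children:
            query_ball(child, centre, radius, out)
    return out

# usage: region statistics read off the cached moments
pts = np.random.rand(1000, 3)
nodes = query_ball(Node(pts), np.full(3, 0.5), 0.2, [])
count = sum(n.m0 for n in nodes)          # number of points in the region
mean = sum(n.m1 for n in nodes) / count   # mean position of those points
```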
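The filtering of claim 5 then amounts to replacing each point by the summed first order moments divided by the summed zeroth order moments of the nodes covering its neighbourhood. A brute-force reference sketch, with an explicit neighbour scan standing in for the tree query:

```python
import numpy as np

def filter_point_cloud(points, radius):
    """Replace every point by the mean of its neighbours within `radius`
    (claim 5); in moment terms the mean is m1 / m0 over the covering
    nodes."""
    filtered = np.empty_like(points)
    for i, p in enumerate(points):
        nbrs = points[np.linalg.norm(points - p, axis=1) <= radius]
        filtered[i] = nbrs.mean(axis=0)
    return filtered
```

Sampling the filtered cloud, as in claims 16 and 18, can then be any subsampling of the returned array.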
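The density estimate of claim 6, sketched under the assumption that the region is a ball, so that its size is (4/3)πr³:

```python
import numpy as np

def estimate_density(points, centre, radius):
    """Claim 6: number of points in the region (the zeroth order moments
    summed over the covering nodes) divided by the region's size."""
    n = np.count_nonzero(np.linalg.norm(points - centre, axis=1) <= radius)
    return n / (4.0 / 3.0 * np.pi * radius ** 3)
```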
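For claims 7 and 8, the covariance matrix follows from the stored moments as C = M2/M0 − (M1/M0)(M1/M0)ᵀ; the normal is the eigenvector with the smallest eigenvalue (the 3rd eigenvector when sorted in decreasing order), and the three eigenvectors together give the orientation. A direct sketch, again computing the covariance from an explicit neighbour scan:

```python
import numpy as np

def normal_and_orientation(points, centre, radius):
    """Claim 7: the normal is the eigenvector with the smallest
    eigenvalue; claim 8: all three eigenvectors form the orientation."""
    nbrs = points[np.linalg.norm(points - centre, axis=1) <= radius]
    mean = nbrs.mean(axis=0)
    cov = (nbrs - mean).T @ (nbrs - mean) / len(nbrs)
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
    normal = eigvecs[:, 0]                  # smallest-eigenvalue direction
    orientation = eigvecs                   # full three-eigenvector frame
    return normal, orientation
```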
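For claims 9 to 12, a sketch of the two-ball feature detector; it assumes the second ball contains at least one point, and brute-force neighbour scans again stand in for moment queries:

```python
import numpy as np

def score(points, centre, r1):
    """Claims 9 and 10: the score at a point is the lowest eigenvalue of
    the covariance matrix of the neighbours within the first ball."""
    nbrs = points[np.linalg.norm(points - centre, axis=1) <= r1]
    mean = nbrs.mean(axis=0)
    cov = (nbrs - mean).T @ (nbrs - mean) / len(nbrs)
    return np.linalg.eigvalsh(cov)[0]  # lowest of the three eigenvalues

def locate_feature(points, seed, r1, r2):
    """Claims 11 and 12: score every point inside the second, larger
    ball around the seed, each with its own first ball; the feature is
    placed at the point with the largest score."""
    cands = points[np.linalg.norm(points - seed, axis=1) <= r2]
    scores = [score(points, c, r1) for c in cands]
    return cands[int(np.argmax(scores))]
```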
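Claim 17 only requires comparing feature descriptors against a database; nearest-neighbour matching under the Euclidean distance is one assumed way of doing so, sketched below.

```python
import numpy as np

def match_descriptors(query, database, labels):
    """Assign each query descriptor the label of its nearest database
    descriptor under the Euclidean distance (an assumed comparison
    strategy; the claim does not fix one)."""
    d = np.linalg.norm(query[:, None, :] - database[None, :, :], axis=2)
    return [labels[j] for j in d.argmin(axis=1)]
```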
GB1413245.0A 2014-07-25 2014-07-25 Image Analysis Method Expired - Fee Related GB2528669B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB1413245.0A GB2528669B (en) 2014-07-25 2014-07-25 Image Analysis Method
US14/807,248 US9767604B2 (en) 2014-07-25 2015-07-23 Image analysis method by analyzing point cloud using hierarchical search tree
JP2015146739A JP6091560B2 (en) 2014-07-25 2015-07-24 Image analysis method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1413245.0A GB2528669B (en) 2014-07-25 2014-07-25 Image Analysis Method

Publications (3)

Publication Number Publication Date
GB201413245D0 GB201413245D0 (en) 2014-09-10
GB2528669A true GB2528669A (en) 2016-02-03
GB2528669B GB2528669B (en) 2017-05-24

Family

ID=51587264

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1413245.0A Expired - Fee Related GB2528669B (en) 2014-07-25 2014-07-25 Image Analysis Method

Country Status (3)

Country Link
US (1) US9767604B2 (en)
JP (1) JP6091560B2 (en)
GB (1) GB2528669B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022188582A1 (en) * 2021-03-12 2022-09-15 腾讯科技(深圳)有限公司 Method and apparatus for selecting neighbor point in point cloud, and codec

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10210430B2 (en) 2016-01-26 2019-02-19 Fabula Ai Limited System and a method for learning features on geometric domains
US10013653B2 (en) * 2016-01-26 2018-07-03 Università della Svizzera italiana System and a method for learning features on geometric domains
CN106296650B (en) * 2016-07-22 2019-05-24 武汉海达数云技术有限公司 A kind of laser point cloud method for registering and device
CN108510439B (en) * 2017-02-28 2019-08-16 贝壳找房(北京)科技有限公司 Joining method, device and the terminal of point cloud data
CN108133226B (en) * 2017-11-27 2021-07-13 西北工业大学 Three-dimensional point cloud feature extraction method based on HARRIS improvement
US10826786B2 (en) * 2018-04-11 2020-11-03 Nvidia Corporation Fast multi-scale point cloud registration with a hierarchical gaussian mixture
CN108765475B (en) * 2018-05-25 2021-11-09 厦门大学 Building three-dimensional point cloud registration method based on deep learning
WO2020086824A1 (en) * 2018-10-24 2020-04-30 University Of Notre Dame Du Lac Method of textured contact lens detection
CN109712229A (en) * 2018-11-26 2019-05-03 漳州通正勘测设计院有限公司 A kind of isolated tree wooden frame point extracting method, device, equipment and storage medium
CN111435551B (en) * 2019-01-15 2023-01-13 华为技术有限公司 Point cloud filtering method and device and storage medium
CN110111378B (en) * 2019-04-04 2021-07-02 贝壳技术有限公司 Point cloud registration optimization method and device based on indoor three-dimensional data
CN110458772B (en) * 2019-07-30 2022-11-15 五邑大学 Point cloud filtering method and device based on image processing and storage medium
CN110930382A (en) * 2019-11-19 2020-03-27 广东博智林机器人有限公司 Point cloud splicing precision evaluation method and system based on calibration plate feature point extraction
CN111582391B (en) * 2020-05-11 2022-06-07 浙江大学 Three-dimensional point cloud outlier detection method and device based on modular design
CN111598915B (en) * 2020-05-19 2023-06-30 北京数字绿土科技股份有限公司 Point cloud single wood segmentation method, device, equipment and computer readable medium
CN111709450B (en) * 2020-05-21 2023-05-26 深圳大学 Point cloud normal vector estimation method and system based on multi-scale feature fusion
CN113065014B (en) * 2020-12-05 2021-12-17 林周容 Drop tree body type identification device and method
CN113076389A (en) * 2021-03-16 2021-07-06 百度在线网络技术(北京)有限公司 Article region identification method and device, electronic equipment and readable storage medium
CN113223062A (en) * 2021-06-04 2021-08-06 武汉工控仪器仪表有限公司 Point cloud registration method based on angular point feature point selection and quick descriptor
CN113487713B (en) * 2021-06-16 2022-09-02 中南民族大学 Point cloud feature extraction method and device and electronic equipment
KR20230071866A (en) * 2021-11-16 2023-05-24 한국전자기술연구원 Processing Method for Video-based Point Cloud Data and an electronic device supporting the same
KR20240027182A (en) * 2022-08-22 2024-03-04 한국전자기술연구원 Point cloud encoding system for streaming

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4694404A (en) 1984-01-12 1987-09-15 Key Bank N.A. High-speed image generation of complex solid objects using octree encoding
US5280547A (en) * 1990-06-08 1994-01-18 Xerox Corporation Dense aggregative hierarhical techniques for data analysis
JP3355015B2 (en) * 1994-03-14 2002-12-09 三菱電機株式会社 Image processing method
US5963956A (en) * 1997-02-27 1999-10-05 Telcontar System and method of optimizing database queries in two or more dimensions
JP4148642B2 (en) * 2000-10-26 2008-09-10 株式会社リコー Similar image search device and computer-readable recording medium
JP2006139713A (en) * 2004-11-15 2006-06-01 Fuji Electric Systems Co Ltd 3-dimensional object position detecting apparatus and program
JP6172432B2 (en) * 2013-01-08 2017-08-02 日本電気株式会社 Subject identification device, subject identification method, and subject identification program
US9317529B2 (en) * 2013-08-14 2016-04-19 Oracle International Corporation Memory-efficient spatial histogram construction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *


Also Published As

Publication number Publication date
US9767604B2 (en) 2017-09-19
GB201413245D0 (en) 2014-09-10
JP6091560B2 (en) 2017-03-08
US20160027208A1 (en) 2016-01-28
JP2016031764A (en) 2016-03-07
GB2528669B (en) 2017-05-24

Similar Documents

Publication Publication Date Title
GB2528669A (en) Image Analysis Method
US11703951B1 (en) Gesture recognition systems
CN110689584B (en) Active rigid body pose positioning method in multi-camera environment and related equipment
CN109146948B (en) Crop growth phenotype parameter quantification and yield correlation analysis method based on vision
US6691126B1 (en) Method and apparatus for locating multi-region objects in an image or video database
JP5677798B2 (en) 3D object recognition and position and orientation determination method in 3D scene
Santos et al. 3D plant modeling: localization, mapping and segmentation for plant phenotyping using a single hand-held camera
KR102095842B1 (en) Apparatus for Building Grid Map and Method there of
CN113362382A (en) Three-dimensional reconstruction method and three-dimensional reconstruction device
EP3376433B1 (en) Image processing apparatus, image processing method, and image processing program
Liu et al. Automatic buildings extraction from LiDAR data in urban area by neural oscillator network of visual cortex
Ramiya et al. Object-oriented semantic labelling of spectral–spatial LiDAR point cloud for urban land cover classification and buildings detection
Polewski et al. A voting-based statistical cylinder detection framework applied to fallen tree mapping in terrestrial laser scanning point clouds
JP2016099835A (en) Image processor, image processing method, and program
Pushkar et al. Automated progress monitoring of masonry activity using photogrammetric point cloud
Hafiz et al. Interest point detection in 3D point cloud data using 3D Sobel-Harris operator
CN114641795A (en) Object search device and object search method
CN110458177B (en) Method for acquiring image depth information, image processing device and storage medium
KR102597692B1 (en) Method, apparatus, and computer program for measuring volume of objects by using image
Xu et al. Identification of street trees’ main nonphotosynthetic components from mobile laser scanning data
Alhwarin Fast and robust image feature matching methods for computer vision applications
JP6796850B2 (en) Object detection device, object detection method and object detection program
Xavier Perception System for Forest Cleaning with UGV
Le et al. Geometry-Based 3D Object Fitting and Localizing in Grasping Aid for Visually Impaired
Ebadi et al. Automatic building extraction using a fuzzy active contour model

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20230725