CN104331699B - Method for planarized fast search and comparison of three-dimensional point clouds - Google Patents
Method for planarized fast search and comparison of three-dimensional point clouds
- Publication number
- CN104331699B (grant) · CN201410671969.1A (application)
- Authority
- CN
- China
- Prior art keywords
- image
- point
- points
- dimensional
- point cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
Abstract
The present invention discloses a method for planarized fast search and comparison of three-dimensional point clouds. The method first obtains the point cloud data of an object and applies fairing and simplification to it. A two-dimensional view is then selected as required and the boundary of that view is found; with this boundary as reference, the image is divided into a grid as needed. Each grid cell of the segmented image is traversed and marked according to the point cloud density within it, and the marking result is rendered as a two-dimensional reference image whose feature points reflect the distribution of the point cloud data. A scale-invariant feature transform matching algorithm then compares the feature points of this reference image against the images in a standard library prepared by the same procedure, and traversal finds the group of data with the most matched feature points. The method offers high precision, high speed and high flexibility, and suits various occasions that require building a standard library and rapidly registering point cloud data against the images in that library.
Description
Technical Field
The invention relates to a three-dimensional point cloud processing and matching method, belongs to the field of computer vision and pattern recognition, and particularly relates to a method for planarized fast search and comparison of three-dimensional point clouds.
Background
Three-dimensional point cloud data refers to a set of points sampling the spatial distribution of the morphological structure of an object, obtained by three-dimensional digitization technology. High-precision point cloud data can represent the three-dimensional morphological characteristics of a measured object well, and has important applications in mold and product development for automobiles, hardware and household appliances, aviation, ceramics and the like; in rapid prototyping of antiques, artware, sculptures and portrait products; and in mechanical appearance design, medical cosmetic surgery, human body appearance manufacturing, body shape measurement, plant shape acquisition and the like.
Most traditional three-dimensional shape matching directly compares the point cloud data to be matched with standard point cloud data, analyzing similarity or identity after geometric transformations such as translation, rotation and scaling. This technique is widely used in fields such as medicine, civil engineering and construction, and industrial reverse engineering. It is suitable for precise matching, but only for pairwise individual matching; it is not suitable for matching against the point cloud data in a common standard library, nor for occasions where rapid identification is needed.
Disclosure of Invention
In view of the above, the invention provides a method for planarized rapid search and comparison of three-dimensional point clouds, proposed to overcome the insufficient speed of existing methods. It offers high precision, good speed and high flexibility, and is suitable for various occasions requiring rapid registration of point cloud data against a common standard library.
The technical solution of the invention is as follows: preprocess the three-dimensional point cloud, convert it into a two-dimensional graph, mesh and divide the two-dimensional graph, generate an approximate binary image for matching from the divided graph, and match this approximate binary image against the images in a standard library, so as to find the group of point cloud data in the standard library closest to the three-dimensional point cloud data of the object to be detected. The method specifically comprises the following steps:
step one: obtaining a three-dimensional point cloud of the object to be detected, performing fairing on the point cloud data using a bilateral filtering denoising algorithm, simplifying the faired point cloud data by a random sampling method, performing two-dimensional transformation on the simplified point cloud data while keeping the selected views (front view and side view) consistent with the views adopted when the standard library was built, and generating a dimension-reduced two-dimensional point cloud image;
step two: finding four boundary points of the two-dimensional point cloud image by quicksort, generating the two-dimensional point cloud image boundary from the four boundary points, carrying out meshing segmentation of the image, then retrieving each segmented grid, making a corresponding mark according to the difference in point cloud density between grids, filling corresponding colors according to the marks, and generating an approximate binary image of a corresponding gradient;
step three: comparing the obtained image with the approximate binary images in the standard library using a scale-invariant feature transform matching algorithm, traversing each group of images in the standard library in sequence, comparing the feature point matching of each group, and finding the group with the most matched feature points, thereby completing the comparison.
Further, the two-dimensional transformation in step one is: selecting one of the three views of the three-dimensional image according to the comparison requirement, the adopted view being the same as the view of the images in the standard library, so that accuracy is ensured.
Further, the bilateral filtering denoising algorithm in the first step specifically comprises the following steps: 3.1 establishing a K-neighborhood; 3.2 estimating a normal vector; 3.3 defining a view plane; 3.4, introducing a bilateral filter operator to obtain the coordinates after fairing.
Further, the random sampling method in step one is: constructing a random number generator whose range covers all points of the cloud, generating a series of random numbers, finding the corresponding points in the original point cloud and removing them until the total number of remaining points meets the set requirement.
Further, the boundary points in step two are the extreme points at the top, bottom, left and right of the image; for example, in the front view, the two points with minimum and maximum X values and the two points with minimum and maximum Y values, four coordinate points in total, are found by quicksort.
Further, the gridding segmentation in step two is: dividing the image into n × n (n ∈ R, n > 0) grid cells, taking the boundary determined in the previous step as reference.
Further, the approximate binary image in step two is so named because the color of the image is related to the density marks calibrated in step two, placing the image between a gray image and a binary image; for convenience of expression, the two-dimensional images used for comparison are collectively called approximate binary images. Through the mark density, the image can generate more feature points, improving the accuracy of the method.
Further, generating the approximate binary image of the corresponding gradient in step two specifically comprises: traversing the points in each grid, marking each grid according to the number of points in it, sequentially filling the corresponding grids with the corresponding colors according to the obtained marks, and generating a pixel map of size n × n (n ∈ R, n > 0); to facilitate calculation and ensure that enough feature points can be generated, two colors, black and gray, express different densities, and grids containing no points are expressed in white.
Further, the scale invariant feature transform matching algorithm in step three specifically includes the following steps: 9.1 establishing an image scale space; 9.2 detecting key points; 9.3 distribution of key point direction; 9.4 feature point descriptors; 9.5 adopting an exhaustion method to compare the characteristic points of the two images, and counting the number of the matched characteristic points for searching and comparing.
Compared with the prior art, the invention has the following advantages: (1) low space-time complexity, improving the speed of search and comparison while maintaining high precision; (2) suitability for various occasions requiring quick comparison and identification: it supports pairwise individual matching, and a standard library can also be built for comparison against the data in it; (3) strong anti-interference capability: by modifying the key mapping parameters (the marks of point cloud density in the grids), comparison accuracy is maintained even when the fairing effect is poor; (4) because the algorithm focuses on identification, when building the standard library one may scan only the side with the most distinctive object features, or all sides, adopting different approaches for different requirements; this gives high flexibility, greatly saves storage space, and improves comparison speed.
Drawings
To make the object, technical solution and beneficial effects of the invention clearer, the following drawings are provided for explanation:
FIG. 1 is a flow chart of a method for planar fast search and comparison of three-dimensional point clouds according to the invention
FIG. 2 is a schematic diagram of gridding and segmenting in accordance with the present invention
FIG. 3 is a schematic diagram of an approximate binary image according to the present invention
FIG. 4 shows the result of applying the method to the Stanford Bunny point cloud data
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
FIG. 1 is a flow chart of the method of the present invention, which includes the steps of:
Step one: whether to scan a single field of view or multiple fields of view is determined as required.
Step two: the object to be compared is scanned with a three-dimensional measuring system to obtain its three-dimensional point cloud.
Step three: if multiple fields of view need to be scanned, all single-field three-dimensional point clouds are spliced into the same measurement coordinate system. Common splicing methods, such as mechanical-arm-assisted splicing and attaching marker points, are general methods for three-dimensional point cloud registration.
Step four: the obtained point cloud data c = {p1, p2, …, pn}, pi ∈ R³, is faired using a bilateral filtering denoising algorithm.
Step five: the smoothed three-dimensional point cloud data is simplified by the random sampling method, retaining 40% of the points.
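By way of illustration only, a minimal Python sketch of this random-sampling simplification is given below; the function name simplify_cloud and the use of NumPy are assumptions for illustration and are not part of the patent:

```python
import numpy as np

def simplify_cloud(points: np.ndarray, keep_ratio: float = 0.4) -> np.ndarray:
    """Randomly retain keep_ratio of the smoothed cloud; points is an (N, 3) array."""
    n_keep = int(len(points) * keep_ratio)
    # drawing indices without replacement is equivalent to removing random points
    idx = np.random.choice(len(points), size=n_keep, replace=False)
    return points[idx]
```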
Step six: the preprocessed point cloud data c″ is transformed, as required, into a front view or top view; in this example a front view is used. The Z-axis component of each point in c″ is removed, retaining (X, Y) two-dimensional point data; the transformed point cloud is recorded as c*.
Step seven: by traversing c*, four boundary points are found: the rightmost point of the image, i.e. the point with maximum X; the leftmost point, i.e. the point with minimum X; the topmost point, i.e. the point with maximum Y; and the bottommost point, i.e. the point with minimum Y.
Step eight: taking these four points as the boundary, the image is divided into n × n (n ∈ R, n > 0) grids, each grid having length (Xmax − Xmin)/n and width (Ymax − Ymin)/n, where Xmax denotes the X value of the rightmost boundary point, and so on for the other boundary values.
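A minimal sketch of steps seven and eight, assuming the projected cloud is an (N, 2) NumPy array; the helper name grid_geometry is illustrative:

```python
import numpy as np

def grid_geometry(xy: np.ndarray, n: int):
    """Return the grid origin and cell size derived from the four boundary points."""
    x_min, y_min = xy.min(axis=0)      # leftmost and bottommost boundary values
    x_max, y_max = xy.max(axis=0)      # rightmost and topmost boundary values
    cell_w = (x_max - x_min) / n       # grid length along X
    cell_h = (y_max - y_min) / n       # grid width along Y
    return x_min, y_min, cell_w, cell_h
```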
Step nine: the points in each grid are traversed and each grid is marked according to its point cloud density, as shown in FIG. 2: when the number of points is less than X1, the mark is 0; when the number of points is greater than X1 but less than X2, the mark is 1; when the number of points is greater than X2, the mark is 2; and so on up to Xi. The thresholds X1, X2, …, Xi are set according to the actual situation; in this example only X1 and X2 are set.
Step ten: according to the obtained grids and their marks, an approximate binary image is produced. In this example the grids marked 0 are painted white, the grids marked 1 black, and the grids marked 2 gray; the colors can be freely chosen. The painting effect is as shown in FIGS. 3 and 4.
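Steps nine and ten together might look as follows in a sketch; the threshold parameters x1, x2 and the gray value 128 are illustrative assumptions matching the example above:

```python
import numpy as np

def approximate_binary_image(xy: np.ndarray, n: int, x1: int, x2: int) -> np.ndarray:
    """Count points per cell, mark 0/1/2 by density, paint white/black/gray."""
    x_min, y_min = xy.min(axis=0)
    x_max, y_max = xy.max(axis=0)
    col = np.minimum(((xy[:, 0] - x_min) / (x_max - x_min) * n).astype(int), n - 1)
    row = np.minimum(((xy[:, 1] - y_min) / (y_max - y_min) * n).astype(int), n - 1)
    counts = np.zeros((n, n), dtype=int)
    np.add.at(counts, (row, col), 1)            # point cloud density per grid cell
    img = np.full((n, n), 255, dtype=np.uint8)  # mark 0 (fewer than x1 points): white
    img[counts >= x1] = 0                       # mark 1 (at least x1 points): black
    img[counts >= x2] = 128                     # mark 2 (at least x2 points): gray
    return img
```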
Step eleven: the obtained image is compared with the approximate binary images in the standard library using a scale-invariant feature transform matching algorithm; each group of images in the standard library is traversed in sequence, the feature point matching of each group is compared, and the group with the most matched feature points is found, thereby completing the comparison.
Further, the bilateral filtering denoising algorithm adopted in the fourth step specifically includes the following steps:
4.1: establishing K-neighborhoods
For any measuring point p in the point cloud data c (p ∈ c), the K points closest to p are called the K-neighborhood of p, recorded as N(p). Here k = 25.
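A sketch of the K-neighborhood construction with k = 25; the KD-tree is an assumed implementation choice, as the patent does not prescribe a particular search structure:

```python
import numpy as np
from scipy.spatial import cKDTree

def k_neighborhoods(points: np.ndarray, k: int = 25) -> np.ndarray:
    """Return an (N, k) array of indices: the K-neighborhood N(p) of every point."""
    tree = cKDTree(points)
    # query k + 1 neighbours because each point's nearest neighbour is itself
    _, idx = tree.query(points, k=k + 1)
    return idx[:, 1:]
```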
4.2: estimation of normal vector
A plane is fitted to the N(p) obtained in the previous step by the least squares method; this plane is called the tangent plane of point p over the neighborhood N(p), denoted T(p). The unit normal vector of T(p) is taken as the unit normal vector n at point p.
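A sketch of this tangent-plane fit; using the SVD of the centered neighborhood is an assumed (standard) equivalent of the least-squares plane fit:

```python
import numpy as np

def estimate_normal(neighborhood: np.ndarray) -> np.ndarray:
    """Fit T(p) to the (k, 3) points of N(p) and return its unit normal vector."""
    centered = neighborhood - neighborhood.mean(axis=0)
    # the right-singular vector of the smallest singular value spans the plane normal
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    return normal / np.linalg.norm(normal)
```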
4.3: defining a viewing plane
The space R³ is decomposed into the direct sum of two subspaces: R³ = N ⊕ S², where N is the one-dimensional space along the normal vector direction at point p and S² is the two-dimensional tangent plane space through point p. In the local range, S² is defined as the view plane; analogously to image processing, the projection of a neighborhood point onto the S² plane gives the pixel position, and the distance from the neighborhood point to its projection gives the pixel value.
4.4: bilateral filter operator
The bilateral filter operator is introduced:

d = Σ_{pi∈N(p)} Wc(‖p′ − pi′‖) Ws(⟨n, pi − p⟩) ⟨n, pi − p⟩ / Σ_{pi∈N(p)} Wc(‖p′ − pi′‖) Ws(⟨n, pi − p⟩)   (1)

where N(p) is the neighborhood of p, pi ∈ N(p), and p′, pi′ are the projections of p and pi onto S²; the distance on the projection plane is used here rather than the direct three-dimensional spatial distance. n is the normal vector at point p and ni is the normal vector at pi. Wc and Ws are Gaussian kernel functions with standard deviations σc and σs respectively: σc controls the degree of fairing and σs controls the degree of feature retention; Wc is the spatial-domain weight and Ws is the feature-domain weight. d is the adjustment distance along the normal direction, and the faired coordinates ĉ are obtained from p̂ = p + d·n.
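A minimal sketch of this operator for a single point p, following formula (1) as reconstructed above; the sigma values and helper names are assumptions:

```python
import numpy as np

def bilateral_offset(p, normal, neighbors, sigma_c, sigma_s):
    """Return the faired point p + d * n for one point and its neighborhood N(p)."""
    diff = neighbors - p                                  # p_i - p
    d_perp = diff @ normal                                # <n, p_i - p>: feature distance
    tangential = diff - np.outer(d_perp, normal)
    d_tan = np.linalg.norm(tangential, axis=1)            # distance on the projection plane S2
    w_c = np.exp(-d_tan ** 2 / (2 * sigma_c ** 2))        # spatial-domain weight W_c
    w_s = np.exp(-d_perp ** 2 / (2 * sigma_s ** 2))       # feature-domain weight W_s
    d = np.sum(w_c * w_s * d_perp) / np.sum(w_c * w_s)    # adjustment along the normal
    return p + d * normal
```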
Further, the scale invariant feature transform matching algorithm adopted in the step eleven specifically includes the following steps:
11.1: An image scale space (a Gaussian pyramid) is established and extreme points are detected; the "points" here and below refer to pixels in the image.
The algorithm uses a Gaussian function to establish the scale space; the Gaussian function is:

G(x, y, σ) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²))   (2)

where G(x, y, σ) is a scale-variable Gaussian function.
The scale space of an image, L(x, y, σ), is defined as the convolution of the variable-scale Gaussian function G(x, y, σ) with the original image I(x, y):

L(x, y, σ) = G(x, y, σ) * I(x, y)   (3)
In implementation, the scale space is represented by a Gaussian pyramid. The pyramid model of an image is formed by repeatedly down-sampling the original image, giving a series of images of decreasing size arranged from the largest at the bottom to the smallest at the top. The original image is the first layer of the pyramid, and each new image obtained by down-sampling forms one layer (one image per layer), with n layers in total. The number of pyramid layers is determined by the original image size and the size of the tower-top image, calculated as:
n = log₂{min(M, N)} − t, t ∈ [0, log₂{min(M, N)}]   (4)

where M and N are the dimensions of the original image and t is the base-2 logarithm of the minimum dimension of the tower-top image.
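A worked instance of formula (4), assuming a 512 × 512 image and an 8 × 8 tower-top image (so t = 3); the concrete sizes are illustrative:

```python
import math

M, N, t = 512, 512, 3
n_layers = int(math.log2(min(M, N))) - t  # 9 - 3 = 6 pyramid layers
print(n_layers)                           # 6
```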
After the scale space is established, in order to find stable feature points, a difference-of-Gaussian method is used to detect local extreme points, i.e. images at two adjacent scales are subtracted, defined as:

D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ)   (5)
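A sketch of formula (5), computing one difference-of-Gaussian layer; the use of OpenCV's GaussianBlur is an assumed implementation choice:

```python
import cv2
import numpy as np

def difference_of_gaussian(image: np.ndarray, sigma: float, k: float = 2 ** 0.5) -> np.ndarray:
    """D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma)."""
    l_low = cv2.GaussianBlur(image, (0, 0), sigmaX=sigma)       # L(x, y, sigma)
    l_high = cv2.GaussianBlur(image, (0, 0), sigmaX=k * sigma)  # L(x, y, k*sigma)
    return l_high.astype(np.float32) - l_low.astype(np.float32)
```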
11.2: detecting key points
To find the extreme points of the scale space, each sample point is compared with all of its neighbors to check whether it is larger or smaller than its neighbors in both the image domain and the scale domain. Because comparison with adjacent scales is needed, only extreme points at two scales can be detected within one group of difference-of-Gaussian images; detection at the remaining scales must be performed in the difference-of-Gaussian images of the next layer of the image pyramid, so that extrema at different scales are detected layer by layer across the pyramid.
11.3: distribution of key point directions
In order to make the descriptor rotation-invariant, a direction must be assigned to each feature point using local image features. Using the gradient and direction distribution of the pixels in the neighborhood of a keypoint, the gradient magnitude and direction are obtained as:

m(x, y) = √( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² )   (6)

θ(x, y) = arctan( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) )   (7)

where L is evaluated at the scale of each feature point.
Sampling is carried out in a neighborhood window with the key point as the center, and the gradient direction of neighborhood pixels is counted by using a histogram. The gradient histogram ranges from 0 to 360 degrees, with one direction every 10 degrees, for a total of 36 directions.
The peak of the histogram represents the main direction of the neighborhood gradient at the keypoint, which is taken as the keypoint's direction.
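A sketch of the 36-bin orientation histogram (one bin per 10 degrees); weighting the bins by gradient magnitude follows standard SIFT practice and is an assumption here:

```python
import numpy as np

def main_direction(magnitudes: np.ndarray, angles_deg: np.ndarray) -> int:
    """Return the main direction (in degrees) of a keypoint's neighbourhood window."""
    hist, _ = np.histogram(angles_deg, bins=36, range=(0.0, 360.0), weights=magnitudes)
    return int(np.argmax(hist)) * 10   # peak bin gives the main gradient direction
```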
11.4: feature point descriptor
Through the above-described calculation, three pieces of information have been given to each feature point: location, scale, and orientation. After that, a descriptor can be established for each feature point, and finally a feature vector is formed, wherein the feature vector at this time has scale invariance and rotation invariance.
11.5: Feature points of the two images are compared by exhaustive search: for each feature point in the image to be detected, the two feature points with the nearest Euclidean distances are found in the standard image; if the nearest distance divided by the second-nearest distance is less than a ratio threshold, the pair of matching points is accepted. The threshold is generally between 0.4 and 0.6.
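A minimal sketch of step 11.5 using OpenCV's SIFT and a brute-force 2-nearest-neighbour search; the ratio value 0.5 is one choice within the stated 0.4 to 0.6 range:

```python
import cv2

def count_matches(img_query, img_standard, ratio: float = 0.5) -> int:
    """Count feature point pairs passing the nearest/second-nearest ratio test."""
    sift = cv2.SIFT_create()
    _, des_q = sift.detectAndCompute(img_query, None)
    _, des_s = sift.detectAndCompute(img_standard, None)
    if des_q is None or des_s is None:
        return 0
    pairs = cv2.BFMatcher().knnMatch(des_q, des_s, k=2)
    return sum(1 for pair in pairs
               if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)
```

The library-wide search of step eleven then reduces to evaluating this count against every image in the standard library and keeping the group with the maximum.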
As described above, the three-dimensional point cloud data is converted into two-dimensional plane data, reducing comparison between complex three-dimensional point clouds to image matching and greatly shortening comparison time. Gridded image processing also greatly relaxes the fairing requirements on the three-dimensional point cloud. The method suits various occasions requiring rapid scanning and rapid comparison: the standard library only needs to be built by the same procedure before comparison; the number of grid divisions and the displayed colors can be changed at any time as needed; and data in the library can be added or deleted at any time, giving high flexibility and strong extensibility. The scale-invariant feature transform matching algorithm guarantees rotation and scale invariance of the matching, ensuring comparison accuracy.
Finally, the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention, and all of them should be covered in the claims of the present invention.
Claims (8)
1. A method for planar rapid search and comparison of three-dimensional point cloud is characterized by comprising the following steps:
step one: obtaining a three-dimensional point cloud of an object to be detected, performing fairing on the point cloud data using a bilateral filtering denoising algorithm, simplifying the faired point cloud data by a random sampling method, performing two-dimensional transformation on the simplified point cloud data while keeping the selected view consistent with the view adopted when the standard library was built, and generating a dimension-reduced two-dimensional point cloud image;
step two: finding four boundary points of the two-dimensional point cloud image by quicksort, generating the two-dimensional point cloud image boundary from the four boundary points, carrying out meshing segmentation of the image, then retrieving each segmented grid, making a corresponding mark according to the difference in point cloud density between grids, filling corresponding colors according to the marks, and generating an approximate binary image of a corresponding gradient;
step three: comparing the obtained image with the approximate binary images in a standard library using a scale-invariant feature transform matching algorithm, traversing each group of images in the standard library in sequence, comparing the feature point matching of each group, and finding the group with the most matched feature points, thereby completing the comparison;
the specific steps of the bilateral filtering denoising algorithm in the step one are
3.1: establishing K-neighborhoods
For any measuring point p in the point cloud data c (p ∈ c), the K points closest to p are called the K-neighborhood of p and recorded as N(p);
3.2: estimation of normal vector
A plane is fitted to N(p) by the least squares method; this plane is called the tangent plane of point p over the neighborhood N(p), denoted T(p); the unit normal vector of T(p) is taken as the unit normal vector at point p;
3.3: defining a view plane: the space R³ is decomposed into the direct sum of two subspaces, R³ = N ⊕ S², where N is the one-dimensional space along the normal vector direction at point p and S² is the two-dimensional tangent plane space through point p; in the local range, S² is defined as the view plane, the projection of a neighborhood point onto the S² plane gives the pixel position, and the distance from the neighborhood point to its projection is defined as the pixel value;
3.4: bilateral filter operator
The bilateral filter operator is introduced:
d = Σ_{pi∈N(p)} Wc(‖p′ − pi′‖) Ws(⟨n, pi − p⟩) ⟨n, pi − p⟩ / Σ_{pi∈N(p)} Wc(‖p′ − pi′‖) Ws(⟨n, pi − p⟩)
where N(p) is the neighborhood of p, pi ∈ N(p), and p′ is the projection of p onto S²; n is the normal vector at point p and ni is the normal vector at pi; Wc and Ws are Gaussian kernel functions with standard deviations σc and σs respectively, σc controlling the degree of fairing and σs controlling the degree of feature retention; Wc is the spatial-domain weight and Ws is the feature-domain weight; d is the adjustment distance along the normal direction, and the faired coordinates are obtained from p̂ = p + d·n.
2. The method of claim 1, wherein the two-dimensional transformation in step one is: selecting one of the three views of the three-dimensional image according to the comparison requirement, the adopted view being the same as the view of the images in the standard library, so that accuracy is ensured.
3. The method of claim 1, wherein the random sampling method in step one is: constructing a random number generator whose range covers all points of the cloud, generating a series of random numbers, finding the corresponding points in the original point cloud and removing them until the total number of remaining points meets the set requirement.
4. The method of claim 1, wherein the boundary points in step two are the extreme points at the top, bottom, left and right of the image; for example, in the front view, the two points with minimum and maximum X values and the two points with minimum and maximum Y values, four coordinate points in total, are found by quicksort.
5. The method of claim 1, wherein the gridding segmentation in step two is: dividing the image into n × n (n ∈ R, n > 0) grid cells with the boundary determined in step two as reference.
6. The method of claim 1, wherein the approximate binary image in step two is so named because the color of the image is related to the density marks calibrated in step two, placing the image between a gray image and a binary image; for convenience of expression, the two-dimensional images used for comparison are collectively called approximate binary images; through the mark density, the image can generate more feature points, improving the accuracy of the method.
7. The method of claim 1, wherein generating the approximate binary image of the corresponding gradient in step two specifically comprises: traversing the points in each grid, marking each grid according to the number of points in it, sequentially filling the corresponding grids with the corresponding colors according to the obtained marks, and generating a pixel map of size n × n (n ∈ R, n > 0); to facilitate calculation and ensure that enough feature points can be generated, two colors, black and gray, express different densities, and grids containing no points are expressed in white.
8. The method of claim 1, wherein the method comprises: the scale invariant feature transform matching algorithm in the third step specifically comprises the following steps: 9.1 establishing an image scale space; 9.2 detecting key points; 9.3 distribution of key point direction; 9.4 feature point descriptors; 9.5 adopting an exhaustion method to compare the characteristic points of the two images, and counting the number of the matched characteristic points for searching and comparing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410671969.1A CN104331699B (en) | 2014-11-19 | 2014-11-19 | A kind of method that three-dimensional point cloud planarization fast search compares |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410671969.1A CN104331699B (en) | 2014-11-19 | 2014-11-19 | A kind of method that three-dimensional point cloud planarization fast search compares |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104331699A CN104331699A (en) | 2015-02-04 |
CN104331699B true CN104331699B (en) | 2017-11-14 |
Family
ID=52406421
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410671969.1A Active CN104331699B (en) | 2014-11-19 | 2014-11-19 | A kind of method that three-dimensional point cloud planarization fast search compares |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104331699B (en) |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105404898B (en) * | 2015-11-26 | 2018-11-06 | 福州华鹰重工机械有限公司 | A kind of loose type point cloud data segmentation method and equipment |
CN106251353A (en) * | 2016-08-01 | 2016-12-21 | 上海交通大学 | Weak texture workpiece and the recognition detection method and system of three-dimensional pose thereof |
CN107590829B (en) * | 2017-09-18 | 2020-06-30 | 西安电子科技大学 | Seed point picking method suitable for multi-view dense point cloud data registration |
CN108109150B (en) * | 2017-12-15 | 2021-02-05 | 上海兴芯微电子科技有限公司 | Image segmentation method and terminal |
CN108466265B (en) * | 2018-03-12 | 2020-08-07 | 珠海市万瑙特健康科技有限公司 | Mechanical arm path planning and operation method, device and computer equipment |
CN108961419B (en) * | 2018-06-15 | 2023-06-06 | 重庆大学 | Microscopic visual field space digitizing method and system for microscopic visual system of micro assembly system |
CN108986162B (en) * | 2018-06-28 | 2022-02-22 | 杭州吉吉知识产权运营有限公司 | Dish and background segmentation method based on inertial measurement unit and visual information |
CN109118500B (en) * | 2018-07-16 | 2022-05-10 | 重庆大学产业技术研究院 | Image-based three-dimensional laser scanning point cloud data segmentation method |
CN109840882B (en) * | 2018-12-24 | 2021-05-28 | 中国农业大学 | Station matching method and device based on point cloud data |
CN109767464B (en) * | 2019-01-11 | 2023-03-28 | 西南交通大学 | Point cloud registration method with low overlapping rate |
CN109978885B (en) * | 2019-03-15 | 2022-09-13 | 广西师范大学 | Tree three-dimensional point cloud segmentation method and system |
CN110458805B (en) * | 2019-03-26 | 2022-05-13 | 华为技术有限公司 | Plane detection method, computing device and circuit system |
CN110555824B (en) * | 2019-07-22 | 2022-07-08 | 深圳供电局有限公司 | Switch position judging method and control method of switch position detection system |
CN111091594B (en) * | 2019-10-17 | 2023-04-11 | 如你所视(北京)科技有限公司 | Multi-point cloud plane fusion method and device |
CN111445385B (en) * | 2020-03-28 | 2023-06-09 | 哈尔滨工程大学 | Three-dimensional object planarization method based on RGB color mode |
CN112287481B (en) * | 2020-10-27 | 2023-11-21 | 上海设序科技有限公司 | Mechanical design scheme searching method and device based on three-dimensional point cloud |
CN113362461B (en) * | 2021-06-18 | 2024-04-02 | 盎锐(杭州)信息科技有限公司 | Point cloud matching method and system based on semantic segmentation and scanning terminal |
CN113658238B (en) * | 2021-08-23 | 2023-08-08 | 重庆大学 | Near infrared vein image high-precision matching method based on improved feature detection |
CN115641553B (en) * | 2022-12-26 | 2023-03-10 | 太原理工大学 | Online detection device and method for invaders in heading machine working environment |
CN118135116B (en) * | 2024-04-30 | 2024-07-23 | 壹仟零壹艺数字科技(合肥)有限公司 | Automatic generation method and system based on CAD two-dimensional conversion three-dimensional entity |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103268363A (en) * | 2013-06-06 | 2013-08-28 | 哈尔滨工业大学 | Elastic HOG (histograms of oriented gradient) feature-based Chinese calligraphy image retrieval method matched with DDTW (Derivative dynamic time wrapping) |
CN104112115A (en) * | 2014-05-14 | 2014-10-22 | 南京国安光电科技有限公司 | Three-dimensional face detection and identification technology |
CN104007444A (en) * | 2014-06-09 | 2014-08-27 | 北京建筑大学 | Ground laser radar reflection intensity image generation method based on central projection |
Non-Patent Citations (2)
Title |
---|
Research on Technologies of Three-dimensional Point Cloud Data Processing (《三维点云数据处理的技术研究》); Wang Lihui; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2012-01-15; I138-46 *
Nearest Neighbor Point Search Method Based on Grid Division (《格网划分的最邻近点搜索方法》); Yang Ronghao et al.; Science of Surveying and Mapping (《测绘科学》); 2012-09-30; Vol. 37, No. 5; pp. 90-93 *
Also Published As
Publication number | Publication date |
---|---|
CN104331699A (en) | 2015-02-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104331699B (en) | A kind of method that three-dimensional point cloud planarization fast search compares | |
CN109655019B (en) | Cargo volume measurement method based on deep learning and three-dimensional reconstruction | |
Yang et al. | An individual tree segmentation method based on watershed algorithm and three-dimensional spatial distribution analysis from airborne LiDAR point clouds | |
CN106709947B (en) | Three-dimensional human body rapid modeling system based on RGBD camera | |
CN107093205B (en) | A kind of three-dimensional space building window detection method for reconstructing based on unmanned plane image | |
CN108549873A (en) | Three-dimensional face identification method and three-dimensional face recognition system | |
CN104867126B (en) | Based on point to constraint and the diameter radar image method for registering for changing region of network of triangle | |
CN106127791B (en) | A kind of contour of building line drawing method of aviation remote sensing image | |
CN107248159A (en) | A kind of metal works defect inspection method based on binocular vision | |
CN109977997B (en) | Image target detection and segmentation method based on convolutional neural network rapid robustness | |
CN111640158B (en) | End-to-end camera and laser radar external parameter calibration method based on corresponding mask | |
CN108921895B (en) | Sensor relative pose estimation method | |
CN111696210A (en) | Point cloud reconstruction method and system based on three-dimensional point cloud data characteristic lightweight | |
CN110610505A (en) | Image segmentation method fusing depth and color information | |
CN113470090A (en) | Multi-solid-state laser radar external reference calibration method based on SIFT-SHOT characteristics | |
CN112734844B (en) | Monocular 6D pose estimation method based on octahedron | |
CN108182705A (en) | A kind of three-dimensional coordinate localization method based on machine vision | |
CN112489099A (en) | Point cloud registration method and device, storage medium and electronic equipment | |
CN106295657A (en) | A kind of method extracting human height's feature during video data structure | |
CN116958420A (en) | High-precision modeling method for three-dimensional face of digital human teacher | |
Tong et al. | 3D point cloud initial registration using surface curvature and SURF matching | |
CN109345570A (en) | A kind of multichannel three-dimensional colour point clouds method for registering based on geometry | |
CN106373177A (en) | Design method used for optimizing image scene illumination estimation | |
Liu et al. | Robust 3-d object recognition via view-specific constraint | |
CN114241150B (en) | Water area data preprocessing method in oblique photography modeling |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |