CN108876831A - A building three-dimensional point cloud registration method based on deep learning - Google Patents

A building three-dimensional point cloud registration method based on deep learning

Info

Publication number
CN108876831A
Authority
CN
China
Prior art keywords
point
point cloud
cloud
key
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201810584888.6A
Other languages
Chinese (zh)
Inventor
梅少辉
魏江
田仲祺
黄杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201810584888.6A priority Critical patent/CN108876831A/en
Publication of CN108876831A publication Critical patent/CN108876831A/en
Withdrawn legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images

Abstract

The invention discloses a building three-dimensional point cloud registration method based on deep learning, comprising the following steps: S1, acquisition of training data; S2, construction of the deep learning models; S3, model training and tuning; S4, preprocessing of the data to be tested; S5, detection of key areas in the test point cloud; S6, detection of key points in the test point cloud; S7, judgement of key point correspondences; S8, computation of the transformation and registration. The invention applies deep learning to the two critical steps of point cloud registration: finding key points and determining matching relationships. By making full use of the ability of deep learning to characterise point cloud data, key areas are found first and key points are then located step by step, which greatly accelerates detection compared with traditional traversal, and point cloud features learned by the deep network replace hand-crafted features, making the algorithm more robust and efficient.

Description

A building three-dimensional point cloud registration method based on deep learning
Technical field
The present invention relates to the fields of urban three-dimensional reconstruction and autonomous driving, and in particular to a building three-dimensional point cloud registration method based on deep learning.
Background art
The demand for three-dimensional reconstruction is widespread in industrial production, daily life and scientific research, for example AR/VR, 3D printing, industrial automation, the protection of cultural heritage, and medical applications of computer technology. The precision of three-dimensional scanners is now higher and higher, and obtaining fine, city-scale data has become easy; some developed countries have already begun constructing smart cities, and urban three-dimensional reconstruction is an important component of a smart city, so research on point cloud registration methods has become a key topic in the field of three-dimensional reconstruction.
In traditional point-based registration methods, key points must be found according to manually designed rules: all points are traversed in turn, a response is computed for each, and key points are finally selected by thresholding. Such methods cannot take all the characteristics of a point's neighbourhood into account, so feature extraction is incomplete, and the traversal is computationally expensive, especially when processing the large point clouds of outdoor buildings. Deep learning is developing rapidly and has brought major breakthroughs and rich results to many fields, with particularly notable progress in image-related research, but research and application on three-dimensional data are still in their infancy. In three-dimensional point cloud applications in particular, the usual approach is to preprocess the cloud by projecting it to lower dimensions or partitioning it into a regular structure, which causes a large loss of information.
Since lidar has moved from military use into industry and research and its cost has gradually fallen, obtaining large amounts of point cloud data has become feasible, which also facilitates the training of deep learning models. How to use large amounts of point cloud data to train deep learning models and to use the perception ability of the models to extract key points and describe point cloud features accurately and efficiently has become an urgent task.
Summary of the invention
The purpose of the present invention is to provide a building three-dimensional point cloud registration method based on deep learning that can effectively improve registration efficiency and accuracy.
To achieve the above object, the present invention adopts the following technical scheme:
A building three-dimensional point cloud registration method based on deep learning, comprising the following steps:
S1, acquisition of training samples: obtain the sample data used to train the models;
S2, construction of the deep learning models, so that the models are suited to processing point cloud data;
S3, adjustment of hyperparameters according to the feedback from model training and testing;
S4, preprocessing of the point cloud: partition the point cloud to be detected into blocks that serve as model inputs;
S5, for the input point cloud blocks, detect whether they contain a key area, and keep the blocks that contain a key area;
S6, detect the key points inside the point cloud blocks that contain a key area;
S7, feed pairs of key points located in different clouds into the key point matching model and judge whether they match;
S8, compute candidate transformations from sufficient numbers of corresponding key points and select, from the resulting transformation matrices, the one with the best registration result as the final result.
Step S1 specifically includes the following sub-steps:
S11, intercept points located at positions such as door frame corners, window frame corners and wall corners as key points;
S12, take the points contained in a sphere of radius r centred on a key point, then, with each of these points as the centre, cut out the point cloud block inside a sphere of radius r as a positive sample for the key area detection model, so that every positive sample contains a key point; in regions of the cloud that contain no key point, cut out spheres of the same size as negative samples for the key area detection model;
S13, cut out spherical point cloud blocks centred on key points as positive samples for the key point detection model, and spherical point cloud blocks centred on non-key points as negative samples for the key point detection model; a negative sample may or may not contain a key point;
S14, extract positive samples for the key point matching model from the two point clouds P and Q to be registered: take a point in cloud P, then search cloud Q for its nearest point as the corresponding point; if the distance to the nearest point exceeds a set threshold, the point has no correspondence. Extract negative samples from the two point clouds P and Q: take a point in cloud P, then take points in cloud Q farther from it than the set threshold as non-corresponding points;
S15, use the pairs of spherical point cloud blocks cut out around corresponding points as positive samples of the key point matching model, and the pairs of spherical point cloud blocks cut out around non-corresponding points as negative samples of the key point matching model (a minimal sketch of this nearest-neighbour pairing follows this list).
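The following is a minimal sketch of this nearest-neighbour pairing, assuming the two clouds P and Q are given as (N, 3) NumPy arrays; the function name, the use of a k-d tree and the 0.05 m threshold are illustrative choices rather than values fixed by the method.

    import numpy as np
    from scipy.spatial import cKDTree

    def extract_correspondence_pairs(P, Q, dist_thresh=0.05, rng=None):
        """Return positive pairs (i, j) where Q[j] is the nearest neighbour of P[i]
        within dist_thresh, and negative pairs whose distance exceeds the threshold."""
        rng = np.random.default_rng() if rng is None else rng
        tree = cKDTree(Q)
        dists, idx = tree.query(P, k=1)            # nearest point in Q for each point of P
        positives = [(i, int(j)) for i, (d, j) in enumerate(zip(dists, idx)) if d <= dist_thresh]
        negatives = []
        for i in range(len(P)):                    # a random far-away point serves as a non-correspondence
            j = int(rng.integers(len(Q)))
            if np.linalg.norm(P[i] - Q[j]) > dist_thresh:
                negatives.append((i, j))
        return positives, negatives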
Step S2 specifically includes the following sub-steps:
S21, build a point cloud classification model based on the PointNet network structure, so that the model acts directly, in the form of convolutions, on the coordinate data of the three-dimensional points, and the extracted features are finally mapped through fully connected layers (a minimal sketch follows this list);
S22, build a Siamese (twin) network model based on the PointNet network structure, so that the branches of the model act on different clouds respectively, and the fully connected layers mapping the outputs judge whether two point cloud blocks correspond.
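A minimal PyTorch sketch of the PointNet-style classifier of S21, assuming an input block of N points arranged as a (batch, 3, N) tensor; the layer widths and class count are illustrative, not the ones prescribed by the method.

    import torch
    import torch.nn as nn

    class PointBlockClassifier(nn.Module):
        def __init__(self, num_classes=2):
            super().__init__()
            # per-point shared MLP implemented as 1x1 convolutions over the point axis
            self.point_mlp = nn.Sequential(
                nn.Conv1d(3, 64, 1), nn.ReLU(),
                nn.Conv1d(64, 128, 1), nn.ReLU(),
                nn.Conv1d(128, 1024, 1), nn.ReLU(),
            )
            # fully connected head mapping the global feature to class scores
            self.head = nn.Sequential(
                nn.Linear(1024, 256), nn.ReLU(),
                nn.Linear(256, num_classes),
            )

        def forward(self, x):                 # x: (batch, 3, N)
            feat = self.point_mlp(x)          # (batch, 1024, N)
            feat = torch.max(feat, dim=2)[0]  # symmetric max pooling over the points
            return self.head(feat)            # (batch, num_classes)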
In step S3, the hyperparameters of the input data are adjusted according to the specific behaviour of model training; the hyperparameters include the learning step, the learning rate and the batch size.
Step S4 specifically includes the following sub-steps:
S41, perform farthest point sampling on the cloud to be processed (sketched after this list);
S42, with each sampled point as the centre, cut out the point cloud block inside a sphere of a certain radius as the input of the key area detection model.
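A small NumPy sketch of the farthest point sampling of S41, assuming the cloud is an (N, 3) array; the number of samples is left as a parameter.

    import numpy as np

    def farthest_point_sampling(points, num_samples):
        """Greedily pick num_samples points so that each new point is the one
        farthest from the already selected set."""
        n = points.shape[0]
        selected = np.zeros(num_samples, dtype=int)
        dist_to_set = np.full(n, np.inf)
        selected[0] = np.random.randint(n)            # arbitrary seed point
        for k in range(1, num_samples):
            d = np.linalg.norm(points - points[selected[k - 1]], axis=1)
            dist_to_set = np.minimum(dist_to_set, d)  # distance to the nearest selected point
            selected[k] = int(np.argmax(dist_to_set)) # farthest remaining point
        return points[selected]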
Step S6 specifically includes the following sub-steps:
S61, take points inside the point cloud blocks that contain a key area, and with each of these points as the centre cut out a sphere of small radius;
S62, feed the spheres obtained in step S61 into the key point discrimination model and keep the points judged to be key points.
In step S8, a RANSAC strategy is adopted: each time, a sufficient number of corresponding key points is taken and SVD is used to compute a transformation; repeating this yields a large set of candidate transformations, from which the one with the best match is selected. The registration quality is judged by the LCP value, expressed as

LCP(P, Q) = |{ q_i ∈ Q : min_{p_j ∈ P} ||R·p_j + T − q_i|| < ε }| / |Q|

where p_i denotes a point in cloud P, q_i a point in cloud Q, R and T denote the rotation matrix and the translation vector respectively, and ε is set to 0.05 m. The formula expresses the largest common point set: after registering clouds P and Q, for each point q_i taken in Q, the transformed cloud P is searched for a point closer than ε to q_i; if such a point exists, it is a common point and the LCP value increases accordingly. The transformation with the highest LCP value is taken as the final result.
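A minimal sketch of the SVD (Kabsch) solution for the rigid transform used inside each RANSAC iteration of S8, assuming the matched key points are given as two aligned (K, 3) arrays; the function name is an assumption.

    import numpy as np

    def rigid_transform_svd(src, dst):
        """Return R, T minimising sum ||R @ src_i + T - dst_i||^2."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)           # 3x3 cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                      # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        T = dst_c - R @ src_c
        return R, T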
Compared with the background art, the present invention, by adopting the above technical scheme, has the following advantages:
1. The application combines the powerful learning ability of deep neural networks with the PointNet network structure on unordered clouds; the trained models can identify the regions that contain key points, which greatly reduces the search range for finding key points.
2. Local point cloud regions are perceived with deep network models and regional features are extracted, instead of extracting features with manually designed rules, so the structural information of the point cloud can be fully perceived and extracted and the features describe the regions more accurately.
3. A Siamese network model trained with a large amount of data is used to infer the matching relationship, so the model learns the correspondence rules from the samples, eliminating manually set distance thresholds in feature space and making the matching more objective and credible.
Brief description of the drawings
Fig. 1 is a schematic diagram of the workflow of the invention.
Fig. 2 illustrates how positive samples for the key area detection model are chosen.
Fig. 3 shows the main structure of the models involved in the method.
Fig. 4 shows the Siamese network model used to judge key point correspondences.
Fig. 5 shows an example of the distribution of farthest point sampling results.
Fig. 6 shows key point detection results.
Fig. 7 shows an example of the registration result of two point clouds.
Specific embodiment
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only serve to illustrate the present invention and are not intended to limit it.
Embodiment
Referring to Fig. 1, the invention discloses a building three-dimensional point cloud registration method based on deep learning, which comprises the following steps:
S1, acquisition of training samples: obtain the sample data used to train the models;
The point cloud data used by this method only needs to contain three-dimensional coordinates and no other information. Since the models are meant to perceive local geometric structure, the training data for a given model is normalised into spheres of the same size with their centres aligned; zero-mean Gaussian noise is added during training, and data augmentation such as random rotation about the vertical axis is also applied.
S11, intercept points located at positions such as door frame corners, window frame corners and wall corners as key points; since the points scanned by the lidar are not necessarily exactly at the corner vertices, points within a small range (d < 0.01 m) around the corner structure are treated as key points;
S12, take the points contained in the sphere of radius r (r = 0.5 m) centred on a key point (the region covered by the dark circles in Fig. 2), then, with each of these points as the centre, cut out the point cloud block inside a sphere of radius r as a positive sample for the key area detection model (the region covered by the light circles in Fig. 2); every such positive sample contains a key point. In regions of the cloud that contain no key point, cut out spheres of the same size as negative samples for the key area detection model; these negative samples contain no key point;
S13, cut out spherical point cloud blocks centred on key points as positive samples for the key point detection model, and spherical blocks centred on non-key points as negative samples; a negative sample may or may not contain a key point. Since, when judging key points, the cases encountered mostly lie near key areas, more negative samples need to be taken near key areas;
S14, extract samples for the key point matching model from the two point clouds P and Q to be registered (the key points in the clouds used to make the samples are known). Positive samples: take a point in cloud P (key point or non-key point), then search cloud Q for its nearest point (key point or non-key point) as the corresponding point (if the distance to the nearest point exceeds the set threshold, there is no corresponding point). Negative samples: take a point in cloud P (key point or non-key point), then take points in cloud Q farther from it than the set threshold as non-corresponding points. In practice, the correspondences to be judged are mostly between key points in clouds P and Q, so pairs formed by key points should make up the majority of both the positive and the negative samples.
S15, use the pairs of spherical point cloud blocks cut out around corresponding points as positive samples of the key point matching model; among the positive samples, the proportion centred on key points is relatively high and the proportion centred on non-key points is lower. Use the pairs of spherical blocks cut out around non-corresponding points as negative samples of the key point matching model.
S2, build the deep learning models so that they are suited to processing point cloud data;
The main part of the model is shown in Fig. 3. The model acts on the three-dimensional coordinates of the points in the form of convolutions, but because a point cloud is unordered, the neighbourhood relations in three-dimensional space do not correspond to neighbourhood relations in the storage order of the points, so each convolution kernel acts only on a single point. Feature extraction finally uses max pooling to remove the influence of the ordering of the points in the cloud. The formula is shown schematically as follows:
f({x_1, …, x_n}) ≈ g(h(x_1), …, h(x_n))
where x_k (k = 1, …, n) are the three-dimensional points, g is a symmetric function and h is a multi-layer perceptron, implemented here in the form of convolutions; the symmetric function g is realised by max(·).
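A toy check of this permutation invariance, with a random linear map standing in for the per-point mapping h; purely illustrative.

    import torch

    points = torch.randn(5, 3)                       # five 3-D points in an arbitrary order
    h = torch.nn.Linear(3, 8)                        # a stand-in for the per-point MLP h
    perm = torch.randperm(5)
    g1 = torch.max(h(points), dim=0)[0]              # g = max over points
    g2 = torch.max(h(points[perm]), dim=0)[0]
    assert torch.allclose(g1, g2)                    # identical for the permuted cloud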
S21, build a point cloud classification model based on the PointNet network structure so that the model acts directly, in the form of convolutions, on the coordinate data of the three-dimensional points, the extracted features are finally mapped through fully connected layers, and the classification result is obtained. This model is used both to detect key areas and to detect key points;
S22, build a Siamese (twin) network model based on the PointNet network structure, as shown in Fig. 4, so that the branches of the model act on different point cloud blocks respectively, and the fully connected mapping of the outputs gives the probability that the two blocks match. This model is used to judge the matching relationship between key points (a sketch of such a twin model follows).
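A PyTorch sketch of such a twin model, assuming a shared PointNet-style encoder (per-point shared MLP plus max pooling) and a sigmoid output for the matching probability; the layer widths are illustrative.

    import torch
    import torch.nn as nn

    class SiameseMatcher(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(             # shared branch applied to both blocks
                nn.Conv1d(3, 64, 1), nn.ReLU(),
                nn.Conv1d(64, 256, 1), nn.ReLU(),
            )
            self.head = nn.Sequential(                # fully connected comparison head
                nn.Linear(512, 128), nn.ReLU(),
                nn.Linear(128, 1), nn.Sigmoid(),
            )

        def encode(self, x):                          # x: (batch, 3, N)
            return torch.max(self.encoder(x), dim=2)[0]    # global feature (batch, 256)

        def forward(self, block_p, block_q):
            fp, fq = self.encode(block_p), self.encode(block_q)
            return self.head(torch.cat([fp, fq], dim=1))   # matching probability (batch, 1)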
S3, adjust the hyperparameters according to the feedback from model training and testing;
According to the specific behaviour of model training, hyperparameters of the input data such as the learning step, the learning rate and the batch size are adjusted.
S4, preprocess the point cloud: partition the point cloud to be detected into blocks that serve as model inputs;
S41, perform farthest point sampling (Farthest Point Sampling) on the cloud to be processed, see Fig. 5, where the larger black points and the smaller light points are the sampled points.
S42, with each sampled point as the centre, cut out the point cloud block inside the sphere of radius r = 0.5 m as the input of the key area detection model (a sketch of this block extraction follows).
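A sketch of this block extraction, assuming the cloud and the sampled centres are (N, 3) and (M, 3) NumPy arrays; r = 0.5 m follows the value quoted above.

    import numpy as np
    from scipy.spatial import cKDTree

    def extract_blocks(points, centers, radius=0.5):
        """Return, for every sampled centre, the sub-cloud inside the sphere of the
        given radius; these blocks are the inputs of the key area detection model."""
        tree = cKDTree(points)
        return [points[tree.query_ball_point(c, radius)] for c in centers]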
S5, for the input point cloud blocks, detect whether they contain a key area (a region containing a key point), and keep the blocks that contain a key area. In Fig. 5, the spherical neighbourhoods represented by the larger black points are the regions judged by the key area detection model to contain key points;
S6, detect the key points inside the point cloud blocks that contain a key area;
Within the regions detected in S5 as containing key points, every point is traversed, or a suitable number of points is taken at random, and each is judged as to whether it is a key point.
S61, take points inside the point cloud blocks that contain a key area, and with each of these points as the centre cut out a sphere of small radius (r = 0.1 m);
S62, feed the spheres obtained in S61 into the key point detection model and keep the points judged to be key points. In Fig. 6, the smaller grey points are the key points found by the key point detection model; the right side of the figure shows four enlarged views, in which the smaller grey points are key points (a sketch of this screening step follows).
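A sketch of this screening step, assuming `classifier` is a trained two-class model of the kind sketched under step S2 of the summary (class 1 meaning key point), `block` is the (K, 3) sub-cloud of one key area as a NumPy array, and the network expects a fixed number of points per sphere; r = 0.1 m follows the value quoted above, everything else is illustrative.

    import numpy as np
    import torch
    from scipy.spatial import cKDTree

    def detect_key_points(block, classifier, radius=0.1, num_points=64):
        tree = cKDTree(block)
        key_points = []
        classifier.eval()
        with torch.no_grad():
            for p in block:                                   # candidate centres
                idx = tree.query_ball_point(p, radius)
                if len(idx) < 3:                              # skip nearly empty neighbourhoods
                    continue
                neigh = block[idx] - p                        # centre the small sphere
                # resample to the fixed number of points expected by the network
                sel = np.random.choice(len(neigh), num_points, replace=len(neigh) < num_points)
                x = torch.as_tensor(neigh[sel].T[None], dtype=torch.float32)   # (1, 3, num_points)
                if classifier(x).argmax(dim=1).item() == 1:
                    key_points.append(p)
        return np.array(key_points)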
S7, feed the neighbourhoods of key points located in different clouds into the key point matching model in pairs, and judge whether they match. Key points located in the two point clouds are paired up, the spherical regions centred on the key points are cut out as point cloud blocks and fed into the key point matching model, and the key point correspondences are recorded.
S8, following a RANSAC strategy, repeatedly take a sufficient number of corresponding key points and compute a transformation with SVD; after obtaining a large number of candidate transformations, keep the one with the best match. The registration of clouds P and Q is expressed as

q_i = R·p_i + T

where p_i and q_i denote points in clouds P and Q respectively, and R and T denote the rotation matrix and the translation vector.
As shown in Fig. 7, the registration result of two point clouds (black and grey) is shown from the front and from the side. The registration quality is judged by the LCP (Largest Common Points) value, expressed as

LCP(P, Q) = |{ q_i ∈ Q : min_{p_j ∈ P} ||R·p_j + T − q_i|| < ε }| / |Q|

where R, T, p_i and q_i have the same meaning as above and ε is set to 0.05 m. The formula expresses the largest common point set: after registering clouds P and Q, for each point q_i taken in Q, the transformed cloud P is searched for a point closer than ε to q_i; if such a point exists, it is a common point and the LCP value increases accordingly.
In Fig. 7 the LCP values are, from left to right, 80.3%, 75.5% and 72.1%; the left-hand result, with the highest LCP value, is taken as the final result.
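Putting the pieces together, the RANSAC selection with LCP scoring might be sketched as follows; it reuses `rigid_transform_svd` from the SVD sketch under step S8 of the summary, takes the matched key points and the full clouds as NumPy arrays, and the iteration count and sample size are illustrative assumptions.

    import numpy as np
    from scipy.spatial import cKDTree

    def lcp_score(P, Q, R, T, eps=0.05):
        """Fraction of the points of Q that have a point of the transformed P within eps."""
        tree = cKDTree(P @ R.T + T)                  # transformed cloud P
        dists, _ = tree.query(Q, k=1)
        return float(np.mean(dists < eps))

    def ransac_registration(key_P, key_Q, cloud_P, cloud_Q, n_iter=1000, sample_size=4, rng=None):
        """key_P[k] and key_Q[k] are the k-th pair of matched key points ((K, 3) arrays);
        cloud_P and cloud_Q are the full clouds used for LCP scoring."""
        rng = np.random.default_rng() if rng is None else rng
        best_R, best_T, best_score = None, None, -1.0
        for _ in range(n_iter):
            sel = rng.choice(len(key_P), size=sample_size, replace=False)
            # rigid_transform_svd is the SVD solution sketched under step S8 of the summary
            R, T = rigid_transform_svd(key_P[sel], key_Q[sel])
            score = lcp_score(cloud_P, cloud_Q, R, T)
            if score > best_score:
                best_R, best_T, best_score = R, T, score
        return best_R, best_T, best_score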
After the above steps, the registration of building three-dimensional point clouds based on deep learning models is achieved.
The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited to it. Any change or substitution that can easily be thought of by a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (7)

1. A building three-dimensional point cloud registration method based on deep learning, characterised in that it comprises the following steps:
S1, acquisition of training samples: obtaining the sample data used to train the models;
S2, construction of the deep learning models, so that the models are suited to processing point cloud data;
S3, adjustment of hyperparameters according to the feedback from model training and testing;
S4, preprocessing of the point cloud: partitioning the point cloud to be detected into blocks that serve as model inputs;
S5, for the input point cloud blocks, detecting whether they contain a key area, and keeping the blocks that contain a key area;
S6, detecting the key points inside the point cloud blocks that contain a key area;
S7, feeding pairs of key points located in different clouds into the key point matching model and judging whether they match;
S8, computing candidate transformations from sufficient numbers of corresponding key points and selecting, from the resulting transformation matrices, the one with the best registration result as the final result.
2. The building three-dimensional point cloud registration method based on deep learning according to claim 1, characterised in that step S1 specifically includes the following sub-steps:
S11, intercepting points located at positions such as door frame corners, window frame corners and wall corners as key points;
S12, taking the points contained in a sphere of radius r centred on a key point, then, with each of these points as the centre, cutting out the point cloud block inside a sphere of radius r as a positive sample for the key area detection model, so that every positive sample contains a key point; in regions of the cloud that contain no key point, cutting out spheres of the same size as negative samples for the key area detection model;
S13, cutting out spherical point cloud blocks centred on key points as positive samples for the key point detection model, and spherical point cloud blocks centred on non-key points as negative samples for the key point detection model, a negative sample either containing or not containing a key point;
S14, extracting positive samples for the key point matching model from the two point clouds P and Q to be registered: taking a point in cloud P, then searching cloud Q for its nearest point as the corresponding point, the point having no correspondence if the distance to the nearest point exceeds a set threshold; extracting negative samples from the two point clouds P and Q: taking a point in cloud P, then taking points in cloud Q farther from it than the set threshold as non-corresponding points;
S15, using the pairs of spherical point cloud blocks cut out around corresponding points as positive samples of the key point matching model, and the pairs of spherical point cloud blocks cut out around non-corresponding points as negative samples of the key point matching model.
3. The building three-dimensional point cloud registration method based on deep learning according to claim 1, characterised in that step S2 specifically includes the following sub-steps:
S21, building a point cloud classification model based on the PointNet network structure, so that the model acts directly, in the form of convolutions, on the coordinate data of the three-dimensional points, and the extracted features are finally mapped through fully connected layers;
S22, building a Siamese (twin) network model based on the PointNet network structure, so that the branches of the model act on different clouds respectively, and the fully connected layers mapping the outputs judge whether two point cloud blocks correspond.
4. The building three-dimensional point cloud registration method based on deep learning according to claim 1, characterised in that in step S3 the hyperparameters of the input data are adjusted according to the specific behaviour of model training, the hyperparameters including the learning step, the learning rate and the batch size.
5. The building three-dimensional point cloud registration method based on deep learning according to claim 2, characterised in that step S4 specifically includes the following sub-steps:
S41, performing farthest point sampling on the cloud to be processed;
S42, with each sampled point as the centre, cutting out the point cloud block inside a sphere of a certain radius as the input of the key area detection model.
6. The building three-dimensional point cloud registration method based on deep learning according to claim 2, characterised in that step S6 specifically includes the following sub-steps:
S61, taking points inside the point cloud blocks that contain a key area, and with each of these points as the centre cutting out a spherical region of small radius;
S62, feeding the spherical regions obtained in step S61 into the key point discrimination model and keeping the points judged to be key points.
7. The building three-dimensional point cloud registration method based on deep learning according to claim 1, characterised in that in step S8 a RANSAC strategy is adopted: each time, a sufficient number of corresponding key points is taken and SVD is used to compute a transformation; the RANSAC strategy yields a large number of candidate transformations, from which the one with the best match is selected; the registration quality is judged by the LCP value, expressed as

LCP(P, Q) = |{ q_i ∈ Q : min_{p_j ∈ P} ||R·p_j + T − q_i|| < ε }| / |Q|

where p_i denotes a point in cloud P, q_i a point in cloud Q, R and T denote the rotation matrix and the translation vector respectively, and ε is set to 0.05 m; the formula expresses the largest common point set: after registering clouds P and Q, for each point q_i taken in Q, the transformed cloud P is searched for a point closer than ε to q_i; if such a point exists, it is a common point and the LCP value increases accordingly; the transformation with the highest LCP value is taken as the final result.
CN201810584888.6A 2018-06-08 2018-06-08 A building three-dimensional point cloud registration method based on deep learning Withdrawn CN108876831A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810584888.6A CN108876831A (en) 2018-06-08 2018-06-08 A building three-dimensional point cloud registration method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810584888.6A CN108876831A (en) 2018-06-08 2018-06-08 A building three-dimensional point cloud registration method based on deep learning

Publications (1)

Publication Number Publication Date
CN108876831A true CN108876831A (en) 2018-11-23

Family

ID=64337567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810584888.6A Withdrawn CN108876831A (en) 2018-06-08 2018-06-08 A kind of building three-dimensional point cloud method for registering based on deep learning

Country Status (1)

Country Link
CN (1) CN108876831A (en)


Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7060625B2 (en) 2019-01-30 2022-04-26 バイドゥドットコム タイムズ テクノロジー (ベイジン) カンパニー リミテッド LIDAR positioning to infer solutions using 3DCNN network in self-driving cars
JP2021515724A (en) * 2019-01-30 2021-06-24 バイドゥ ドットコム タイムス テクノロジー (ベイジン) カンパニー リミテッド LIDAR positioning to infer solutions using 3DCNN network in self-driving cars
US11531110B2 (en) 2019-01-30 2022-12-20 Baidu Usa Llc LiDAR localization using 3D CNN network for solution inference in autonomous driving vehicles
CN110210431A (en) * 2019-06-06 2019-09-06 上海黑塞智能科技有限公司 A kind of point cloud classifications method based on cloud semantic tagger and optimization
CN111027140A (en) * 2019-12-11 2020-04-17 南京航空航天大学 Airplane standard part model rapid reconstruction method based on multi-view point cloud data
CN111429563A (en) * 2020-03-10 2020-07-17 山东大学 Pipeline three-dimensional reconstruction method, system, medium and equipment based on deep learning
CN111429563B (en) * 2020-03-10 2021-08-13 山东大学 Pipeline three-dimensional reconstruction method, system, medium and equipment based on deep learning
CN111612891A (en) * 2020-05-22 2020-09-01 北京京东乾石科技有限公司 Model generation method, point cloud data processing device, point cloud data processing equipment and medium
CN111612891B (en) * 2020-05-22 2023-08-08 北京京东乾石科技有限公司 Model generation method, point cloud data processing method, device, equipment and medium
CN111882593A (en) * 2020-07-23 2020-11-03 首都师范大学 Point cloud registration model and method combining attention mechanism and three-dimensional graph convolution network
CN111882593B (en) * 2020-07-23 2022-06-17 首都师范大学 Point cloud registration model and method combining attention mechanism and three-dimensional graph convolution network
CN112150523B (en) * 2020-09-24 2022-04-12 中北大学 Three-dimensional point cloud registration method with low overlapping rate
CN112150523A (en) * 2020-09-24 2020-12-29 中北大学 Three-dimensional point cloud registration method with low overlapping rate
CN113139996A (en) * 2021-05-06 2021-07-20 南京大学 Point cloud registration method and system based on three-dimensional point cloud geometric feature learning
CN113139996B (en) * 2021-05-06 2024-02-06 南京大学 Point cloud registration method and system based on three-dimensional point cloud geometric feature learning
CN113807366A (en) * 2021-09-16 2021-12-17 电子科技大学 Point cloud key point extraction method based on deep learning
CN113807366B (en) * 2021-09-16 2023-08-08 电子科技大学 Point cloud key point extraction method based on deep learning

Similar Documents

Publication Publication Date Title
CN108876831A (en) A building three-dimensional point cloud registration method based on deep learning
CN108765475A (en) A kind of building three-dimensional point cloud method for registering based on deep learning
CN110378196B (en) Road visual detection method combining laser point cloud data
CN106127204B (en) A kind of multi-direction meter reading Region detection algorithms of full convolutional neural networks
CN103268729B (en) Based on mobile robot's tandem type map creating method of composite character
CN108364272A (en) A kind of high-performance Infrared-Visible fusion detection method
CN101770581B (en) Semi-automatic detecting method for road centerline in high-resolution city remote sensing image
WO2017020528A1 (en) Lane line recognition modeling method, apparatus, storage medium, and device, recognition method and apparatus, storage medium, and device
CN108288088A (en) A kind of scene text detection method based on end-to-end full convolutional neural networks
CN106780592A (en) Kinect depth reconstruction algorithms based on camera motion and image light and shade
CN105005769A (en) Deep information based sign language recognition method
CN105160686B (en) A kind of low latitude various visual angles Remote Sensing Images Matching Method based on improvement SIFT operators
CN103489193B (en) OO high-resolution remote sensing image change detecting method based on convergence strategy
Xue et al. Dual network structure with interweaved global-local feature hierarchy for transformer-based object detection in remote sensing image
CN106447662A (en) Combined distance based FCM image segmentation algorithm
CN106022342A (en) Image feature extraction method based on KAZE algorithm
Cheng et al. Generating high-resolution climate prediction through generative adversarial network
CN109829459A (en) Based on the vision positioning method for improving RANSAC
CN107330436B (en) Scale criterion-based panoramic image SIFT optimization method
CN115082778B (en) Multi-branch learning-based homestead identification method and system
CN106558065A (en) The real-time vision tracking to target is realized based on color of image and texture analysiss
CN116128024A (en) Multi-view contrast self-supervision attribute network abnormal point detection method
CN104732190A (en) Synthetic aperture sonar target detection method based on orthogonal texture correlation analysis
CN109583461A (en) A kind of template matching method based on edge feature
Wu et al. Image inpainting algorithm based on adaptive template direction

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20181123

WW01 Invention patent application withdrawn after publication