CN111462107A - End-to-end high-precision industrial part shape modeling method - Google Patents


Info

Publication number
CN111462107A
CN111462107A (application CN202010280473.7A)
Authority
CN
China
Prior art keywords
point
edge
points
map
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010280473.7A
Other languages
Chinese (zh)
Other versions
CN111462107B (en
Inventor
王磊
吴伟龙
周建品
李争
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shiyan Intelligent Technology Guangzhou Co ltd
Original Assignee
Shiyan Intelligent Technology Guangzhou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shiyan Intelligent Technology Guangzhou Co ltd filed Critical Shiyan Intelligent Technology Guangzhou Co ltd
Priority to CN202010280473.7A priority Critical patent/CN111462107B/en
Publication of CN111462107A publication Critical patent/CN111462107A/en
Application granted granted Critical
Publication of CN111462107B publication Critical patent/CN111462107B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an end-to-end high-precision industrial part shape modeling method, which comprises the following steps: S1: extracting edge points to obtain an edge map; S2: constructing a topological relation between points according to the edge map; S3: extracting feature points of the point map belonging to each target; S4: performing encoding-decoding point location optimization on the feature points to realize model parameter optimization. The method is highly adaptable and accurate: it stably extracts simple, precise shape lines under data noise, occlusion, shadow and similar conditions, and can be applied directly to geometric modeling of many different types of parts. The model is also simple to debug; compared with traditional heuristic shape modeling techniques, the method greatly shortens algorithm debugging time.

Description

End-to-end high-precision industrial part shape modeling method
Technical Field
The invention relates to the technical field of industrial part modeling, in particular to an end-to-end high-precision industrial part shape modeling method.
Background
Shape modeling of industrial parts is one of the core technologies of robot welding and of industrial part detection and measurement. Existing methods generally adopt various line-feature or point-feature extraction operators, such as the Hough line operator and point operators based on SIFT, Harris, or local gradients. Heuristic rules are then designed according to the shape characteristics of the target to connect adjacent broken lines, remove erroneously detected line segments and similar errors, and so realize the shape modeling. The point and line features can also be used directly to guide robot welding and to match templates of a CAD model. These operators often cannot adapt to different data environments, and are prone to false detections, missed detections, broken lines, and instability under data noise, shadow, occlusion and the like. The extracted lines also often contain a large number of redundant points, and the additional post-processing reduces precision. Heuristic rules cannot adapt to complex and variable data or to many different types of parts. Meanwhile, rule-based methods require a large amount of debugging time, which greatly lengthens the deployment period of an industrial automation solution.
Disclosure of Invention
The invention provides an end-to-end high-precision industrial part shape modeling method to overcome the defect that prior-art shape modeling of industrial parts cannot adapt to different data environments.
The method comprises the following steps:
s1: extracting edge points to obtain an edge map;
s2: constructing a topological relation between points according to the edge map;
s3: extracting feature points of the point map belonging to each target;
s4: performing encoding-decoding point location optimization on the feature points to realize model parameter optimization.
Preferably, S1 includes the steps of:
s1.1: generating a depth map using the input point cloud;
s1.2: marking the region of each part and the internal edge structure thereof on the depth map as a model training true value;
s1.3: performing target-level detection and semantic segmentation on the depth map using Mask R-CNN, and simultaneously predicting the distance from each pixel to its nearest edge point, called the distance map for short;
s1.4: obtaining the region of each target detected in S1.3; in the distance map of that region, setting pixels whose value is less than 2 to 1 and all other pixels to 0, yielding an initial edge map;
s1.5: using morphological dilation and erosion operations to close unclosed regions in the edge map, remove small islands that may appear in the initial edge map, and fill holes that may exist in the extracted edge region;
s1.6: extracting the skeleton of the edge map of each target obtained in step S1.4 using the OpenCV skeleton algorithm, to obtain an edge map with a width of 1.
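The S1.4–S1.5 steps above can be sketched in pure Python on a toy distance map. This is an illustrative sketch only: a real pipeline would operate on full-resolution images with OpenCV (`cv2.dilate`/`cv2.erode`), and the 5×5 grid, the fixed threshold of 2, and the single-iteration closing are assumptions for demonstration.

```python
def binarize_distance_map(dist, thresh=2.0):
    """S1.4: pixels closer than `thresh` to an edge become 1, others 0."""
    return [[1 if d < thresh else 0 for d in row] for row in dist]

def dilate(img):
    """One 3x3 binary morphological dilation step."""
    h, w = len(img), len(img[0])
    return [[max(img[j][i]
                 for j in range(max(0, y - 1), min(h, y + 2))
                 for i in range(max(0, x - 1), min(w, x + 2)))
             for x in range(w)] for y in range(h)]

def erode(img):
    """One 3x3 binary morphological erosion step."""
    h, w = len(img), len(img[0])
    return [[min(img[j][i]
                 for j in range(max(0, y - 1), min(h, y + 2))
                 for i in range(max(0, x - 1), min(w, x + 2)))
             for x in range(w)] for y in range(h)]

def close_gaps(img):
    """S1.5: morphological closing (dilate then erode) seals small breaks."""
    return erode(dilate(img))

# A toy 5x5 distance map: a horizontal edge with a one-pixel break.
dist = [
    [9, 9, 9, 9, 9],
    [9, 9, 9, 9, 9],
    [0, 0, 9, 0, 0],   # the gap sits in the middle of the edge row
    [9, 9, 9, 9, 9],
    [9, 9, 9, 9, 9],
]
edge = binarize_distance_map(dist)   # broken edge: [1, 1, 0, 1, 1]
closed = close_gaps(edge)            # closing repairs the break
```

After closing, the broken edge row becomes continuous while the background rows stay empty, which is exactly the gap-sealing behavior S1.5 relies on before skeletonization.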
Preferably, S2 includes the steps of:
s2.1: traversing all edge points in the edge map obtained in S1, and connecting each edge point with the edge points in its 8-neighborhood to obtain a group of edge point graphs;
s2.2: traversing all edge point graphs; if the end directions of two edge point graphs are within a preset threshold and the end distance is also within the preset threshold, connecting the corresponding end points of the two graphs to merge them into one point graph;
s2.3: traversing all edge point graphs; since a point graph is a group of points connected by lines, its point number is the number of points in the graph; deleting point graphs whose point number is below a preset point number threshold.
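A minimal sketch of the S2.1/S2.3 graph construction, assuming the edge map has already been reduced to a set of pixel coordinates; the merging of nearby end points (S2.2) is omitted for brevity, and the threshold of 3 points is an illustrative assumption:

```python
from collections import defaultdict

def build_point_graphs(edge_pixels, min_points=3):
    """S2.1/S2.3 sketch: link each edge pixel to its 8-neighbours, group the
    links into connected point graphs, and drop graphs that are too small.
    `edge_pixels` is a set of (row, col) coordinates from the edge map."""
    adj = defaultdict(set)
    for (r, c) in edge_pixels:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) != (0, 0) and (r + dr, c + dc) in edge_pixels:
                    adj[(r, c)].add((r + dr, c + dc))
    # Each connected component of the adjacency is one edge point graph.
    graphs, seen = [], set()
    for p in edge_pixels:
        if p in seen:
            continue
        comp, stack = set(), [p]
        while stack:
            q = stack.pop()
            if q in comp:
                continue
            comp.add(q)
            stack.extend(adj[q] - comp)
        seen |= comp
        if len(comp) >= min_points:   # S2.3: delete graphs below the threshold
            graphs.append(comp)
    return graphs

# Two strokes: a 4-point diagonal line and an isolated 2-point fragment.
pixels = {(0, 0), (1, 1), (2, 2), (3, 3), (9, 9), (9, 10)}
graphs = build_point_graphs(pixels, min_points=3)
```

The diagonal survives as one point graph; the 2-point fragment falls below the threshold and is deleted, mirroring how S2.3 prunes noise-induced stubs.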
Preferably, the preset threshold in S2.2 is adjusted according to the size of the target, in the range of about 3 to 10 pixels.
Preferably, the point number threshold preset in S2.3 depends on the shape of the target: if the shape of the target is represented by an N-polygon, the threshold is around N, and typically slightly less than N because lines of the target may be occluded.
Preferably, S3 includes the steps of:
s3.1: performing normalization processing on the point map belonging to each target, and then generating the characteristics of the point on the point cloud by using PointNet;
s3.2: concatenating the feature of each point with its normalized three-dimensional position to obtain a point feature with position information;
s3.3: searching each branch point column in the point diagram;
s3.4: calculating the average characteristic of each point column; and connecting the characteristic of each point in each point column with the average characteristic of the point column to obtain the final point characteristic of each point.
Preferably, the normalization in S3.1 specifically comprises: subtracting, from the coordinates of each point, the average of the coordinates of all points of the point map, and dividing by half of the longest side length of the circumscribed bounding box of the part.
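Under the assumption that each point map is a list of 3-D coordinates, the normalization might look like the following sketch (the axis-aligned bounding box stands in for the circumscribed bounding box of the part):

```python
def normalize_point_map(points):
    """S3.1 normalization sketch: subtract the centroid of the point map, then
    divide by half the longest side of the axis-aligned bounding box."""
    n = len(points)
    centroid = [sum(p[i] for p in points) / n for i in range(3)]
    extents = [max(p[i] for p in points) - min(p[i] for p in points)
               for i in range(3)]
    scale = max(extents) / 2.0 or 1.0   # guard against a degenerate box
    return [[(p[i] - centroid[i]) / scale for i in range(3)] for p in points]

# A 4 x 2 rectangle in the z = 0 plane.
pts = [[0.0, 0.0, 0.0], [4.0, 0.0, 0.0], [4.0, 2.0, 0.0], [0.0, 2.0, 0.0]]
norm = normalize_point_map(pts)
```

The longest side (4) gives a scale of 2, so the rectangle ends up centered at the origin with its longest extent spanning [-1, 1].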
Preferably, each branch point column in S3.3 is a column of points made up of all adjacent points with a degree of 2.
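A possible reading of this definition, with the connectivity given as an adjacency mapping. The names and the unordered-set output are illustrative assumptions; the actual method would keep the points of each column in traversal order before feeding them to the GRU.

```python
def branch_point_columns(adj):
    """S3.3 sketch: a branch point column is a maximal group of adjacent
    points whose degree is 2; junctions (degree > 2) and endpoints
    (degree 1) split columns. `adj` maps each point to its neighbours."""
    deg2 = {p for p, nbrs in adj.items() if len(nbrs) == 2}
    columns, seen = [], set()
    for p in deg2:
        if p in seen:
            continue
        comp, stack = set(), [p]
        while stack:
            q = stack.pop()
            if q in comp:
                continue
            comp.add(q)
            stack.extend(n for n in adj[q] if n in deg2 and n not in comp)
        seen |= comp
        columns.append(comp)
    return columns

# A path a-b-c-d-e: b, c, d have degree 2 and form one point column.
adj = {
    "a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"},
    "d": {"c", "e"}, "e": {"d"},
}
cols = branch_point_columns(adj)
```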
Preferably, S4 includes:
s4.1: for each point column, using a two-layer GRU network to encode the point features obtained in S3.4 one by one according to the sequence of the points in the point column to obtain a hidden feature vector;
a point location and the previously obtained hidden feature vector are input into a decoder, likewise composed of a two-layer GRU network, to obtain the point location displacement; decoding ends when the decoder outputs a stop symbol;
the decoder is an RNN: at each step it takes a point location and the previously obtained hidden feature vector as input, and outputs the next point location, a stop probability, and an updated hidden feature vector. Since there is no initial point when the decoder is first run, a start-of-sequence symbol is used as the input point. The output point location and hidden feature are fed back into the decoder to obtain the second point location and a newly updated hidden feature vector. This repeats until a stop symbol is sampled using the stop probability output by the decoder.
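The autoregressive decode loop described above can be sketched as follows. `decoder_step` is a hypothetical stand-in for the trained two-layer GRU decoder, and the toy decoder below exists only to exercise the loop; the stop threshold of 0.5 and the step cap are assumptions.

```python
def decode_point_column(decoder_step, hidden, start_token,
                        max_steps=50, stop_thresh=0.5):
    """S4.1 decode loop (sketch): starting from a start-of-sequence token,
    feed each predicted point location and the hidden state back into the
    decoder until the predicted stop probability exceeds `stop_thresh`.
    `decoder_step(point, hidden) -> (next_point, stop_prob, new_hidden)`
    stands in for the two-layer GRU decoder described in the text."""
    points, point = [], start_token
    for _ in range(max_steps):
        point, stop_prob, hidden = decoder_step(point, hidden)
        if stop_prob > stop_thresh:
            break
        points.append(point)
    return points

# A toy decoder that walks along the x axis and stops after three points.
def toy_step(point, hidden):
    step = hidden + 1
    return (float(step), 0.0), (1.0 if step >= 4 else 0.0), step

decoded = decode_point_column(toy_step, hidden=0, start_token=(0.0, 0.0))
```

The loop structure (feed prediction back in, sample the stop symbol from the stop probability) is what makes the decoder autoregressive, independent of what network implements `decoder_step`.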
S4.2: During training, the ground-truth point locations first replace the point locations output by the decoder as the input at each step (teacher forcing), and the decoder predicts the next point location and stop flag for each ground-truth point location. After the penultimate point location is input into the decoder, the ground truth of the predicted point location is the last point location and the ground truth of the predicted stop probability is 1; otherwise, the ground truth of the stop probability is 0. After training converges, the decoder's own output points are used as its inputs and training continues until it converges again.
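The teacher-forcing schedule can be illustrated by listing the (input, target point, target stop probability) triples built from one ground-truth point column; the `start_token` placeholder is an assumption.

```python
def teacher_forced_steps(truth_points, start_token):
    """S4.2 sketch (teacher forcing): the decoder input at each step is the
    previous ground-truth point, not the decoder's own prediction; the stop
    target is 1 only when the input is the penultimate point and the target
    is the final point."""
    steps = []
    inputs = [start_token] + list(truth_points[:-1])
    for i, (inp, tgt) in enumerate(zip(inputs, truth_points)):
        stop = 1.0 if i == len(truth_points) - 1 else 0.0
        steps.append((inp, tgt, stop))
    return steps

truth = [(1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
steps = teacher_forced_steps(truth, start_token=("<sos>",))
```

After this teacher-forced phase converges, the same triples would be regenerated from the decoder's own outputs, as the text describes.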
S4.3: calculating the Euclidean distance between the decoded point position and the true value point position as a loss function; back-propagating using the loss function; model parameter optimization was performed using an ADAM optimizer.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the method can extract simple and accurate target part edges from the 3D point cloud, and can be used for industrial part attitude estimation based on template matching. The method has strong adaptability and accuracy, and can stably extract simple and accurate shape lines under the conditions of data noise, occlusion, shadow and the like. The method has great adaptability and can be directly applied to geometric modeling of various types of parts. The method effectively solves the problem that the shape lines cannot be stably extracted under the conditions of data noise, shielding, shadows and the like. Meanwhile, the model is simple to debug, and compared with the traditional heuristic shape modeling technology, the method greatly shortens the algorithm debugging time.
Drawings
FIG. 1 is a flow chart of a method for modeling the shape of an end-to-end high-precision industrial part according to embodiment 1.
Fig. 2 is a shape modeling network model based on encoding-decoding RNNs.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1:
The embodiment provides an end-to-end high-precision industrial part shape modeling method. As shown in fig. 1, the method mainly includes: S1: edge extraction; S2: construction of the topological relation between points; S3: point feature extraction; S4: point location optimization. These four modules are described separately below.
S1: and extracting initial edge points.
S1.1, generating a depth map by using the input point cloud.
S1.2: and marking the region of each part and the internal edge structure of the part on the depth map as a true model training value.
S1.3: Target-level detection and semantic segmentation are performed on the depth map using Mask R-CNN, while the distance from each pixel to its nearest edge point is predicted, giving the distance map.
S1.4: the area of each target detected in S1.3 is acquired. In the distance map of this region, a pixel value having a pixel value smaller than 2 is set to 1, and otherwise, is set to 0. The result is an initial edge map.
S1.5: the non-occluded regions in the edge map were closed using morphological dilation and erosion operations. And removing small islands which can appear in the initial edge graph, and filling holes which can exist in the extracted edge area.
S1.6: the skeleton of the edge map of each object obtained in S4 is extracted using the skeleton algorithm of Opencv, resulting in an edge map with a width of 1.
With Mask-RCNN and a certain amount of data, the extraction accuracy of the edge map can be high, but the problems of fracture and false detection still exist. And then, constructing a topological relation between the points and optimizing the positions of the points.
S2: and constructing a topological relation between the points.
S2.1: All edge points in the edge map extracted in S1 are traversed, and each edge point is connected with the edge points in its 8-neighborhood to obtain a group of edge point graphs.
S2.2: All point graphs are traversed; if the end directions of two point graphs are within a certain threshold and the end distance is also within a certain threshold, the corresponding end points of the two graphs are connected to merge them into one point graph.
S2.3: All point graphs are traversed, and point graphs whose point number is below a certain threshold are deleted.
S3: and (5) extracting point features.
S3.1: The point map belonging to each target is normalized: the average of all point coordinates of the point map is subtracted from the coordinates of each point, and the result is divided by half of the longest side length of the circumscribed bounding box of the part. PointNet is then used, centered on the three-dimensional position of each point contained in the point cloud, to generate the features of the current point.
S3.2: The feature of each point is concatenated with its normalized three-dimensional position to obtain a point feature with position information.
S3.3: Each branch point column in the point graph, i.e., a column of points composed of all adjacent points with degree 2, is searched.
S3.4: The average feature of each point column is calculated, and the feature of each point in a column is concatenated with the average feature of that column to obtain the final feature of each point.
S4: and point location optimization.
S4.1: and for each point column, coding the characteristics of the points one by using a two-layer GRU network according to the sequence of the points in the point column to obtain a hidden characteristic vector. And inputting the hidden feature vector and the point location predicted in the previous step into a decoder which is also composed of two layers of GRU networks to obtain the point location displacement. Ending when the decoder output terminates as shown in fig. 2.
S4.2: during training, the Euclidean distance between the decoded point location and the true value point location is calculated to be used as a loss function. This loss function is used for back propagation. Model parameter optimization was performed using an ADAM optimizer.
The terms describing positional relationships in the drawings are for illustrative purposes only and are not to be construed as limiting the patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (9)

1. An end-to-end high precision industrial part shape modeling method, characterized in that the method comprises the following steps:
s1: extracting edge points to obtain an edge map;
s2: constructing a topological relation between points according to the edge map;
s3: extracting feature points of the point map belonging to each target;
s4: performing encoding-decoding point location optimization on the feature points to realize model parameter optimization.
2. The end-to-end high precision industrial part shape modeling method of claim 1, characterized in that S1 comprises the steps of:
s1.1: generating a depth map using the input point cloud;
s1.2: marking the region of each part and the internal edge structure thereof on the depth map as a model training true value;
s1.3: performing target-level detection and semantic segmentation on the depth map using Mask R-CNN, and simultaneously predicting the distance from each pixel to its nearest edge point, called the distance map for short;
s1.4: obtaining the region of each target detected in S1.3; in the distance map of that region, setting pixels whose value is less than 2 to 1 and all other pixels to 0 to obtain an initial edge map;
s1.5: using morphological dilation and erosion operations to close unclosed regions in the edge map, remove small islands that may appear in the initial edge map, and fill holes that may exist in the extracted edge region;
s1.6: extracting the skeleton of the edge map of each target obtained in step S1.4 using the OpenCV skeleton algorithm to obtain an edge map with a width of 1.
3. The end-to-end high precision industrial part shape modeling method of claim 2, wherein S2 comprises the steps of:
s2.1: traversing all edge points in the edge map obtained in S1, and connecting each edge point with the edge points in its 8-neighborhood to obtain a group of edge point graphs;
s2.2: traversing all edge point graphs; if the end directions of two edge point graphs are within a preset threshold and the end distance is also within the preset threshold, connecting the corresponding end points of the two point graphs to merge them into one point graph;
s2.3: traversing all edge point graphs, and deleting point graphs whose point number is below a preset point number threshold.
4. The end-to-end high-precision industrial part shape modeling method according to claim 3, wherein the preset threshold in S2.2 is adjusted according to the size of the target, in the range of about 3 to 10 pixels.
5. An end-to-end high precision industrial part shape modeling method according to claim 4, characterized in that the preset point threshold in S2.3 depends on the shape of the target, and if the shape of the target is represented by an N-polygon, the threshold is near N.
6. The end-to-end high precision industrial part shape modeling method of claim 4 or 5, wherein S3 comprises the steps of:
s3.1: performing normalization processing on the point map belonging to each target, and then generating the characteristics of the point on the point cloud by using PointNet;
s3.2: concatenating the feature of each point with its normalized three-dimensional position to obtain a point feature with position information;
s3.3: searching each branch point column in the point diagram;
s3.4: calculating the average characteristic of each point column; and connecting the characteristic of each point in each point column with the average characteristic of the point column to obtain the final point characteristic of each point.
7. The end-to-end high-precision industrial part shape modeling method according to claim 6, wherein the normalization in S3.1 specifically comprises: subtracting, from the coordinates of each point, the average of the coordinates of all points of the point map, and dividing by half of the longest side length of the circumscribed bounding box of the part.
8. A method for end-to-end modeling of a high precision industrial part shape according to claim 7, wherein each branch point column in S3.3 is a column of points consisting of all adjacent points with degree 2.
9. The end-to-end high precision industrial part shape modeling method of claim 8, wherein S4 comprises:
s4.1: for each point column, using a two-layer GRU network to encode the point features obtained in S3.4 one by one according to the sequence of the points in the point column to obtain a hidden feature vector;
a point location and the previously obtained hidden feature vector are input into a decoder, likewise composed of a two-layer GRU network, to obtain the point location displacement; decoding ends when the decoder outputs a stop symbol;
s4.2: during training, first replacing the point location output by each step of the decoder with the ground-truth point location as the decoder input at each step, and predicting the next point location and stop flag for each ground-truth point location; after the penultimate point location is input into the decoder, the ground truth of the predicted point location is the last point location and the ground truth of the predicted stop probability is 1; otherwise, the ground truth of the stop probability is 0; after training converges, using the decoder's output points as the decoder input and training until convergence again;
s4.3: calculating the Euclidean distance between the decoded point locations and the ground-truth point locations as the loss function; back-propagating using the loss function; and optimizing the model parameters using the ADAM optimizer.
CN202010280473.7A 2020-04-10 2020-04-10 End-to-end high-precision industrial part shape modeling method Active CN111462107B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010280473.7A CN111462107B (en) 2020-04-10 2020-04-10 End-to-end high-precision industrial part shape modeling method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010280473.7A CN111462107B (en) 2020-04-10 2020-04-10 End-to-end high-precision industrial part shape modeling method

Publications (2)

Publication Number Publication Date
CN111462107A true CN111462107A (en) 2020-07-28
CN111462107B CN111462107B (en) 2020-10-30

Family

ID=71685268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010280473.7A Active CN111462107B (en) 2020-04-10 2020-04-10 End-to-end high-precision industrial part shape modeling method

Country Status (1)

Country Link
CN (1) CN111462107B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7373215B2 (en) * 2006-08-31 2008-05-13 Advanced Micro Devices, Inc. Transistor gate shape metrology using multiple data sources
CN104517316A (en) * 2014-12-31 2015-04-15 中科创达软件股份有限公司 Three-dimensional modeling method and terminal equipment
CN106952344A (en) * 2017-05-04 2017-07-14 西安新拓三维光测科技有限公司 A kind of Damaged model calculation method for being used to remanufacture reparation
CN109558647A (en) * 2018-11-07 2019-04-02 中国航空工业集团公司西安飞机设计研究所 A kind of similar part fast modeling method based on CATIA
CN109872397A (en) * 2019-02-18 2019-06-11 北京工业大学 A kind of three-dimensional rebuilding method of the airplane parts based on multi-view stereo vision
CN110349213A (en) * 2019-06-28 2019-10-18 Oppo广东移动通信有限公司 Method, apparatus, medium and electronic equipment are determined based on the pose of depth information
CN110766783A (en) * 2019-09-16 2020-02-07 江汉大学 Three-dimensional modeling method for part
CN110889893A (en) * 2019-10-25 2020-03-17 中国科学院计算技术研究所 Three-dimensional model representation method and system for expressing geometric details and complex topology


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HEHE FAN et al., "PointRNN: Point Recurrent Neural Network for Moving Point Cloud Processing", https://arxiv.org/pdf/1910.08287.pdf *
ZHANG Lanting et al., "Research on feature-based parametric modeling technology of mechanical parts", Journal of Inner Mongolia University of Technology *
XIE Jialong, "Research on CT-based measurement, three-dimensional modeling and rapid prototyping of special-shaped parts", China Masters' Theses Full-text Database, Information Science and Technology *

Also Published As

Publication number Publication date
CN111462107B (en) 2020-10-30

Similar Documents

Publication Publication Date Title
CN111429563B (en) Pipeline three-dimensional reconstruction method, system, medium and equipment based on deep learning
CN105528614B (en) A kind of recognition methods of the cartoon image space of a whole page and automatic recognition system
CN113065594B (en) Road network extraction method and device based on Beidou data and remote sensing image fusion
Canaz Sevgen et al. An improved RANSAC algorithm for extracting roof planes from airborne lidar data
CN103295242A (en) Multi-feature united sparse represented target tracking method
KR102305230B1 (en) Method and device for improving accuracy of boundary information from image
KR20210057907A (en) Method and Apparatus for Generating Intelligent Engineering Drawings Based on Deep Learning
CN110838105A (en) Business process model image identification and reconstruction method
CN111681300A (en) Method for obtaining target area composed of outline sketch lines
CN110598634A (en) Machine room sketch identification method and device based on graph example library
CN114491718B (en) Geological profile multi-segment line optimization method and system for finite element analysis
CN114283343B (en) Map updating method, training method and device based on remote sensing satellite image
CN117373070B (en) Method and device for labeling blood vessel segments, electronic equipment and storage medium
CN111462107B (en) End-to-end high-precision industrial part shape modeling method
CN113902715A (en) Method, apparatus and medium for extracting vessel centerline
Jia et al. Sample generation of semi‐automatic pavement crack labelling and robustness in detection of pavement diseases
Vasin et al. Geometric modeling of raster images of documents with weakly formalized description of objects
CN114443856A (en) Automatic fault knowledge graph creating method and device for fault tree picture
CN111488882A (en) High-precision image semantic segmentation method for industrial part measurement
Priestnall et al. Arrowhead recognition during automated data capture
CN115147801B (en) Lane line recognition method and device, electronic device and storage medium
CN117934337B (en) Method for mask repair of blocked chromosome based on unsupervised learning
CN115049995B (en) Lane line detection method and device, electronic equipment and storage medium
CN113192195B (en) Method for repairing damaged terrain coordinate data
CN117612195A (en) Graph model generation method and device based on main wiring graph recognition technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant