CN114821013A - Element detection method and device based on point cloud data and computer equipment - Google Patents


Info

Publication number: CN114821013A
Application number: CN202210764430.5A
Authority: CN (China)
Prior art keywords: sample, point cloud data, trained, detection model
Legal status: Granted; currently active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN114821013B (granted publication)
Inventors: 黄惠 (Hui Huang), 陈柱瀚 (Zhuhan Chen)
Original assignee: Shenzhen University
Current assignee: Shenzhen University (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by: Shenzhen University

Classifications

    • G06T19/20 — Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06N3/045 — Combinations of networks
    • G06N3/08 — Learning methods
    • G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06V10/761 — Proximity, similarity or dissimilarity measures
    • G06V10/764 — Recognition using classification, e.g. of video objects
    • G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/806 — Fusion, i.e. combining data from various sources, of extracted features
    • G06V10/82 — Recognition using neural networks
    • G06V20/64 — Scenes; scene-specific elements; three-dimensional objects


Abstract

The application relates to a primitive detection method and apparatus based on point cloud data, and a computer device. The method comprises the following steps: inputting acquired sample point cloud data into a primitive detection model to be trained, determining a plurality of sample primitives corresponding to the sample point cloud data, and fitting geometric parameters for each sample primitive; extracting, through the primitive detection model to be trained, the global structural relationships among the plurality of sample primitives from the geometric parameters of each sample primitive; constraining the primitive detection model to be trained according to the erroneous structural relationships among the global structural relationships, to obtain a pre-trained primitive detection model; and performing primitive detection on acquired point cloud data to be detected through the pre-trained primitive detection model. The method improves the accuracy of primitive detection.

Description

Primitive detection method and apparatus based on point cloud data, and computer device
Technical Field
The present application relates to the field of artificial intelligence, and in particular to a primitive detection method and apparatus based on point cloud data, as well as a computer device, a storage medium, and a computer program product.
Background
Point cloud data obtained by scanning with various three-dimensional sensors (such as Kinect or lidar) records spatial, color, intensity, and other information about objects and their surroundings, but it is not convenient for people to understand and analyze directly. Moreover, because point cloud data is sparse, noisy, and unordered, accurate high-level abstract semantics cannot be obtained from it directly. That is, there is an understanding gap between the low-level visual features expressed by point cloud data and the high-level abstract semantics that people work with. Detecting the geometric primitives that are ubiquitous in point cloud data builds a bridge across this gap. Conventional primitive detection methods fit point cloud data to geometric primitives by predicting, for each point in the point cloud data of a man-made object, the primitive it belongs to and its primitive attributes.
However, conventional methods detect each primitive in isolation; when the structure of the man-made object is complex, detection errors occur easily, so primitive detection accuracy is low.
Disclosure of Invention
In view of the above, it is necessary to provide a primitive detection method, apparatus, computer device, computer-readable storage medium, and computer program product based on point cloud data that can improve the accuracy of primitive detection.
In a first aspect, the application provides a primitive detection method based on point cloud data. The method comprises the following steps:
inputting acquired sample point cloud data into a primitive detection model to be trained, determining a plurality of sample primitives corresponding to the sample point cloud data, and fitting geometric parameters for each sample primitive;
extracting, through the primitive detection model to be trained, the global structural relationships among the plurality of sample primitives from the geometric parameters of each sample primitive;
constraining the primitive detection model to be trained according to the erroneous structural relationships among the global structural relationships, to obtain a pre-trained primitive detection model;
and performing primitive detection on acquired point cloud data to be detected through the pre-trained primitive detection model.
In one embodiment, inputting the acquired sample point cloud data into the primitive detection model to be trained, determining the plurality of sample primitives corresponding to the sample point cloud data, and fitting the geometric parameters for each sample primitive includes:
performing feature extraction on the sample point cloud data through the primitive detection model to be trained, to obtain high-dimensional features corresponding to the sample point cloud data;
determining, from the high-dimensional features, the plurality of sample primitives corresponding to the sample point cloud data through the primitive detection model to be trained, and predicting the sample primitive type of each sample primitive;
and performing parametric fitting on each sample primitive, according to its sample primitive type and the sample point cloud data, through the primitive detection model to be trained, to obtain the geometric parameters of each sample primitive.
In one embodiment, performing feature extraction on the sample point cloud data through the primitive detection model to be trained, to obtain the high-dimensional features corresponding to the sample point cloud data, includes:
extracting shallow features corresponding to the sample point cloud data through the primitive detection model to be trained;
extracting deep features corresponding to the sample point cloud data through the primitive detection model to be trained;
and combining the shallow features and the deep features to obtain the high-dimensional features corresponding to the sample point cloud data.
In one embodiment, determining, from the high-dimensional features, the plurality of sample primitives corresponding to the sample point cloud data through the primitive detection model to be trained, and predicting the sample primitive type of each sample primitive, includes:
classifying the sample point cloud data according to the high-dimensional features through the primitive detection model to be trained, to obtain an original primitive type for each point in the sample point cloud data;
segmenting the sample point cloud data into a plurality of sample primitives according to the high-dimensional features through the primitive detection model to be trained;
and, for each sample primitive, selecting through the primitive detection model to be trained the most frequent original primitive type among the points belonging to that sample primitive, and determining it as the sample primitive type of that sample primitive.
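As a minimal sketch of the majority vote described above (pure Python; the per-point label values are hypothetical stand-ins for the model's actual per-point classification output):

```python
from collections import Counter

def sample_primitive_type(point_types):
    """Assign a segmented sample primitive the most frequent original
    primitive type among its member points (majority vote)."""
    return Counter(point_types).most_common(1)[0][0]

# Hypothetical per-point predictions for one segmented sample primitive:
labels = ["plane", "plane", "cylinder", "plane", "sphere"]
majority = sample_primitive_type(labels)  # "plane"
```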
In one embodiment, segmenting the sample point cloud data into a plurality of sample primitives according to the high-dimensional features by the primitive detection model to be trained comprises:
predicting the spatial offset of each point in the sample point cloud data from the high-dimensional features through the primitive detection model to be trained;
computing the spatially offset position of each point from its predicted spatial offset;
and segmenting the sample point cloud data into a plurality of sample primitives according to the offset positions and the high-dimensional features of the points in the sample point cloud data.
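The steps above can be sketched as follows (a minimal pure-Python illustration; the patent does not name a clustering algorithm at this point, so the greedy threshold clustering used for the final segmentation step is an assumption, and the high-dimensional features are omitted):

```python
import math

def shift_points(points, offsets):
    """Step 2: move each point by its predicted spatial offset
    (e.g. toward the interior of its primitive)."""
    return [tuple(p + o for p, o in zip(pt, off))
            for pt, off in zip(points, offsets)]

def greedy_cluster(points, radius):
    """Step 3 (simplified): assign each shifted point to the first cluster
    whose seed lies within `radius`, else start a new cluster."""
    seeds = []   # one representative point per cluster
    labels = []
    for p in points:
        for ci, seed in enumerate(seeds):
            if math.dist(p, seed) < radius:
                labels.append(ci)
                break
        else:
            seeds.append(p)
            labels.append(len(seeds) - 1)
    return labels

pts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (5.0, 5.0, 5.0), (5.1, 5.0, 5.0)]
offs = [(0.0, 0.0, 0.0)] * 4            # zero offsets, for illustration
segment_labels = greedy_cluster(shift_points(pts, offs), 1.0)  # [0, 0, 1, 1]
```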
In one embodiment, performing primitive detection on the acquired point cloud data to be detected through the pre-trained primitive detection model includes:
acquiring the point cloud data to be detected;
and inputting the point cloud data to be detected into the pre-trained primitive detection model, determining a plurality of primitives to be detected corresponding to the point cloud data to be detected, and fitting the geometric parameters of each primitive to be detected, to obtain a primitive detection result.
In a second aspect, the application further provides a primitive detection apparatus based on point cloud data. The apparatus includes:
a geometric parameter prediction module, configured to input acquired sample point cloud data into a primitive detection model to be trained, determine a plurality of sample primitives corresponding to the sample point cloud data, and fit geometric parameters for each sample primitive;
a global structural relationship extraction module, configured to extract, through the primitive detection model to be trained, the global structural relationships among the plurality of sample primitives from the geometric parameters of each sample primitive;
a training module, configured to constrain the primitive detection model to be trained according to the erroneous structural relationships among the global structural relationships, to obtain a pre-trained primitive detection model;
and a primitive detection module, configured to perform primitive detection on acquired point cloud data to be detected through the pre-trained primitive detection model.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory and a processor, the memory storing a computer program; the processor, when executing the computer program, implements the following steps:
inputting acquired sample point cloud data into a primitive detection model to be trained, determining a plurality of sample primitives corresponding to the sample point cloud data, and fitting geometric parameters for each sample primitive;
extracting, through the primitive detection model to be trained, the global structural relationships among the plurality of sample primitives from the geometric parameters of each sample primitive;
constraining the primitive detection model to be trained according to the erroneous structural relationships among the global structural relationships, to obtain a pre-trained primitive detection model;
and performing primitive detection on acquired point cloud data to be detected through the pre-trained primitive detection model.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the following steps:
inputting acquired sample point cloud data into a primitive detection model to be trained, determining a plurality of sample primitives corresponding to the sample point cloud data, and fitting geometric parameters for each sample primitive;
extracting, through the primitive detection model to be trained, the global structural relationships among the plurality of sample primitives from the geometric parameters of each sample primitive;
constraining the primitive detection model to be trained according to the erroneous structural relationships among the global structural relationships, to obtain a pre-trained primitive detection model;
and performing primitive detection on acquired point cloud data to be detected through the pre-trained primitive detection model.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the following steps:
inputting acquired sample point cloud data into a primitive detection model to be trained, determining a plurality of sample primitives corresponding to the sample point cloud data, and fitting geometric parameters for each sample primitive;
extracting, through the primitive detection model to be trained, the global structural relationships among the plurality of sample primitives from the geometric parameters of each sample primitive;
constraining the primitive detection model to be trained according to the erroneous structural relationships among the global structural relationships, to obtain a pre-trained primitive detection model;
and performing primitive detection on acquired point cloud data to be detected through the pre-trained primitive detection model.
According to the above primitive detection method, apparatus, computer device, storage medium, and computer program product based on point cloud data, a plurality of sample primitives corresponding to the sample point cloud data are determined through a primitive detection model to be trained, and geometric parameters are fitted for each sample primitive; the global structural relationships among the plurality of sample primitives are then extracted from the geometric parameters of each sample primitive, so that the primitive detection model to be trained can be constrained according to the erroneous structural relationships among the extracted global structural relationships, yielding a pre-trained primitive detection model with which primitive detection is performed on the acquired point cloud data to be detected. Because the pre-trained primitive detection model is trained on the global structural relationships extracted among multiple sample primitives, it can judge more accurately the structural relationships that the geometric primitives contained in the point cloud data to be detected hold within the point cloud as a whole, which improves primitive detection accuracy.
Drawings
FIG. 1 is a diagram of the application environment of a primitive detection method based on point cloud data according to an embodiment;
FIG. 2 is a schematic flow chart of a primitive detection method based on point cloud data according to an embodiment;
FIG. 3 is a schematic flow chart of the step of segmenting the sample point cloud data into a plurality of sample primitives according to high-dimensional features by the primitive detection model to be trained, in one embodiment;
FIG. 4 is a schematic diagram of the spatial offset prediction process for a point P_0 in one embodiment;
FIG. 5 is a diagram illustrating the network structure of a primitive detection model to be trained in one embodiment;
FIG. 6 illustrates the optimization process of the feature extraction layer in one embodiment;
FIG. 7 is a schematic flow chart of a primitive detection method based on point cloud data according to another embodiment;
FIG. 8 is a diagram illustrating the results of the spatial offset prediction layer in RelationNet, according to an embodiment;
FIG. 9 is a block diagram of a primitive detection apparatus based on point cloud data according to an embodiment;
FIG. 10 is a diagram of the internal structure of a computer device in one embodiment;
FIG. 11 is a diagram of the internal structure of a computer device in another embodiment.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the application and are not intended to limit it.
The primitive detection method based on point cloud data provided by the embodiments of the application can be applied in the application environment shown in Fig. 1, in which a three-dimensional scanning device 102 communicates with a computer device 104 over a network. The three-dimensional scanning device 102 scans the surrounding environment, collects raw point cloud data of the man-made objects in it, and sends the collected raw point cloud data to the computer device 104. The computer device 104 extracts sample point cloud data from the raw point cloud data, trains a primitive detection model on the sample point cloud data, and performs primitive detection with the pre-trained primitive detection model. The three-dimensional scanning device 102 may be a laser scanning device, a depth-camera-based three-dimensional scanning system, or the like. The computer device 104 may be a terminal or a server. The terminal may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, Internet-of-Things device, or portable wearable device; the Internet-of-Things device may be a smart speaker, smart television, smart air conditioner, smart in-vehicle device, or the like, and the portable wearable device may be a smart watch, smart band, head-mounted device, or the like. The server may be implemented as a stand-alone server or as a server cluster consisting of multiple servers.
In one embodiment, as shown in Fig. 2, a primitive detection method based on point cloud data is provided, illustrated here as applied to the computer device in Fig. 1, and includes the following steps:
Step 202, inputting the acquired sample point cloud data into a primitive detection model to be trained, determining a plurality of sample primitives corresponding to the sample point cloud data, and fitting geometric parameters for each sample primitive.
Here, sample point cloud data refers to point cloud data used for training the primitive detection model, and sample primitives are the primitives contained in the sample point cloud data, used as an abstract representation of the man-made object.
Specifically, during training of the primitive detection model, the three-dimensional scanning device scans the man-made objects in the surrounding environment and records the scanned information as a point cloud to obtain raw point cloud data. The raw point cloud data is a large collection of points; each point may include three-dimensional coordinates and a surface normal vector, and may further include color information (RGB), laser reflection intensity, and the like. The three-dimensional scanning device sends the collected raw point cloud data to the computer device. The computer device extracts the three-dimensional coordinates and surface normal vector of each point in the raw point cloud data and concatenates them to obtain the sample point cloud data. For example, with a three-dimensional coordinate vector and a three-dimensional surface normal vector, each point of the resulting sample point cloud data is a six-dimensional vector.
The computer device stores the primitive detection model to be trained, which may be a deep learning model such as RelationNet. After obtaining the sample point cloud data, the computer device invokes the primitive detection model to be trained, inputs the sample point cloud data into it, and predicts a plurality of sample primitives corresponding to the sample point cloud data; the sample primitives may include planes, spheres, cylinders, cones, open spline patches, closed spline patches, and so on. Each sample primitive is then fitted parametrically to obtain its geometric parameters. For example, the geometric parameters of a plane may be its normal vector and its distance from the origin; those of a sphere, its center and radius; those of a cylinder, its center, radius, and axial direction vector; those of a cone, its apex, included angle, and axial direction vector; and those of an open or closed spline patch, a 20 x 20 grid of control points.
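The superposition described above amounts to concatenating each point's three-dimensional coordinates with its three-dimensional surface normal into a six-dimensional per-point vector; a minimal sketch (the function name is illustrative):

```python
def make_sample_points(coords, normals):
    """Concatenate each point's 3-D coordinates with its 3-D surface normal
    to form the 6-D per-point input of the sample point cloud data."""
    assert len(coords) == len(normals)
    return [tuple(c) + tuple(n) for c, n in zip(coords, normals)]

pts = make_sample_points([(1.0, 2.0, 3.0)], [(0.0, 0.0, 1.0)])
# pts[0] is the 6-D vector (1.0, 2.0, 3.0, 0.0, 0.0, 1.0)
```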
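The per-type geometric parameters listed above can be represented as simple records; a sketch (the field names are illustrative, not the patent's):

```python
from dataclasses import dataclass

@dataclass
class Plane:
    normal: tuple        # plane normal vector
    distance: float      # distance of the plane from the origin

@dataclass
class Sphere:
    center: tuple
    radius: float

@dataclass
class Cylinder:
    center: tuple
    radius: float
    axis: tuple          # axial direction vector

@dataclass
class Cone:
    apex: tuple
    angle: float         # included angle
    axis: tuple          # axial direction vector

@dataclass
class SplinePatch:
    control_points: list  # 20 x 20 grid of 3-D control points (open or closed)
```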
Step 204, extracting, through the primitive detection model to be trained, the global structural relationships among the plurality of sample primitives from the geometric parameters of each sample primitive.
The global structural relationships are the geometric relationships among the orientations, placements, sizes, and so on of the plurality of sample primitives. They are used to constrain the predicted geometric parameters of each sample primitive, so as to obtain a cleaner, tidier geometric primitive detection result. The global structural relationships may include parallelism, perpendicularity, axis alignment, and other geometric relationships existing among the planes, spheres, cylinders, cones, and other sample primitives that make up the man-made object.
Specifically, the plurality of sample primitives are combined pairwise through the primitive detection model to be trained to generate an original candidate set, which may include a plurality of primitive pairs. The primitive detection model to be trained then computes, from the geometric parameters of each primitive pair, the global structural relationship between the two sample primitives of that pair.
Optionally, because the global structural relationships of some sample primitive combinations are hard to define — for example, parallelism and perpendicularity cannot be determined between two spheres, and there is no axis-alignment relationship between a cylinder and a plane — such combinations are excluded from the corresponding global structural relationships. Specifically, the primitive detection model to be trained checks whether each primitive pair in the original candidate set admits one of the preset global structural relationships, and the pairs that do not are removed from the original candidate set to obtain a target candidate set. For example, a sphere paired with a plane fits none of the three global structural relationships mentioned above, so that pair is excluded from the original candidate set. The primitive detection model to be trained then extracts the global structural relationship of each primitive pair in the target candidate set from the geometric parameters of the sample primitives in that set.
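The pairwise combination and filtering described above can be sketched as follows (the table of primitive-type pairs that admit at least one of the three relations is an assumption based on the examples in the text, e.g. sphere–plane and sphere–sphere pairs are excluded):

```python
from itertools import combinations

# Type pairs for which at least one relation (parallel, perpendicular,
# axis alignment) is defined. This table is an assumption drawn from the
# examples in the text, not an exhaustive rule from the patent.
ALLOWED = {
    frozenset({"plane"}), frozenset({"cylinder"}), frozenset({"cone"}),
    frozenset({"plane", "cylinder"}), frozenset({"plane", "cone"}),
    frozenset({"cylinder", "cone"}),
    frozenset({"sphere", "cylinder"}), frozenset({"sphere", "cone"}),
}

def target_candidate_set(primitives):
    """primitives: list of (id, type). Pairwise-combine them (the original
    candidate set) and keep only the pairs whose types admit a relation."""
    pairs = combinations(primitives, 2)
    return [(a, b) for a, b in pairs
            if frozenset({a[1], b[1]}) in ALLOWED]

prims = [(0, "plane"), (1, "sphere"), (2, "cylinder")]
candidates = target_candidate_set(prims)
# the (plane, sphere) pair is excluded; the other two pairs remain
```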
As an illustration, the three global structural relationships existing among sample primitives — parallelism, perpendicularity, and axis alignment — are explained below.
(1) Parallelism and perpendicularity
Parallelism and perpendicularity are among the most widespread global structural relationships between the sample primitives of a man-made object. To extract the orientation relationships among sample primitives, the normal vector of a plane and the axial direction vectors of a cylinder and a cone can be used as the direction vectors of those sample primitives, from which the parallel and perpendicular relationships among them are judged.
Optionally, when sample primitives X_i and X_j are parallel, their unit direction vectors satisfy:

|α_i · α_j| = 1    (1)

When they are perpendicular, their direction vectors satisfy:

α_i · α_j = 0    (2)

where α_i denotes the direction vector of sample primitive X_i and α_j denotes the direction vector of sample primitive X_j.
Optionally, because of the sparsity of the sample point cloud data and the influence of noise, an angular error of 10° may be allowed when extracting the parallel and perpendicular relationships from the geometric parameters of the sample primitives. For example, two sample primitives may be considered perpendicular when the angle between their direction vectors lies in [85°, 95°], and parallel when it lies in [0°, 10°] or [170°, 180°].
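The angular tolerance test described above can be sketched as follows (pure Python; unit direction vectors are assumed):

```python
import math

def angle_deg(u, v):
    """Angle in degrees between two unit direction vectors."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return math.degrees(math.acos(dot))

def is_parallel(u, v):
    """Parallel if the angle lies in [0°, 10°] or [170°, 180°]."""
    t = angle_deg(u, v)
    return t <= 10.0 or t >= 170.0

def is_perpendicular(u, v):
    """Perpendicular if the angle lies in [85°, 95°]."""
    return 85.0 <= angle_deg(u, v) <= 95.0
```

With this tolerance, two direction vectors that differ by, say, 3° because of scan noise are still treated as parallel, and anti-parallel axes count as parallel as well.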
(2) Shaft alignment
Alignment is a feature widely present among man-made objects that causes the center points and axes from different sample elements to lie on the same extension line, and this relationship that exists between sample elements can be referred to as an axis alignment relationship. Axis alignment is common between three geometric primitives, sphere, cylinder and cone, and can include the following two cases: one is that the centre of sphere lies on the extension of the axis of the cylinder or cone, and the other is that the axes of the cylinder and cone lie on the same extension.
In the first case, the distance from the center of the sphere to the extension of the axis can be directly calculated. For the sphere center O (x) 0 ,y 0 ,z 0 ) And the axis of another sample cell
Figure 65411DEST_PATH_IMAGE005
The distance d from the center O to the axis can be expressed as:
Figure 59912DEST_PATH_IMAGE006
(3)
and when the distance between the sphere center calculated by the element detection model to be trained and the axis of the cylinder or the cone is smaller than a preset distance threshold, determining that the sphere and the cylinder or the cone have an axis alignment relation. For example, the preset distance threshold may be 0.05.
For the second case, in this embodiment, the primitive detection model to be trained does not directly calculate whether the two axes lie on the same extension line; instead, it selects the center of the cylinder or the vertex of the cone and calculates the distance from that point to the extension line of the other primitive's axis using the method of the first case, as in formula (3) above.
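Both axis-alignment cases reduce to a point-to-line distance. A minimal sketch, assuming the axis is given as a point plus a direction vector and using the standard cross-product formula, with illustrative names and the 0.05 threshold from the text as a default:

```python
import math

def point_to_axis_distance(o, p, v):
    """Distance from point o to the line through p with direction v:
    d = |v x (o - p)| / |v|."""
    w = [o[i] - p[i] for i in range(3)]
    cx = (v[1] * w[2] - v[2] * w[1],
          v[2] * w[0] - v[0] * w[2],
          v[0] * w[1] - v[1] * w[0])
    return math.sqrt(sum(c * c for c in cx)) / math.sqrt(sum(c * c for c in v))

def axes_aligned(o, p, v, threshold=0.05):
    """Axis alignment holds when the sphere center (or cylinder center /
    cone vertex) o is closer to the axis extension than the threshold."""
    return point_to_axis_distance(o, p, v) < threshold
```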
And step 206, according to the error structural relationship in the global structural relationship, constraining the primitive detection model to be trained to obtain a pre-trained primitive detection model.
The extracted global structural relationship is compared with a preset labeling relationship, and when the two are inconsistent, the global structural relationship is determined to be an error structural relationship. The global structural relationship loss is then calculated from the error structural relationship and a preset global structural relationship loss function. Specifically, the preset global structural relationship loss function may include a parallel structural relationship loss function, a vertical structural relationship loss function and an axis alignment relationship loss function.
When the error structural relationship includes parallel, vertical and axis alignment relations, the primitive pairs corresponding to the error structural relationship can be divided into three sets, $R_{pa}$, $R_{ot}$ and $R_{al}$, which respectively contain the primitive combinations with parallel, vertical and axis alignment relations in the error structural relationship. For example, $(X_i, X_j) \in R_{pa}$ indicates that a parallel relationship exists between sample primitives $X_i$ and $X_j$. Similarly, according to the global structural relationships that truly exist among the extracted sample primitives, the corresponding primitive pairs can be stored in three sets, $G_{pa}$, $G_{ot}$ and $G_{al}$, which respectively contain the primitive combinations with parallel, vertical and axis alignment relations in the global structural relationship.
For the parallel relation, if two sample primitives $X_i$ and $X_j$ are misclassified as parallel but in reality have no parallel relationship, a parallel structural relationship loss between the two primitives needs to be computed, and the parallel structural relationship loss function can be as follows:

$$L_{pa} = \sum_{(X_i, X_j) \in R_{pa} \setminus G_{pa}} \left(\vec{n}_i \cdot \vec{n}_j\right)^2 \qquad (4)$$

where $L_{pa}$ denotes the parallel structural relationship loss, $R_{pa}$ denotes the set of parallel-relation primitive combinations in the error structural relationship, $G_{pa}$ denotes the set of parallel-relation primitive combinations in the global structural relationship, $\vec{n}_i$ denotes the direction vector of sample primitive $X_i$, and $\vec{n}_j$ denotes the direction vector of sample primitive $X_j$.
Similarly, the vertical structural relationship loss function can be as follows:

$$L_{ot} = \sum_{(X_i, X_j) \in R_{ot} \setminus G_{ot}} \left(1 - \left(\vec{n}_i \cdot \vec{n}_j\right)^2\right) \qquad (5)$$

where $L_{ot}$ denotes the vertical structural relationship loss, $R_{ot}$ denotes the set of vertical-relation primitive combinations in the error structural relationship, $G_{ot}$ denotes the set of vertical-relation primitive combinations in the global structural relationship, $\vec{n}_i$ denotes the direction vector of sample primitive $X_i$, and $\vec{n}_j$ denotes the direction vector of sample primitive $X_j$.
For the axis alignment relationship, a point $M(x_2, y_2, z_2)$ on one axis may be selected, and its distance to the other axis, which passes through the point $P(x_1, y_1, z_1)$ with direction vector $\vec{v}$, is used as the loss:

$$L_{al} = \sum_{(X_i, X_j) \in R_{al} \setminus G_{al}} \left(\frac{\left|\vec{v} \times \overrightarrow{PM}\right|}{\left|\vec{v}\right|}\right)^2 \qquad (6)$$

where $L_{al}$ denotes the axis alignment relationship loss.
And summing the parallel structure relation loss, the vertical structure relation loss and the axis alignment relation loss to obtain the global structure relation loss. And training the primitive detection model to be trained according to the global structure relationship loss, and stopping model training when the global structure relationship loss is not reduced any more or is less than a loss threshold value to obtain the pre-trained primitive detection model.
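The summation and stopping rule described above can be sketched as follows; the loss threshold and patience values are illustrative assumptions, not taken from the patent:

```python
def global_structure_loss(l_parallel, l_vertical, l_axis_align):
    """Global structural relationship loss: the sum of the three parts."""
    return l_parallel + l_vertical + l_axis_align

def should_stop(loss_history, loss_threshold=1e-3, patience=5):
    """Stop training when the loss falls below the threshold, or when it
    has not improved over the last `patience` evaluations."""
    if loss_history and loss_history[-1] < loss_threshold:
        return True
    if len(loss_history) > patience:
        recent = loss_history[-patience:]
        best_before = min(loss_history[:-patience])
        return min(recent) >= best_before
    return False
```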
And 208, performing element detection on the acquired point cloud data to be detected through a pre-trained element detection model.
After training is finished, the computer equipment extracts the three-dimensional coordinates and the surface normal vectors of all points from the obtained original point cloud data, and superposes the extracted three-dimensional coordinates and the surface normal vectors of the points to obtain point cloud data to be detected. The method comprises the steps of inputting point cloud data to be detected into a pre-trained element detection model, determining a plurality of elements to be detected corresponding to the point cloud data to be detected through the pre-trained element detection model, fitting geometric parameters corresponding to the elements to be detected, and determining the determined elements to be detected and the geometric parameters corresponding to the elements to be detected as element detection results.
According to the element detection method based on the point cloud data, a plurality of sample elements corresponding to the sample point cloud data are determined through an element detection model to be trained, geometric parameters corresponding to all the sample elements are fitted, then the global structure relation among the sample elements is extracted according to the geometric parameters corresponding to all the sample elements, the element detection model to be trained is restrained according to the wrong structure relation in the extracted global structure relation, a pre-trained element detection model is obtained, and then element detection is carried out on the obtained point cloud data to be detected through the pre-trained element detection model. The element detection model trained in advance is obtained by training based on the global structure relationship among the extracted multiple sample elements, so that the structure relationship of the geometric elements contained in the point cloud data to be detected in the whole point cloud data to be detected can be more accurately judged, and the element detection accuracy can be improved.
In one embodiment, determining a plurality of sample elements corresponding to the sample point cloud data, and fitting the geometric parameters corresponding to each sample element includes: extracting the characteristics of the sample point cloud data through an element detection model to be trained to obtain high-dimensional characteristics corresponding to the sample point cloud data; determining a plurality of sample elements corresponding to the sample point cloud data according to the high-dimensional features through an element detection model to be trained, and predicting the type of the sample element corresponding to each sample element; and carrying out parametric fitting on the corresponding sample elements according to the sample element types of the sample elements and the sample point cloud data through the element detection model to be trained to obtain the geometric parameters corresponding to the sample elements.
The high-dimensional features refer to features including global structure information and local detail information.
The primitive detection model to be trained may include a feature extraction layer, a classification and segmentation layer, a parametric fitting layer and a global structure constraint layer. The feature extraction layer may be the edge convolution layer of DGCNN (Dynamic Graph CNN); specifically, it may be three stacked edge convolution layers. The high-dimensional features corresponding to the sample point cloud data are extracted through the feature extraction layer in the primitive detection model to be trained. For example, the feature extraction layer may extract the features of the sample point cloud data using the KNN (K-Nearest Neighbor) algorithm and the MAX symmetric function. Consider sample point cloud data with $n$ points, $P \in \mathbb{R}^{n \times F}$, where $\mathbb{R}$ denotes the real numbers and $F$ denotes the dimension, e.g., $F = 6$. The feature extraction layer searches for the K neighborhood points of each point using the KNN algorithm, and the set consisting of the neighborhood points of all points is denoted $N_e$. An undirected graph $G = (V, E)$ is constructed from the neighborhood points, where $V$ denotes the vertices and $E$ denotes the edges of the undirected graph. A vertex $V_i$ has $k$ edges, corresponding to the $k$ points nearest to the center point $p_i$, denoted $\{p_{j_{i1}}, \ldots, p_{j_{ik}}\}$. Thus, the high-dimensional feature extracted by the feature extraction layer at the center point $p_i$ can be expressed as:

$$h_{p_i} = \max_{p_j \in \mathcal{N}(p_i)} \mathrm{MLP}\left(p_i,\; p_i - p_j\right) \qquad (7)$$

where $h_{p_i}$ denotes the high-dimensional feature corresponding to $p_i$, $\mathcal{N}(p_i)$ denotes the set of K neighborhood points of $p_i$, $p_j$ denotes a neighborhood point of $p_i$, and $p_i - p_j$ denotes the offset from $p_i$ to each of its neighborhood points $\{p_{j_{i1}}, \ldots, p_{j_{ik}}\}$; the pair $\left(p_i,\; p_i - p_j\right)$ is taken as the input of the multilayer perceptron MLP, and $\max$ indicates that the outputs of the MLP are aggregated using the MAX symmetric function.
Aggregating the outputs of the MLP with the MAX symmetric function avoids differences in the output caused by different input orderings of the sample point cloud data. Furthermore, compared with using the feature of point $p_i$ alone, or the pair of $p_i$ and a neighborhood point $p_j$, as input to the multilayer perceptron, taking $\left(p_i,\; p_i - p_j\right)$ as input allows the extracted high-dimensional features to retain both global structural information and local information within a neighborhood. As training of the primitive detection model to be trained progresses, the feature extraction layer measures distance when searching for neighborhood points not only with the three-dimensional coordinates but also with the high-dimensional features learned during training, so that points that are not close in three-dimensional space but highly similar in feature space are drawn together.
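The KNN-plus-MAX aggregation scheme can be illustrated with a minimal sketch; the toy `mlp` callable stands in for the learned multilayer perceptron, and all names are illustrative, not the patent's implementation:

```python
import math

def knn(points, i, k):
    """Indices of the k nearest neighbors of points[i] (Euclidean distance)."""
    d = [(math.dist(points[i], q), j) for j, q in enumerate(points) if j != i]
    return [j for _, j in sorted(d)[:k]]

def edge_conv_feature(points, i, k, mlp):
    """EdgeConv-style feature for center point p_i:
    apply mlp(p_i, p_i - p_j) to each neighbor j, then aggregate the
    outputs per feature dimension with the MAX symmetric function."""
    p_i = points[i]
    outs = []
    for j in knn(points, i, k):
        diff = tuple(a - b for a, b in zip(p_i, points[j]))
        outs.append(mlp(p_i, diff))
    return tuple(max(o[d] for o in outs) for d in range(len(outs[0])))
```

Because `max` is symmetric, permuting the input points does not change the aggregated feature, which is the order-invariance property the text describes.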
In order to extract the global structure relationship, the sample point cloud data and the extracted high-dimensional features are used as input of a classification and segmentation layer, the sample point cloud data is classified and segmented through the classification and segmentation layer according to the high-dimensional features to obtain a classification result and a segmentation result, and therefore the classification and segmentation layer determines the sample element type corresponding to each sample element in the segmentation result according to the classification result.
The corresponding sample primitives are parametrically fitted through the parametric fitting layer in the primitive detection model to be trained, according to the sample primitive type of each sample primitive and the sample point cloud data, and the geometric parameters corresponding to each sample primitive are output. The parametric fitting layer may adopt the network structure of SPFN (Supervised Primitive Fitting Network) or SplineNet (a spline surface fitting network).
Depending on the type of the input sample primitive, the parametric fitting layer outputs different types of geometric parameters; the geometric parameters corresponding to each sample primitive type are shown in Table 1 below.

TABLE 1 Geometric parameters corresponding to sample primitives

Plane: unit normal vector, offset from the origin
Sphere: center, radius
Cylinder: axis direction, a point on the axis, radius
Cone: apex, axis direction, half-angle
Open/closed spline surface: control point grid
Further, when the sample primitive type is an open or closed spline surface, the spline surface is predicted through the SplineNet network structure by the parametric fitting layer. SplineNet also constructs its feature extraction layer based on the edge convolution layer proposed in DGCNN; that is, the parametric fitting layer includes a feature extraction layer built from edge convolution layers, whose input is the point set corresponding to the spline surface and whose output is a 20 × 20 control point grid representing the spline surface.
In this embodiment, the high-dimensional features of the sample point cloud data are extracted through the element detection model to be trained, and since the high-dimensional features include both global structure information and local detail content, the sample element types corresponding to the sample elements can be predicted for subsequently determining a plurality of sample elements corresponding to the sample point cloud data, and richer context information is provided. And then, carrying out parametric fitting on the corresponding sample elements according to the sample element types of the sample elements and the sample point cloud data through the element detection model to be trained, so as to obtain more accurate geometric parameters corresponding to the sample elements.
In an optional manner of this embodiment, performing feature extraction on the sample point cloud data through the primitive detection model to be trained to obtain a high-dimensional feature corresponding to the sample point cloud data includes: extracting shallow layer characteristics corresponding to sample point cloud data through a primitive detection model to be trained; extracting deep features corresponding to sample point cloud data through a primitive detection model to be trained; and combining the shallow feature and the deep feature to obtain the high-dimensional feature corresponding to the sample point cloud data.
In the stage of extracting the characteristics of the sample point cloud data, shallow layer characteristics are extracted based on neighborhood points in a three-dimensional space through a characteristic extraction layer in a primitive detection model to be trained, more local characteristics can be captured, and more detailed information can be reserved. And deep features can be extracted based on neighborhood points in a high-dimensional space, so that the receptive field is enlarged, and more abstract global feature information is provided. The neighborhood points in the three-dimensional space and the neighborhood points in the high-dimensional space can be found by adopting a KNN algorithm. In order to alleviate the problem that the deep features lose the detailed information of the sample point cloud data, the feature extraction layer can adopt a feature extraction method of jump layer connection to combine the shallow features and the deep features to obtain high-dimensional features corresponding to the sample point cloud data, so that the finally extracted high-dimensional features include the global structure information and the local detailed content. For example, the high-dimensional feature may be a 256-dimensional high-dimensional feature. Through the characteristic extraction layer, the sample point cloud data is mapped to a high-dimensional space from an original three-dimensional space, and the carried high-dimensional characteristics can provide richer context information for subsequent element classification and segmentation.
In the embodiment, the superficial layer features and the deep layer features corresponding to the sample point cloud data are extracted through the element detection model to be trained, so that the superficial layer features and the deep layer features are combined to obtain the high-dimensional features corresponding to the sample point cloud data, the high-dimensional features not only comprise global structure information but also contain local detail content, and richer context information can be provided for subsequent element classification and segmentation.
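The skip-layer combination described above amounts to concatenating each point's shallow (local-detail) and deep (global) feature vectors; a minimal sketch with illustrative names:

```python
def combine_features(shallow, deep):
    """Skip-layer feature combination: concatenate, per point, the shallow
    feature vector (local detail) with the deep feature vector (global
    context) to form the final high-dimensional feature."""
    assert len(shallow) == len(deep), "one feature vector per point"
    return [s + d for s, d in zip(shallow, deep)]  # list concat per point
```

For instance, a 64-dimensional shallow feature and a 192-dimensional deep feature would yield the 256-dimensional high-dimensional feature mentioned in the text.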
In an optional manner of this embodiment, determining, by the to-be-trained element detection model, a plurality of sample elements corresponding to the sample point cloud data according to the high-dimensional feature, and predicting a sample element type corresponding to each sample element includes: classifying the sample point cloud data according to high-dimensional features through an element detection model to be trained to obtain the original element types of each point in the sample point cloud data; dividing the sample point cloud data into a plurality of sample elements according to the high-dimensional features through an element detection model to be trained; and selecting the original primitive type with the most quantity from the original primitive types of the points corresponding to the sample primitives through the primitive detection model to be trained to determine the original primitive type as the sample primitive type of the corresponding sample primitive.
The element detection model to be trained comprises a classification and segmentation layer, and the element types are predicted point by the classification and segmentation layer according to high-dimensional characteristics by using a multilayer perceptron to obtain the original element types of each point in the sample point cloud data. For example, the multi-layered perceptron may be two fully-connected layers. Meanwhile, complete sample point cloud data are clustered according to high-dimensional features through a classification and segmentation layer by utilizing a differentiable mean shift algorithm, and high-dimensional feature vectors of the point cloud are converted into a small number of element example labels, so that element segmentation is realized. The differentiable mean shift algorithm can be embedded into a primitive detection model to be trained to carry out end-to-end training.
In the process of mean shift, if the bandwidth is narrower, the result of mean shift will generate more clustering results, that is, the sample point cloud data is divided into more sample elements. However, common artificial objects do not contain too many primitives, so that when the number of the divided primitives exceeds a certain threshold, the algorithm will increase the bandwidth of the kernel function and reduce the number of the divided primitives.
It should be noted that, since the primitive classification and division are performed by two different neural network branches, the primitive types corresponding to different points have no direct relation with the primitive division result. Even the points forming the same sample primitive may have different sample primitive types, so that the sample primitive type corresponding to the final sample primitive is voted and determined by all the points forming the sample primitive, that is, the most numerous original primitive types in the corresponding original primitive types forming each sample primitive are selected as the sample primitive type corresponding to the sample primitive.
In the embodiment, the sample point cloud data is classified and segmented according to the high-dimensional features through the element detection model to be trained, and then the original element type with the largest number is selected from the original element types of the points corresponding to all the sample elements to be determined as the sample element type of the corresponding sample element, so that the accurate sample element type can be obtained.
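The voting step can be sketched as follows, assuming per-point type labels and a list of point indices for one segment; the names are illustrative:

```python
from collections import Counter

def vote_primitive_type(point_types, segment_indices):
    """Sample primitive type = the most frequent per-point original
    primitive type among the points that make up the segment."""
    votes = Counter(point_types[i] for i in segment_indices)
    return votes.most_common(1)[0][0]
```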
In one embodiment, as shown in fig. 3, segmenting the sample point cloud data into a plurality of sample primitives according to high-dimensional features by the primitive detection model to be trained comprises:
step 302, predicting the space offset of each point in the sample point cloud data according to the high-dimensional characteristics through the element detection model to be trained.
And step 304, calculating the space offset result of the corresponding point according to the space offset of each point.
And step 306, dividing the sample point cloud data into a plurality of sample elements according to the spatial offset result and the high-dimensional features of each point in the sample point cloud data.
The spatial offset is an offset of each point toward the center of the object. The spatial offset result refers to the position coordinates of each point after spatial offset.
The classification and segmentation layer of the primitive detection model to be trained can also comprise a spatial offset layer. The point belonging to the same sample element is moved to the center of the element in the element segmentation process through the space offset layer.
Specifically, the high-dimensional features are taken as the input of the spatial offset layer, and the spatial offset of each point in the sample point cloud data is predicted by the spatial offset layer according to the high-dimensional features. After the neighborhood points of each point in the sample point cloud data are determined, the spatial offsets of the neighborhood points of each point can be obtained. For each point in the sample point cloud data, the final spatial offset result of the point is calculated from the point's position coordinates before the spatial offset and the spatial offsets of its neighborhood points. Specifically, for sample point cloud data $P$, the final spatial offset result of a point $P_i$ is adjusted by adding to $P_i$ the contributions of the spatial offsets of its K neighborhood points. This adjustment process may be called neighborhood offset aggregation, and its calculation formula can be as follows:

$$\hat{P}_i = P_i + \sum_{P_j \in \mathcal{N}(P_i)} w_{ij}\, \vec{v}_{ij} \qquad (8)$$

where $\hat{P}_i$ denotes the position of point $P_i$ after the spatial offset, once the influence of its neighborhood points $P_j$ has been received; $P_i$ denotes the position coordinates of the point before the spatial offset; $\vec{v}_{ij}$ denotes the direction vector pointing from $P_i$ toward the offset position of the neighborhood point $P_j$; and $w_{ij}$ is a weight derived from the difference of the distances between the points, which measures the degree of influence of the neighborhood point on the center point $P_i$.
Illustratively, FIG. 4 is a schematic diagram of the spatial offset prediction process of a point $P_0$ in one embodiment. The influence of the neighborhood points $P_1$, $P_2$ and $P_3$ on the spatial offset of point $P_0$ includes both the influence in direction and the influence in distance, and $\hat{P}_0$ denotes the position of point $P_0$ after the spatial offset.
Because the subsequent primitive segmentation task depends on the extracted high-dimensional features, the spatial offset layer is used to calculate the spatial offset result of each point, moving every point toward the center of its own sample primitive. Adjacent sample primitives of the same type in the sample point cloud data are thereby separated according to the high-dimensional features and the spatial offset results of the points, so that the high-dimensional features actively encode each part, i.e. the spatial offset result of each sample primitive, and help the subsequent classification and segmentation tasks.
In this embodiment, adding the spatial offset layer not only provides an additional explicit constraint, but also allows the primitive detection model to be trained to make full use of both the spatially offset three-dimensional information and the high-dimensional features during primitive segmentation, achieving information complementarity and greatly improving the accuracy of primitive segmentation.
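The offset-then-segment idea can be illustrated with a minimal sketch; the greedy radius-based grouping below is a simple stand-in for the differentiable mean-shift clustering actually described, and all names are illustrative assumptions:

```python
import math

def apply_offsets(points, offsets):
    """Move each point by its predicted spatial offset (toward the
    center of the sample primitive it belongs to)."""
    return [tuple(p[d] + o[d] for d in range(3))
            for p, o in zip(points, offsets)]

def group_by_radius(points, radius):
    """Toy segmentation: offset points that land within `radius` of an
    existing seed share a segment label; otherwise start a new segment."""
    labels, seeds = [-1] * len(points), []
    for i, p in enumerate(points):
        for lab, s in enumerate(seeds):
            if math.dist(p, s) <= radius:
                labels[i] = lab
                break
        else:
            labels[i] = len(seeds)
            seeds.append(p)
    return labels
```

Points of the same primitive, pulled to a common center, collapse into one cluster, while adjacent primitives of the same type stay apart.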
In one embodiment, during the training of the primitive detection model to be trained, the model performs high-dimensional feature extraction on the sample point cloud data, spatial offset prediction in the primitive segmentation process, primitive classification, and global structural relationship extraction, and after these processing steps the model is trained by calculating the loss of each processing step. FIG. 5 is a schematic diagram of the network structure of the primitive detection model to be trained. The point cloud coordinates N × 3 indicate that each of the N points has a three-dimensional coordinate vector, and the point cloud normal vectors N × 3 indicate that each point has a three-dimensional normal vector. The point cloud feature extractor is the feature extraction layer of the primitive detection model to be trained; the sample point cloud data input to the point cloud feature extractor comprises the point cloud coordinates N × 3 and the point cloud normal vectors N × 3, and the point cloud features are the high-dimensional features extracted in the high-dimensional feature extraction step. Primitive classification means classifying the sample point cloud data point by point according to the point cloud features to obtain the original primitive type of each point in the sample point cloud data. Primitive segmentation means segmenting the sample point cloud data into a plurality of sample primitives according to the high-dimensional features. The parametric fitter is the parametric fitting layer; the point cloud offsets are the predicted spatial offsets of the points in the sample point cloud data, aggregated with the spatial offsets of each point's neighborhood points, and the offset points are the spatial offset results of the points.
Specifically, after extracting the global structural relationship, the primitive detection model to be trained may calculate a feature loss corresponding to the high-dimensional feature extraction step, a spatial offset loss corresponding to the spatial offset prediction step, a type prediction loss corresponding to the primitive classification step, and a global structural relationship loss corresponding to the global structural relationship extraction step. And calculating the comprehensive loss of the element detection model to be trained according to the characteristic loss and the corresponding characteristic loss weight, the spatial offset loss and the corresponding spatial offset weight, the type prediction loss and the corresponding type prediction weight, the global structure relationship loss and the corresponding global structure relationship loss weight and the preset loss calculation relationship. And training the primitive detection model to be trained according to the comprehensive loss, and stopping model training until the comprehensive loss does not decrease or reaches a preset iteration number, so as to obtain a pre-trained primitive detection model.
Further, the feature loss may be calculated by applying a triplet loss function to the extracted high-dimensional features. The loss function randomly selects a point from the sample point cloud data as the anchor point $P_A$, then selects a point belonging to the same sample primitive as the anchor as the in-class point $P_P$, and selects a point from a different sample primitive as the out-of-class point $P_N$. These three points form a triplet $t = \langle P_A, P_P, P_N \rangle$, and repeating this process generates a triplet set $T = \{t_1, \ldots, t_{T_S}\}$. The feature loss corresponding to the triplet set $T$ is then:

$$L_{emb} = \frac{1}{T_S} \sum_{i=1}^{T_S} \max\left(\left\| f(P_A) - f(P_P) \right\|_2 - \left\| f(P_A) - f(P_N) \right\|_2 + \alpha,\; 0\right) \qquad (9)$$

where $L_{emb}$ denotes the feature loss, $T_S$ denotes the number of triplets, which may be, for example, 30, $t_i$ denotes a triplet, $f(\cdot)$ denotes the high-dimensional feature of a point, and the margin $\alpha$ indicates how far apart the in-class point and the out-of-class point should at least be: the larger its value, the larger the required distance between the two, and the higher the requirement on feature extraction.
The primitive detection model is trained by calculating the feature loss so as to optimize the feature extraction layer. The optimization process of the feature extraction layer can be as shown in FIG. 6: by optimizing the feature extraction layer, points from the same sample primitive move close to each other in the feature space, while points from different sample primitives move away from each other, increasing the discrimination between different sample primitives.
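The triplet feature loss can be sketched as follows, assuming each triplet already holds the high-dimensional feature vectors of the anchor, in-class and out-of-class points; the margin default and function name are illustrative:

```python
import math

def triplet_feature_loss(triplets, margin=1.0):
    """Mean triplet loss over (anchor, positive, negative) feature
    vectors: max(||a - p|| - ||a - n|| + margin, 0), averaged over T."""
    total = 0.0
    for a, p, n in triplets:
        d_ap = math.dist(a, p)  # anchor to in-class point
        d_an = math.dist(a, n)  # anchor to out-of-class point
        total += max(d_ap - d_an + margin, 0.0)
    return total / len(triplets)
```

The loss is zero once the out-of-class point is at least `margin` farther from the anchor than the in-class point, which is exactly the separation behavior described for FIG. 6.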
The spatial offset loss can be calculated according to the following formula:

$$L_{dis} = \frac{1}{N} \sum_{i=1}^{N} \left\| \left(P_i + \Delta_i\right) - C_i \right\|_2 \qquad (10)$$

where $L_{dis}$ denotes the spatial offset loss, $N$ denotes the number of points in the sample point cloud data, $\Delta_i$ denotes the spatial offset of point $P_i$, and $C_i$ is the center of the sample primitive in which point $P_i$ lies, obtained by taking the center point of the three-dimensional bounding box of the sample primitive when the sample point cloud data is read.
The type prediction loss can adopt the cross-entropy loss, calculated as follows:

$$L_{type} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{c} y_{i,c} \log \hat{y}_{i,c} \qquad (11)$$

where $L_{type}$ denotes the type prediction loss, $N$ denotes the number of points in the sample point cloud data, $y_{i,c}$ indicates whether point $p_i$ in the sample point cloud data truly belongs to primitive type $c$, and $\hat{y}_{i,c}$ is the predicted probability that point $p_i$ belongs to primitive type $c$.
The global structural relationship loss is calculated as in the above equations (4), (5) and (6).
The comprehensive loss of the element detection model to be trained is then calculated from the feature loss and its corresponding feature weight, the spatial offset loss and its corresponding spatial offset weight, the type prediction loss and its corresponding type prediction weight, and the global structure relationship loss and its corresponding global structure relationship weight, according to a preset loss calculation relationship. The preset loss calculation relationship is the calculation formula of the comprehensive loss, which can be:

$$L = \lambda_{off}L_{off} + \lambda_{feat}L_{feat} + \lambda_{type}L_{type} + \lambda_{rel}\big(L_{parallel} + L_{vertical} + L_{align}\big) \qquad (12)$$

where $L$ represents the comprehensive loss; $\lambda_{off}$ represents the spatial offset weight corresponding to the spatial offset loss $L_{off}$; $\lambda_{feat}$ represents the feature weight corresponding to the feature loss $L_{feat}$; $\lambda_{type}$ represents the type prediction weight corresponding to the type prediction loss $L_{type}$; $\lambda_{rel}$ represents the global structure relationship weight corresponding to the global structure relationship loss; and $L_{parallel}$, $L_{vertical}$ and $L_{align}$ represent the parallel structure relationship loss, the vertical structure relationship loss and the axis alignment loss, respectively. For example, $\lambda_{off}=10$, $\lambda_{feat}=1.5$, $\lambda_{type}=0.5$, $\lambda_{rel}=0.1$.
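For illustration, the comprehensive loss of equation (12) reduces to a plain weighted sum; the sketch below uses the example weights just given as defaults (function and argument names are illustrative, not from the patent):

```python
def composite_loss(l_offset, l_feat, l_type, l_parallel, l_vertical, l_align,
                   w_offset=10.0, w_feat=1.5, w_type=0.5, w_rel=0.1):
    """Weighted sum of the loss terms of equation (12); the default
    weights are the example values given in the text."""
    return (w_offset * l_offset
            + w_feat * l_feat
            + w_type * l_type
            + w_rel * (l_parallel + l_vertical + l_align))
```

With all six raw losses equal to 1, the result is 10 + 1.5 + 0.5 + 0.1 × 3 = 12.3, which makes the relative emphasis on the spatial offset term easy to see.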
In this embodiment, the comprehensive loss of the primitive detection model to be trained is computed from the feature loss of the high-dimensional feature extraction step, the spatial offset loss of the spatial offset prediction step, the type prediction loss of the primitive classification step, and the global structure relationship loss of the global structure relationship extraction step. Training the model against this comprehensive loss improves the primitive detection accuracy of the primitive detection model as a whole.
In another embodiment, as shown in fig. 7, there is provided a method for primitive detection based on point cloud data, the method comprising the steps of:
step 702, shallow layer feature extraction.
And extracting shallow features corresponding to the sample point cloud data through the element detection model to be trained.
Step 704, deep feature extraction.
And extracting deep features corresponding to the sample point cloud data through the element detection model to be trained.
Step 706, high-dimensional feature extraction.
And combining the shallow feature and the deep feature to obtain the high-dimensional feature corresponding to the sample point cloud data.
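A minimal sketch of the feature-combination step, assuming per-point feature matrices and channel-wise concatenation (a common way to combine shallow and deep features; the patent does not specify the exact operation):

```python
import numpy as np

def fuse_features(shallow, deep):
    """Combine per-point shallow and deep features into one high-dimensional
    feature by concatenating along the channel (last) axis."""
    assert shallow.shape[0] == deep.shape[0], "one row per point in both inputs"
    return np.concatenate([shallow, deep], axis=1)
```

For a cloud of 100 points with 64 shallow and 128 deep channels, the fused feature has 192 channels per point.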
Step 708, point-by-point classification.
And classifying the sample point cloud data according to the high-dimensional characteristics through the element detection model to be trained to obtain the original element types of each point in the sample point cloud data.
Step 710, spatial offset prediction.
And predicting the space offset of each point in the sample point cloud data according to the high-dimensional characteristics by using the element detection model to be trained.
Step 712, spatial offset result calculation.
And calculating the space offset result of the corresponding point according to the space offset of each point.
Step 714, primitive segmentation.
And dividing the sample point cloud data into a plurality of sample elements according to the spatial offset result and the high-dimensional characteristics of each point in the sample point cloud data.
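The offset-then-segment idea can be illustrated with a toy greedy clustering of the shifted points; the radius threshold and the clustering scheme are illustrative stand-ins for the model's learned segmentation, not the patent's procedure:

```python
import numpy as np

def segment_by_shifted_points(points, offsets, radius=0.1):
    """Shift each point by its predicted offset, then greedily group points
    whose shifted positions lie within `radius` of a seed point."""
    shifted = points + offsets                      # spatial offset result
    labels = -np.ones(len(points), dtype=int)       # -1 means unassigned
    next_label = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue                                # already assigned to a primitive
        d = np.linalg.norm(shifted - shifted[i], axis=1)
        labels[(d < radius) & (labels == -1)] = next_label
        next_label += 1
    return labels
```

Because the offsets pull points toward their primitive centers, two adjacent primitives of the same type end up as two well-separated clusters of shifted points.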
Step 716, primitive type determination.
Through the primitive detection model to be trained, the sample primitive type of each sample primitive is determined from the original primitive types of the points belonging to that sample primitive.
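The per-primitive type decision described above amounts to a majority vote over the per-point labels of each segment; a sketch with illustrative names:

```python
from collections import Counter

def primitive_types_by_vote(point_types, segment_labels):
    """For each segment, pick the most frequent per-point primitive type."""
    votes = {}
    for t, seg in zip(point_types, segment_labels):
        votes.setdefault(seg, []).append(t)
    return {seg: Counter(ts).most_common(1)[0][0] for seg, ts in votes.items()}
```

A few mislabeled points inside a segment are thus outvoted by the majority, which makes the per-primitive type more robust than any single per-point prediction.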
Step 718, fit parameterization.
And carrying out parametric fitting on the corresponding sample elements according to the sample element types of the sample elements and the sample point cloud data through the element detection model to be trained to obtain the geometric parameters corresponding to the sample elements.
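As an illustration of parametric fitting for one primitive type, a plane can be fitted to a segment's points by least squares via SVD; this is a standard technique, not necessarily the patent's fitting procedure:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (unit normal, centroid).

    The normal is the right singular vector of the centred points with the
    smallest singular value (the direction of least variance)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal / np.linalg.norm(normal), centroid
```

Analogous closed-form or iterative fits exist for the other primitive types (spheres, cylinders, cones), each yielding the geometric parameters used by the global structure relationship extraction step.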
And 720, extracting the global structure relationship.
And extracting the global structure relation among the plurality of sample elements according to the geometric parameters corresponding to the sample elements through the element detection model to be trained.
Step 722, the combined loss calculation.
The feature loss corresponding to the high-dimensional feature extraction step, the spatial offset loss corresponding to the spatial offset prediction step, the type prediction loss corresponding to the primitive classification step, and the global structure relationship loss corresponding to the global structure relationship extraction step are calculated. The comprehensive loss of the primitive detection model to be trained is then calculated from these losses, their corresponding weights, and the preset loss calculation relationship. The primitive detection model to be trained is trained according to the comprehensive loss, and model training is stopped when the comprehensive loss no longer decreases or a preset number of iterations is reached, so as to obtain a pre-trained primitive detection model.
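A minimal sketch of this stopping rule, with illustrative patience and tolerance values (the patent does not specify how "no longer decreasing" is detected):

```python
def train_until_plateau(step_fn, max_iters=1000, patience=10, tol=1e-6):
    """Call `step_fn()` (one training step returning the comprehensive loss)
    until the loss has not decreased for `patience` consecutive steps, or
    `max_iters` iterations have run; return the best loss seen."""
    best, stale = float("inf"), 0
    for _ in range(max_iters):
        loss = step_fn()
        if loss < best - tol:
            best, stale = loss, 0      # genuine improvement: reset patience
        else:
            stale += 1
            if stale >= patience:
                break                  # comprehensive loss has plateaued
    return best
```

In practice `step_fn` would run one optimizer step over a batch of sample point clouds and return the scalar comprehensive loss of equation (12).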
Step 724, primitive detection.
The method comprises the steps of obtaining point cloud data to be detected, inputting the point cloud data to be detected into a pre-trained element detection model, determining a plurality of elements to be detected corresponding to the point cloud data to be detected, and fitting geometric parameters corresponding to the elements to be detected to obtain element detection results.
The point cloud data to be detected refers to point cloud data which needs to be subjected to element detection in the actual application process. The primitive detection process is the same as the geometric parameter fitting process of the primitive detection model in the training process, and details are not repeated here.
In this embodiment, since the extracted high-dimensional features contain both global structure information and local detail, they provide richer context for the subsequent primitive classification and primitive segmentation. Parametric fitting of each sample primitive, performed by the primitive detection model to be trained according to the sample primitive type and the sample point cloud data, then yields more accurate geometric parameters for each sample primitive. Spatial offset prediction separates adjacent sample primitives of the same class in the sample point cloud data, so the high-dimensional features actively encode the spatial offset of each point toward its primitive, which benefits the subsequent classification and segmentation tasks. The global structure relationships among the sample primitives are then extracted from the geometric parameters of the sample primitives. Because the pre-trained primitive detection model is obtained by training with these global structure relationships, the structural relationships of the geometric primitives contained in the point cloud data to be detected can be judged more accurately within the whole point cloud, which greatly improves primitive detection accuracy.
Illustratively, the pre-trained primitive detection model may be a RelationNet model. To verify the advantages of the RelationNet model in the present application, four reference methods are selected for comparison on the ABC data set, including two traditional methods, Nearest Neighbor (NN) and RANdom SAmple Consensus (RANSAC), and two leading deep learning methods, SPFN and ParseNet (parametric surface fitting network). After training on the ABC data set and running the corresponding tests, the RelationNet model achieves the leading results, as shown in Table 2.
TABLE 2 Comparison of RelationNet with existing methods on the ABC test set
[Table 2 is provided as an image in the original publication; it reports segmentation and classification accuracy for NN, RANSAC, SPFN, ParseNet and RelationNet on the ABC test set.]
By adding spatial offset prediction and the global structure relationship constraint, RelationNet improves both segmentation and classification accuracy: the MIoU of primitive segmentation rises from 82.14% to 85.08%, an improvement of 2.94 percentage points, and the MIoU of primitive classification rises from 88.6% to 90.1%, an improvement of 1.5 percentage points.
Compared with ParseNet, which performs primitive segmentation entirely from the extracted point cloud features, RelationNet benefits from the point offsets provided by the spatial offset prediction layer: it actively encodes, during feature extraction, the relative position between each point and the primitive it belongs to, and therefore produces sharper edges than the other methods. In addition, the global structure constraint layer computes its loss from parameters fitted over entire primitives; since far more points are taken into account, the extracted loss has stronger constraint capability, and RelationNet is less prone to large-area segmentation errors.
Fig. 8 shows the results of the spatial offset prediction layer in RelationNet; the original point cloud is the input point cloud of RelationNet.
When processing input point clouds with simple structures, the spatial offset prediction layer accurately shifts points to the center positions of their primitives; even when processing input point clouds with complex structures, it produces a clear gap between different primitives, which helps the subsequent primitive segmentation task.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown in sequence as indicated by the arrows, they are not necessarily executed in that sequence. Unless explicitly stated otherwise herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts of the above embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the application also provides a point cloud data-based element detection device for realizing the point cloud data-based element detection method. The solution of the problem provided by the apparatus is similar to the solution described in the above method, so the specific limitations in one or more embodiments of the point cloud data-based primitive detection apparatus provided below can be referred to the above limitations on the point cloud data-based primitive detection method, and are not described herein again.
In one embodiment, as shown in fig. 9, there is provided a primitive detection apparatus based on point cloud data, including: a geometric parameter fitting module 902, a global structure relationship extraction module 904, a training module 906, and a primitive detection module 908, wherein:
a geometric parameter fitting module 902, configured to input the obtained sample point cloud data into an element detection model to be trained, determine a plurality of sample elements corresponding to the sample point cloud data, and fit geometric parameters corresponding to each sample element;
a global structure relationship extracting module 904, configured to extract, through the primitive detection model to be trained, a global structure relationship among the multiple sample primitives according to the geometric parameters corresponding to the sample primitives;
the training module 906 is configured to constrain the primitive detection model to be trained according to an error structural relationship in the global structural relationship, so as to obtain a pre-trained primitive detection model;
and the primitive detection module 908 is configured to perform primitive detection on the acquired point cloud data to be detected through a pre-trained primitive detection model.
In one embodiment, the geometric parameter fitting module 902 includes:
the characteristic extraction module is used for extracting the characteristics of the sample point cloud data through the element detection model to be trained to obtain high-dimensional characteristics corresponding to the sample point cloud data;
the element detection module is used for determining a plurality of sample elements corresponding to the sample point cloud data according to the high-dimensional features through an element detection model to be trained and predicting the sample element type corresponding to each sample element;
and the parametric fitting module is used for carrying out parametric fitting on the corresponding sample elements according to the sample element types of the sample elements and the sample point cloud data through the element detection model to be trained to obtain the geometric parameters corresponding to the sample elements.
In one embodiment, the feature extraction module is further configured to extract shallow features corresponding to the sample point cloud data through a primitive detection model to be trained; extracting deep features corresponding to sample point cloud data through a primitive detection model to be trained; and combining the shallow feature and the deep feature to obtain the high-dimensional feature corresponding to the sample point cloud data.
In one embodiment, the primitive detection module further comprises:
the element classification module is used for classifying the sample point cloud data according to the high-dimensional characteristics through an element detection model to be trained to obtain the original element types of all points in the sample point cloud data;
the element segmentation module is used for segmenting the sample point cloud data into a plurality of sample elements according to the high-dimensional characteristics through an element detection model to be trained;
the primitive classification module is further used for selecting the primitive type with the largest number from the primitive types of the points corresponding to the sample primitives through the primitive detection model to be trained, and determining the primitive type with the largest number as the sample primitive type of the corresponding sample primitive.
In one embodiment, the primitive segmentation module further comprises: the spatial offset module is used for predicting the spatial offset of each point in the sample point cloud data according to the high-dimensional features through the primitive detection model to be trained; calculating the space offset result of the corresponding point according to the space offset of each point; and dividing the sample point cloud data into a plurality of sample elements according to the spatial offset result and the high-dimensional characteristics of each point in the sample point cloud data.
In one embodiment, the primitive detection module 908 is further configured to obtain point cloud data to be detected; inputting the point cloud data to be detected into a pre-trained element detection model, determining a plurality of elements to be detected corresponding to the point cloud data to be detected, and fitting the geometric parameters corresponding to the elements to be detected to obtain an element detection result.
The modules in the above-mentioned primitive detection device based on point cloud data can be wholly or partially implemented by software, hardware and their combination. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 10. The computer device includes a processor, a memory, an Input/Output interface (I/O for short), and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer equipment is used for storing data such as sample point cloud data, element detection models to be trained, element detection models trained in advance and the like. The input/output interface of the computer device is used for exchanging information between the processor and an external device. The communication interface of the computer device is used for connecting and communicating with an external terminal through a network. The computer program is executed by a processor to implement a method of primitive detection based on point cloud data.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 11. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used for exchanging information between the processor and an external device. The communication interface of the computer device is used for wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by the processor to implement a method of primitive detection based on point cloud data. The display unit of the computer device is used for forming a visually visible picture, and may be a display screen, a projection device or a virtual reality imaging device. The display screen may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device may be a touch layer covering the display screen, a key, a track ball or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad or mouse.
Those skilled in the art will appreciate that the configurations shown in fig. 10 and 11 are merely block diagrams of some configurations relevant to the present disclosure, and do not constitute a limitation on the computing devices to which the present disclosure may be applied, and that a particular computing device may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the above-described method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In an embodiment, a computer program product is provided, comprising a computer program which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the relevant laws and regulations and standards of the relevant country and region.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high-density embedded nonvolatile Memory, resistive Random Access Memory (ReRAM), Magnetic Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene Memory, and the like. Volatile Memory can include Random Access Memory (RAM), external cache Memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others. The databases referred to in various embodiments provided herein may include at least one of relational and non-relational databases. The non-relational database may include, but is not limited to, a block chain based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum computing based data processing logic devices, etc., without limitation.
All possible combinations of the technical features in the above embodiments may not be described for the sake of brevity, but should be considered as being within the scope of the present disclosure as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A method for detecting primitives based on point cloud data, the method comprising:
inputting the acquired sample point cloud data into an element detection model to be trained, determining a plurality of sample elements corresponding to the sample point cloud data, and fitting geometric parameters corresponding to the sample elements;
extracting the global structure relation among a plurality of sample elements according to the geometric parameters corresponding to the sample elements through the element detection model to be trained;
according to the wrong structural relationship in the global structural relationship, constraining the element detection model to be trained to obtain a pre-trained element detection model;
and carrying out element detection on the acquired point cloud data to be detected through the pre-trained element detection model.
2. The method of claim 1, wherein the inputting the acquired sample point cloud data into the element detection model to be trained, determining the plurality of sample elements corresponding to the sample point cloud data, and fitting the geometric parameters corresponding to each sample element comprises:
extracting the characteristics of the sample point cloud data through the element detection model to be trained to obtain high-dimensional characteristics corresponding to the sample point cloud data;
determining a plurality of sample elements corresponding to the sample point cloud data according to the high-dimensional features through the element detection model to be trained, and predicting the sample element type corresponding to each sample element;
and carrying out parametric fitting on corresponding sample elements through the element detection model to be trained according to the sample element types of the sample elements and the sample point cloud data to obtain the geometric parameters corresponding to the sample elements.
3. The method of claim 2, wherein the extracting the features of the sample point cloud data through the primitive detection model to be trained to obtain the corresponding high-dimensional features of the sample point cloud data comprises:
extracting shallow features corresponding to the sample point cloud data through the element detection model to be trained;
extracting deep features corresponding to the sample point cloud data through the element detection model to be trained;
and combining the shallow feature and the deep feature to obtain a high-dimensional feature corresponding to the sample point cloud data.
4. The method of claim 2, wherein the determining, by the element detection model to be trained, a plurality of sample elements corresponding to the sample point cloud data according to the high-dimensional features, and predicting a sample element type corresponding to each sample element comprises:
classifying the sample point cloud data according to the high-dimensional features through the element detection model to be trained to obtain the original element types of each point in the sample point cloud data;
dividing the sample point cloud data into a plurality of sample elements according to the high-dimensional features through the element detection model to be trained;
and selecting the original primitive type with the largest number from the original primitive types of the points corresponding to the sample primitives through the primitive detection model to be trained to determine the original primitive type as the sample primitive type of the corresponding sample primitive.
5. The method of claim 4, wherein the segmenting the sample point cloud data into a plurality of sample primitives according to the high-dimensional features by the primitive detection model to be trained comprises:
predicting the space offset of each point in the sample point cloud data according to the high-dimensional features through the element detection model to be trained;
calculating the space offset result of the corresponding point according to the space offset of each point;
and segmenting the sample point cloud data into a plurality of sample elements according to the spatial offset result of each point in the sample point cloud data and the high-dimensional features.
6. The method according to any one of claims 1 to 5, wherein the performing primitive detection on the acquired point cloud data to be detected through the pre-trained primitive detection model comprises:
acquiring point cloud data to be detected;
inputting the point cloud data to be detected into the pre-trained element detection model, determining a plurality of elements to be detected corresponding to the point cloud data to be detected, and fitting the geometric parameters corresponding to the elements to be detected to obtain element detection results.
7. A device for detecting elements based on point cloud data, the device comprising:
the geometric parameter fitting module is used for inputting the acquired sample point cloud data into an element detection model to be trained, determining a plurality of sample elements corresponding to the sample point cloud data, and fitting geometric parameters corresponding to all the sample elements;
the global structure relation extraction module is used for extracting the global structure relation among a plurality of sample elements according to the geometric parameters corresponding to the sample elements through the element detection model to be trained;
the training module is used for constraining the element detection model to be trained according to the error structural relationship in the global structural relationship to obtain a pre-trained element detection model;
and the element detection module is used for carrying out element detection on the acquired point cloud data to be detected through the pre-trained element detection model.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of any one of claims 1 to 6 when executed by a processor.
CN202210764430.5A 2022-07-01 2022-07-01 Element detection method and device based on point cloud data and computer equipment Active CN114821013B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210764430.5A CN114821013B (en) 2022-07-01 2022-07-01 Element detection method and device based on point cloud data and computer equipment

Publications (2)

Publication Number Publication Date
CN114821013A true CN114821013A (en) 2022-07-29
CN114821013B CN114821013B (en) 2022-10-18

Family

ID=82523179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210764430.5A Active CN114821013B (en) 2022-07-01 2022-07-01 Element detection method and device based on point cloud data and computer equipment

Country Status (1)

Country Link
CN (1) CN114821013B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228539A (en) * 2016-07-12 2016-12-14 北京工业大学 Multiple geometric primitive automatic identifying method in a kind of three-dimensional point cloud
CN110009726A (en) * 2019-03-08 2019-07-12 浙江中海达空间信息技术有限公司 A method of according to the structural relation between plane primitive to data reduction plane
CN110009745A (en) * 2019-03-08 2019-07-12 浙江中海达空间信息技术有限公司 According to plane primitive and model-driven to the method for data reduction plane
CN112417579A (en) * 2021-01-25 2021-02-26 深圳大学 Semantic-constrained planar primitive topological relation rule detection and recovery method
CN112489207A (en) * 2021-02-07 2021-03-12 深圳大学 Space-constrained dense matching point cloud plane element extraction method
CN112634340A (en) * 2020-12-24 2021-04-09 深圳大学 Method, device, equipment and medium for determining BIM (building information modeling) model based on point cloud data
CN112712596A (en) * 2021-03-29 2021-04-27 深圳大学 Dense matching point cloud building structured model fine reconstruction method
CN113378760A (en) * 2021-06-25 2021-09-10 北京百度网讯科技有限公司 Training target detection model and method and device for detecting target
CN114648676A (en) * 2022-03-25 2022-06-21 北京百度网讯科技有限公司 Point cloud processing model training and point cloud instance segmentation method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIYONG LIU et al.: "Object Geometric Primitives Detection from 3D Point Clouds Based on the Profiles on Cutting Planes", 2021 3rd International Symposium on Smart and Healthy Cities (ISHC) *
LU Guiliang: "Research on Semantic Segmentation Modeling of 3D Point Cloud Scenes", China Masters' Theses Full-text Database (Information Science and Technology) *

Also Published As

Publication number Publication date
CN114821013B (en) 2022-10-18

Similar Documents

Publication Publication Date Title
CN112529015B (en) Three-dimensional point cloud processing method, device and equipment based on geometric unwrapping
CN108734210B (en) Object detection method based on cross-modal multi-scale feature fusion
CN111126202A (en) Optical remote sensing image target detection method based on void feature pyramid network
WO2022193335A1 (en) Point cloud data processing method and apparatus, and computer device and storage medium
CN111352965B (en) Training method of sequence mining model, and processing method and equipment of sequence data
CN115439694A (en) High-precision point cloud completion method and device based on deep learning
CN115601511B (en) Three-dimensional reconstruction method and device, computer equipment and computer readable storage medium
CN113129311B (en) Label optimization point cloud instance segmentation method
CN111275171A (en) Small target detection method based on parameter sharing and multi-scale super-resolution reconstruction
CN116310850B (en) Remote sensing image target detection method based on improved RetinaNet
CN115457492A (en) Target detection method and device, computer equipment and storage medium
CN115311502A (en) Remote sensing image small sample scene classification method based on multi-scale double-flow architecture
Sun et al. Two-stage deep regression enhanced depth estimation from a single RGB image
CN116977872A (en) CNN+ transducer remote sensing image detection method
Laupheimer et al. The importance of radiometric feature quality for semantic mesh segmentation
Xiang et al. Crowd density estimation method using deep learning for passenger flow detection system in exhibition center
Chuang et al. Learning-guided point cloud vectorization for building component modeling
CN115953330B (en) Texture optimization method, device, equipment and storage medium for virtual scene image
CN114821013B (en) Element detection method and device based on point cloud data and computer equipment
CN116977265A (en) Training method and device for defect detection model, computer equipment and storage medium
Yuan et al. Regularity selection for effective 3D object reconstruction from a single line drawing
US20220180548A1 (en) Method and apparatus with object pose estimation
CN116206302A (en) Three-dimensional object detection method, three-dimensional object detection device, computer equipment and storage medium
CN113139540B (en) Backboard detection method and equipment
CN115310672A (en) City development prediction model construction method, city development prediction method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant