CN113591869A - Point cloud instance segmentation method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113591869A
CN113591869A
Authority
CN
China
Prior art keywords
point
instance
points
point set
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110887047.4A
Other languages
Chinese (zh)
Inventor
陈少宇
程天恒
张骞
黄畅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Horizon Information Technology Co Ltd
Original Assignee
Beijing Horizon Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Horizon Information Technology Co Ltd filed Critical Beijing Horizon Information Technology Co Ltd
Priority to CN202110887047.4A priority Critical patent/CN113591869A/en
Publication of CN113591869A publication Critical patent/CN113591869A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/231Hierarchical techniques, i.e. dividing or merging pattern sets so as to obtain a dendrogram
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Abstract

The embodiments of the present disclosure disclose a point cloud instance segmentation method and apparatus, an electronic device, and a storage medium. The point cloud instance segmentation method includes: predicting a semantic label for each point in a point cloud of a target scene; clustering the points in the point cloud based on their semantic labels to obtain at least one point set, where points in the same point set share the same semantic label; clustering the at least one point set based on the semantic labels of the point sets to obtain a first instance segmentation result, the first instance segmentation result including a point set corresponding to each of at least one instance; predicting a confidence for each instance in the first instance segmentation result; and filtering the first instance segmentation result based on the instance confidences to obtain a second instance segmentation result. The embodiments of the present disclosure can generate finer and more accurate instance segmentation results.

Description

Point cloud instance segmentation method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to computer vision technologies, and in particular, to a point cloud instance segmentation method and apparatus, an electronic device, and a storage medium.
Background
In computer vision, an instance segmentation task first classifies each pixel in an image to form a semantic segmentation result, and then, on that basis, separates the pixels within each class according to the different objects they belong to, thereby distinguishing the instance to which each pixel belongs. Instance segmentation has very wide application in fields such as autonomous driving and household robots.
Instance segmentation in three-dimensional (3D) vision is of great importance in real life. For example, in autonomous driving, in addition to detecting vehicles and pedestrians on the road, the distances to those vehicles and pedestrians must be determined precisely. In 3D vision, point clouds are a common form of data. Point cloud instance segmentation identifies the object class to which each point in a given scene belongs and, on that basis, segments each distinct object by indicating the instance to which each point belongs. Point cloud instance segmentation is a foundation of 3D perception.
In the prior art, a candidate-region-based (proposal-based) instance segmentation method is mainly adopted to segment point cloud instances. Because accurate 3D bounding boxes are difficult to predict, the precision and accuracy of the point cloud instance segmentation results are affected.
Therefore, how to distinguish the point clouds of different objects within a point cloud, so as to perform accurate point cloud instance segmentation, is a technical problem to be solved urgently.
Disclosure of Invention
The present disclosure is proposed to solve the above technical problems. The embodiment of the disclosure provides a point cloud example segmentation method and device, electronic equipment and a storage medium.
According to an aspect of the embodiments of the present disclosure, there is provided a point cloud instance segmentation method, including:
respectively predicting semantic labels of each point in the point cloud of the target scene;
clustering each point in the point cloud based on the semantic label of each point to obtain at least one point set; in the at least one point set, semantic labels of points in the same point set are the same;
clustering the at least one point set based on the semantic label of the at least one point set to obtain a first example segmentation result, wherein the first example segmentation result comprises point sets respectively corresponding to at least one example;
predicting confidence of each instance in the first instance segmentation result respectively;
and filtering the first example segmentation result based on the confidence degrees of the examples to obtain a second example segmentation result.
According to an aspect of the embodiments of the present disclosure, there is provided a point cloud example segmentation apparatus including:
the first prediction module is used for respectively predicting semantic labels of all points in the point cloud of the target scene;
the first clustering module is used for clustering each point in the point cloud based on the semantic label of each point to obtain at least one point set; in the at least one point set, semantic labels of points in the same point set are the same;
the second clustering module is used for clustering the at least one point set based on the semantic label of the at least one point set to obtain a first example segmentation result, and the first example segmentation result comprises point sets respectively corresponding to at least one example;
the second prediction module is used for respectively predicting the confidence of each example in the first example segmentation result;
and the filtering module is used for filtering the first example segmentation result based on the confidence degrees of the examples to obtain a second example segmentation result.
According to still another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the point cloud instance segmentation method according to any one of the above embodiments of the present disclosure.
According to still another aspect of an embodiment of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
the processor is used for reading the executable instructions from the memory and executing the instructions to implement the point cloud instance segmentation method according to any one of the above embodiments of the present disclosure.
Based on the point cloud instance segmentation method and apparatus, electronic device, and storage medium provided by the embodiments of the present disclosure, semantic labels of the points in the point cloud of a target scene are predicted, and the points in the point cloud are clustered based on their semantic labels to obtain at least one point set, where points in the same point set share the same semantic label; then the at least one point set is clustered based on the semantic labels of the point sets to obtain a first instance segmentation result, a confidence is predicted for each instance in the first instance segmentation result, and the first instance segmentation result is filtered based on the instance confidences to obtain a second instance segmentation result. The embodiments of the present disclosure thus use hierarchical clustering: point sets are first formed by clustering at the point level, a preliminary first instance segmentation result is then formed by clustering the point sets at the set level, the confidence of each instance is predicted, and the point sets of low-confidence instances are filtered out of the first instance segmentation result. In other words, low-confidence instance predictions are removed, yielding an accurate second instance segmentation result. A finer and more accurate instance segmentation result can therefore be generated, and the precision and accuracy of instance segmentation are improved compared with region-proposal-based instance segmentation methods.
The technical solution of the present disclosure is further described in detail by the accompanying drawings and examples.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a scene diagram to which the present disclosure is applicable.
Fig. 2 is a schematic flow chart of a point cloud instance segmentation method provided by an exemplary embodiment of the present disclosure.
Fig. 3 is a schematic flow chart of a point cloud instance segmentation method provided by another exemplary embodiment of the present disclosure.
Fig. 4 is a schematic flow chart of a point cloud instance segmentation method provided by still another exemplary embodiment of the present disclosure.
Fig. 5 is a schematic flow chart of a point cloud instance segmentation method provided by still another exemplary embodiment of the present disclosure.
Fig. 6 is a schematic flow chart of a point cloud instance segmentation method provided by still another exemplary embodiment of the present disclosure.
Fig. 7(a) is a schematic process diagram of a point cloud instance segmentation method provided by an exemplary application embodiment of the present disclosure.
Fig. 7(b) illustrates the change process of some of the points in the point cloud in the embodiment of Fig. 7(a).
Fig. 8 is a structural diagram of a point cloud instance segmentation apparatus provided by an exemplary embodiment of the present disclosure.
Fig. 9 is a structural diagram of a point cloud instance segmentation apparatus provided by another exemplary embodiment of the present disclosure.
Fig. 10 is a block diagram of an electronic device provided by an exemplary embodiment of the present disclosure.
Detailed Description
Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of the embodiments of the present disclosure and not all embodiments of the present disclosure, with the understanding that the present disclosure is not limited to the example embodiments described herein.
It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
It will be understood by those of skill in the art that the terms "first," "second," and the like in the embodiments of the present disclosure are used merely to distinguish one element from another, and are not intended to imply any particular technical meaning, nor is the necessary logical order between them.
It is also understood that in embodiments of the present disclosure, "a plurality" may refer to two or more and "at least one" may refer to one, two or more.
It is also to be understood that any reference to any component, data, or structure in the embodiments of the disclosure, may be generally understood as one or more, unless explicitly defined otherwise or stated otherwise.
In addition, the term "and/or" in the present disclosure describes only an association relationship between associated objects, indicating that three kinds of relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" in the present disclosure generally indicates that the former and latter associated objects are in an "or" relationship.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
The disclosed embodiments may be applied to electronic devices such as terminal devices, computer systems, servers, etc., which are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with electronic devices, such as terminal devices, computer systems, servers, and the like, include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Summary of the application
In the prior art, a candidate-region-based instance segmentation method is mainly adopted to segment point cloud instances. A region-proposal-based instance segmentation method first obtains candidate regions of interest in a scene, such as 3D bounding boxes, and then predicts an instance mask for the object within each candidate region.
In the process of implementing the invention, the inventors discovered through research that a region-proposal-based instance segmentation method generally requires two stages: a candidate region is obtained first, and instance segmentation is then performed, making the segmentation process cumbersome. In addition, the method of obtaining a candidate region only roughly estimates the approximate geometry of an object, such as a 3D bounding box, and a 3D bounding box does not require a strong understanding of the underlying object's geometry. As a result, a 3D bounding box may contain multiple objects or only part of a single object, so it is difficult to predict accurate candidate regions, which seriously affects the accuracy and precision of the instance segmentation results.
Therefore, how to distinguish point clouds of different objects from the point clouds to perform accurate point cloud instance segmentation is a technical problem to be solved urgently.
In view of this, embodiments of the present disclosure provide a point cloud instance segmentation method and apparatus, an electronic device, and a storage medium. Using hierarchical clustering, point sets are formed by clustering points at the point level, a preliminary first instance segmentation result is formed by clustering the point sets at the set level, and low-confidence instance predictions are then filtered out of the first instance segmentation result to obtain an accurate second instance segmentation result. A finer and more accurate instance segmentation result can thus be generated, and the accuracy of instance segmentation is improved compared with region-proposal-based instance segmentation methods.
Exemplary System
The embodiment of the disclosure can be applied to vehicle control scenes such as automatic driving, point cloud editing scenes such as decoration design, robot control scenes, scenes of Augmented Reality (AR) maps, and the like.
For example, in a vehicle control scene, automatic driving may need to reach the position corresponding to a certain object. Based on the embodiments of the present disclosure, instance segmentation is performed on the point cloud of the current scene so that the position corresponding to the object can be determined accurately.
For another example, in a point cloud editing scene such as decoration design, a user wants to replace the sofa in a living room with a new one and first wants to see the effect of placing the new sofa. According to the embodiments of the present disclosure, the original sofa in the living room can be accurately segmented out of the point cloud and removed, and the new sofa can then be placed in.
For another example, in a robot control scene, a user sends an instruction to a robot, for example to fetch an apple. The robot can use the point cloud instance segmentation method provided by the embodiments of the present disclosure to perform accurate instance segmentation on the point cloud of the target scene and find the apple instance.
As another example, in an AR map scene, a user wearing AR glasses wishes, based on the scene seen, for a virtual puppy to walk over to a toy duck. The electronic device receives a request input by the user; the request carries the target "toy duck" and indicates that the toy duck should be found. The electronic device can use the point cloud instance segmentation method provided by the embodiments of the present disclosure to perform accurate instance segmentation on the point cloud of the scene and find the target toy duck.
Fig. 1 is a diagram of a scenario to which the present disclosure is applicable. As shown in Fig. 1, when the embodiments of the present disclosure are applied to a vehicle control scene, a point cloud of a target scene is acquired by a point cloud acquisition device 101 and input to a point cloud instance segmentation apparatus 102 according to any embodiment of the present disclosure. The point cloud instance segmentation apparatus 102 predicts a semantic label for each point in the point cloud of the target scene, clusters the points in the point cloud based on their semantic labels to obtain at least one point set, and clusters the at least one point set based on the semantic labels of the point sets to obtain a first instance segmentation result including a point set for at least one instance. It then predicts a confidence for each instance in the first instance segmentation result and filters the first instance segmentation result based on the instance confidences to obtain a second instance segmentation result, which includes the semantic labels of the instances in the target scene (such as pedestrian 1, pedestrian 2, …, object 1, object 2, …), the set of points belonging to each instance, and the positions of those points. A vehicle control device 103 determines, based on the second instance segmentation result, the position corresponding to a target object to be reached in the target scene, and controls the vehicle to travel to that position. The point cloud acquisition device 101 is, for example, a color-and-depth (RGB-D) device, a laser radar (2D/3D), a stereo camera, or a time-of-flight (ToF) camera.
Exemplary method
Fig. 2 is a schematic flow chart of a point cloud instance segmentation method provided by an exemplary embodiment of the present disclosure. The embodiments of the present disclosure can be applied to electronic devices such as user terminals and servers, and can also be applied to objects such as vehicles. As shown in Fig. 2, the point cloud instance segmentation method of this embodiment includes the following steps:
Step 201, respectively predicting a semantic label for each point in the point cloud of the target scene.
In the embodiment of the disclosure, the target scene may be a decoration scene, a vehicle driving scene, a scene in an AR map, a scene where a robot is located, and the like, and any scene in which object positioning is required may be used. In some possible implementations, the target scene may be a driving scene outside the vehicle, a living room, a bedroom, a dining room, a kitchen, a bathroom, and the like. The embodiment of the disclosure can be executed for any target scene, and the specific application scene is not limited.
In the embodiments of the present disclosure, a point cloud (Point Cloud) refers to the set of points obtained by acquiring the spatial coordinates of sampling points on the surfaces of objects. In some implementations, the point cloud may be represented by a set of vectors in a three-dimensional coordinate system to represent the shape of the outer surfaces of objects in the target scene. In addition to the three-dimensional coordinate (x, y, z) position information of the points on the objects in the target scene, the point cloud may also include color information for each point, such as any one or more of RGB values, grayscale values, and depth information.
The point cloud in the embodiments of the present disclosure may be obtained based on the laser measurement principle or the photogrammetry principle; for example, in some implementations, the point cloud may be collected by acquisition equipment such as an RGB-D device, a laser radar, a stereo camera, or a ToF camera. Alternatively, the point cloud in the embodiments of the present disclosure may be obtained in any other manner, which is not specifically limited in the embodiments of the present disclosure.
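The point representation described above can be sketched as follows. This is a hypothetical illustration only: the class and field names are placeholders, not a data structure defined by this disclosure.

```python
# Hypothetical sketch of one way to represent the points described above:
# each point carries (x, y, z) coordinates plus optional color
# information such as RGB values. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float
    z: float
    rgb: tuple = (0, 0, 0)   # optional color information

# A tiny point cloud: a set of vectors in a 3D coordinate system.
cloud = [Point(0.0, 0.0, 0.0, (255, 0, 0)),
         Point(1.0, 2.0, 0.5, (0, 255, 0))]
```

A real pipeline would typically store such a cloud as an N×3 (or N×6) tensor rather than a list of objects; the list form is used here only for readability.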
The semantic labels in the embodiments of the present disclosure are used to indicate the category to which a point belongs, for example person, animal, table, or chair.
Step 202, clustering each point in the point cloud based on the semantic label of each point to obtain at least one point set (set).
In the at least one point set, the semantic labels of points in the same point set are the same, and the semantic label of the points in a point set also serves as the semantic label of that point set; the semantic labels of points in different point sets may be the same or different.
Each point set obtained by the clustering in step 202 may include the semantic label of the point set and the position information and color information of each point in the point set.
Step 203, clustering the at least one point set based on the semantic label of the at least one point set to obtain a first instance segmentation result.
Wherein the first instance segmentation result comprises a set of points respectively corresponding to at least one instance.
An instance in the embodiments of the present disclosure refers to a concrete object that embodies the abstract concept of a category. That is, instances are the distinct individuals within the same category, and may refer to any real object. Instances are, for example, specific person 1, person 2, …, table 1, table 2, …, chair 1, chair 2, …, and the like.
The first instance segmentation result obtained in step 203 may include an instance identification (ID) and the point set of each instance. The instance ID is used to uniquely identify an object in the target scene, such as person 1, person 2, …, table 1, table 2, …, chair 1, chair 2, …, and so on.
Step 204, respectively predicting the confidence of each instance in the first instance segmentation result.
Step 205, filtering the first instance segmentation result based on the confidence of each instance to obtain a second instance segmentation result.
Based on the above embodiment, hierarchical clustering is used: point sets are first formed by clustering at the point level, a preliminary first instance segmentation result is then formed by clustering the point sets at the set level, the confidence of each instance is predicted, and the point sets of low-confidence instances are filtered out of the first instance segmentation result; that is, low-confidence instance predictions are removed, so that an accurate second instance segmentation result is obtained. A finer and more accurate instance segmentation result can thus be generated, and the accuracy and precision of instance segmentation are improved compared with region-proposal-based instance segmentation methods.
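The confidence-based filtering of steps 204 and 205 can be sketched as below. This is a minimal illustration, not the disclosure's implementation: the candidate list, score function, and threshold value are hypothetical placeholders.

```python
# Hypothetical sketch of steps 204-205: keep only candidate instances
# whose predicted confidence reaches a threshold. The score function
# and the threshold value are illustrative placeholders.

def filter_instances(first_result, score_fn, threshold=0.5):
    """first_result: list of (instance_id, point_set) pairs."""
    return [(iid, pts) for iid, pts in first_result
            if score_fn(iid, pts) >= threshold]

# Toy first instance segmentation result with pretend confidences.
candidates = [("pedestrian_1", [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0)]),
              ("pedestrian_2", [(9.0, 9.0, 9.0)])]
scores = {"pedestrian_1": 0.9, "pedestrian_2": 0.2}
second_result = filter_instances(candidates, lambda iid, pts: scores[iid])
# second_result keeps only the high-confidence "pedestrian_1"
```

In the real method, the score would come from a learned confidence-prediction network rather than a lookup table; the filtering step itself is the same comparison against a threshold.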
Fig. 3 is a schematic flow chart of a point cloud instance segmentation method provided by another exemplary embodiment of the present disclosure. As shown in Fig. 3, based on the embodiment shown in Fig. 2, step 201 may include the following steps:
Step 2011, performing feature extraction on each point in the point cloud to obtain the features of each point.
Optionally, in some embodiments, a neural network such as a 3D Unet may be used to perform feature extraction on each point in the point cloud to obtain the features of each point. The 3D Unet may be a neural network with a Unet-like structure formed by stacked 3D sparse convolution layers.
Step 2012, respectively predicting the category of each point based on the features of each point to obtain the semantic label of each point.
Step 2013, respectively predicting the spatial vector from each point to the center of its instance based on the features of each point.
The instance center is the coordinate mean of all points in one instance. The spatial vector from a point to the center of its instance represents the offset from the point to its instance center.
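As a concrete illustration of the two definitions above (a toy sketch, not from the disclosure itself): the instance center is the mean of the instance's point coordinates, and a point's center shift vector is the difference between that mean and the point.

```python
# Toy illustration of "instance center" (coordinate mean of all points
# in one instance) and "center shift vector" (offset from a point to
# its instance center). Plain tuples stand in for real tensors.

def instance_center(pts):
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(3))

def center_shift(point, center):
    # The vector that moves `point` onto `center`.
    return tuple(c - p for p, c in zip(point, center))

pts = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 3.0, 0.0)]
center = instance_center(pts)            # (1.0, 1.0, 0.0)
offset = center_shift(pts[0], center)    # (1.0, 1.0, 0.0)
```

Adding a point's offset to the point recovers the instance center, which is exactly the property the point-level clustering in step 202 relies on.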
Optionally, in some embodiments, feature extraction may be performed on each point in the point cloud through a neural network, referred to here as a point-wise prediction network for ease of distinction, to obtain the features of each point; the category of each point is then predicted based on the features of each point to obtain the semantic label of each point and the spatial vector (center shift vector) from the point to the center of its instance.
Fig. 4 is a schematic flow chart of a point cloud instance segmentation method provided by still another exemplary embodiment of the present disclosure. As shown in Fig. 4, based on the embodiment shown in Fig. 3, step 202 may include the following steps:
step 2021, based on the space vectors from each point to the center of the instance to which the point belongs, the points are migrated to the center of the instance to which the point belongs, so that the points belonging to the same instance are closer in the three-dimensional space.
Step 2022, determining that the two points belong to the same set when the distance between the two points is smaller than a first preset threshold and the semantic labels of the two points are the same for any two points migrated to the center of the corresponding instance in the point cloud, and dividing the points migrated to the center of the corresponding instance in the point cloud into at least one point set.
The specific value of the first preset threshold can be set according to the actual prediction requirement, and can be adjusted in real time according to the requirement of the case prediction precision.
Alternatively, in some embodiments, each point may be migrated to the center of the corresponding instance by a neural network, referred to herein as a point aggregation module (point aggregation module) for easy differentiation, using a spatial vector from each point to the center of the corresponding instance. For any two points after the migration, if the distance between the two points after the migration is smaller than a first preset threshold and the semantic labels of the two points are the same, the two points belong to the same set, and based on the fact that the two points belong to the same set, the whole point cloud is divided into a point set (set), wherein each point set can be regarded as an example of preliminary prediction.
In three-dimensional space, the points of the same instance are adjacent. Based on the embodiment, the characteristic that points of the same instance are adjacent is used as a constraint condition, the points in the point cloud are clustered once based on the semantic labels and the central displacement vector, and a point set is formed by hierarchical clustering of the points so as to obtain an initial prediction instance.
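The point-level clustering of steps 2021 and 2022 can be sketched as below. This is a hedged toy version: it uses a brute-force pairwise check with union-find instead of a learned point aggregation module with spatial indexing, and the threshold and data are illustrative.

```python
# Minimal sketch of point-level clustering (steps 2021-2022), under
# simplifying assumptions: brute-force O(n^2) pairwise checks plus
# union-find, rather than a learned point aggregation module.
import math

def point_level_cluster(points, offsets, labels, dist_thresh):
    # Step 2021: migrate each point by its predicted center shift vector.
    shifted = [tuple(p[i] + o[i] for i in range(3))
               for p, o in zip(points, offsets)]
    parent = list(range(len(points)))

    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Step 2022: same set iff distance < threshold and labels match.
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if (labels[i] == labels[j]
                    and math.dist(shifted[i], shifted[j]) < dist_thresh):
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())      # each group is one preliminary instance

points = [(0.0, 0.0, 0.0), (0.2, 0.0, 0.0), (5.0, 5.0, 5.0), (5.2, 5.0, 5.0)]
offsets = [(0.0, 0.0, 0.0)] * 4       # pretend the predicted shifts are zero
labels = ["chair", "chair", "table", "table"]
point_sets = point_level_cluster(points, offsets, labels, dist_thresh=1.0)
# point_sets -> two point sets: indices [0, 1] and [2, 3]
```

A production implementation would replace the pairwise loop with a grid or k-d-tree neighbor query; the grouping rule itself (distance below the first preset threshold and identical semantic labels) is the same.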
Optionally, in some embodiments, in step 203, for any two point sets of the at least one point set that satisfy the preset condition, the two point sets may be merged when the distance between the two point sets is smaller than a second preset threshold and the semantic tags corresponding to the two point sets are the same.
The specific value of the second preset threshold can be set according to actual prediction requirements, and can be adjusted in real time according to the required instance prediction precision.
Optionally, in some embodiments, the point sets may be further clustered at the set level by a neural network, referred to herein as a set aggregation module for ease of differentiation, to form the preliminary first instance segmentation result.
Since the predicted space vector from a point to its instance center may not be completely accurate, point-level clustering cannot guarantee that all points of an instance are divided into one set. Most points, whose point-to-center space vectors are accurate, come together to form an incomplete instance prediction, which may be referred to as a primary instance (primary). However, a few points whose space vectors are poorly predicted are separated from the majority and form small instances, which may be referred to as fragments (fragments). The fragments are too small to be regarded as complete instances, but may be missing parts of primary instances. In this embodiment, considering that the number of point sets is large and that directly filtering point sets with a hard threshold may cause partial loss of primary instances, for any two point sets satisfying the preset condition, when the distance between the two point sets at the set level is smaller than the second preset threshold and the semantic labels corresponding to the two point sets are the same, the two point sets may be regarded as the primary instance and a fragment of the same instance and are merged, thereby achieving complete instance prediction.
Optionally, in other embodiments, before step 203, any two point sets that satisfy the preset condition may also be determined from the at least one point set, respectively. The two point sets that satisfy the preset condition may include, but are not limited to: in the two point sets, the number of points in one point set is greater than a first preset number, the number of points in the other point set is less than a second preset number, and the first preset number is greater than the second preset number.
Based on this embodiment, a point set whose number of points is greater than the first preset number can be regarded as the primary instance of an instance, and a point set whose number of points is less than the second preset number can be regarded as a fragment of an instance.
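A minimal sketch of the set-level merge under the preset condition follows. It assumes centroid distance is used as the set-to-set distance (the patent does not fix this metric), and the names `primary_min` and `fragment_max` for the first and second preset numbers are placeholders.

```python
import numpy as np

def merge_fragments(sets, labels, primary_min, fragment_max, dist_thresh):
    """Absorb small point sets (fragments) into nearby large point sets
    (primary instances) that carry the same semantic label.
    `sets` is a list of (N_i, 3) arrays of shifted point coordinates."""
    centroids = [s.mean(axis=0) for s in sets]
    merged_into = list(range(len(sets)))       # each set initially on its own
    for i, frag in enumerate(sets):
        if len(frag) >= fragment_max:          # not small enough to be a fragment
            continue
        for j, prim in enumerate(sets):
            if i == j or len(prim) <= primary_min:
                continue                       # candidate must be a primary instance
            if labels[i] != labels[j]:
                continue                       # must share a semantic label
            if np.linalg.norm(centroids[i] - centroids[j]) < dist_thresh:
                merged_into[i] = j             # fragment i absorbed into primary j
                break
    return merged_into
```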
Fig. 5 is a schematic flow chart of a point cloud instance segmentation method according to still another exemplary embodiment of the present disclosure. As shown in fig. 5, on the basis of any one of the embodiments shown in fig. 2 to fig. 4, step 204 may perform the following steps for each instance in the first instance segmentation result:
step 20411, feature extraction is performed on the point set of the instance to obtain features in the instance.
Step 20412, based on the in-instance features, a confidence level for the instance is determined.
Based on this embodiment, the confidence of an instance can be accurately determined through intra-instance feature extraction.
Fig. 6 is a schematic flow chart of a point cloud instance segmentation method according to still another exemplary embodiment of the present disclosure. As shown in fig. 6, on the basis of any one of the embodiments shown in fig. 2 to fig. 4, step 204 may perform the following steps for each instance in the first instance segmentation result:
step 20421, feature extraction is performed on the point set of the instance to obtain features in the instance.
At step 20422, the mask for the instance is predicted based on the intra-instance features.
And step 20423, based on the mask of the instance, filtering out background points which do not belong to the instance in the point set of the instance to obtain a foreground point set of the instance.
The foreground point of an instance is the point belonging to the instance, and the point not belonging to the instance is the background point of the instance.
Step 20424, determine the confidence of the instance based on the intra-instance features corresponding to the foreground point set of the instance.
The confidence of an instance represents the probability that the predicted instance is a real instance; the higher the confidence, the more likely the prediction corresponds to a real instance.
Based on this embodiment, the mask of an instance can be predicted from the intra-instance features, the background points of the instance can be filtered out, and the confidence of the instance is generated using only the intra-instance features corresponding to the foreground point set, so that a more accurate instance confidence can be obtained and the accuracy of the instance prediction result can be improved.
Optionally, in some embodiments, in step 205, the point sets of instances whose confidence is lower than a third preset threshold may be filtered out of the first instance segmentation result to obtain the second instance segmentation result.
The specific value of the third preset threshold can be set according to actual prediction requirements, and can be adjusted in real time according to the required instance prediction precision.
Based on this embodiment, filtering out the point sets of instances whose confidence is lower than the third preset threshold from the first instance segmentation result yields an accurate instance prediction result.
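The confidence-based filtering of step 205 amounts to a simple threshold pass; the following sketch uses assumed names for illustration only.

```python
def filter_instances(instances, confidences, conf_thresh):
    """Drop instances whose predicted confidence is below the threshold
    (the 'third preset threshold' in the text)."""
    return [inst for inst, conf in zip(instances, confidences)
            if conf >= conf_thresh]
```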
Alternatively, in some embodiments, the confidence of an instance may be predicted, and instances with low confidence filtered out, by a neural network referred to herein as an instance sub-network (i.e., an intra-instance prediction network) for ease of differentiation. For example, after feature extraction is performed on the point set of an instance to obtain intra-instance features, the mask of the instance can be predicted from those features through a multilayer perceptron (MLP) and a sigmoid function, and background points are filtered out. The confidence of the instance is then generated by another MLP and sigmoid function.
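A toy numpy sketch of the instance sub-network's two stages follows, with single linear layers standing in for the MLPs and hand-set weights; all names are hypothetical, and a trained network would learn these parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def instance_confidence(point_feats, mask_w, mask_b, conf_w, conf_b):
    """Stage 1: a per-point linear layer + sigmoid predicts a foreground
    mask and background points are filtered out. Stage 2: a second linear
    layer + sigmoid on the pooled foreground features yields the
    instance confidence."""
    # per-point mask scores in (0, 1); > 0.5 counts as foreground
    mask = sigmoid(point_feats @ mask_w + mask_b)
    fg = point_feats[mask > 0.5]
    if len(fg) == 0:                  # no foreground points -> zero confidence
        return mask, 0.0
    pooled = fg.mean(axis=0)          # aggregate the foreground features
    conf = sigmoid(pooled @ conf_w + conf_b)
    return mask, float(conf)
```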
Optionally, in this embodiment of the present disclosure, the instance sub-network may be obtained in advance by training on a plurality of training samples. Each training sample comprises an instance segmentation result corresponding to a point cloud sample.
Optionally, in some of these embodiments, the instance sub-network may be trained as follows: the instance segmentation result corresponding to each point cloud sample is input into the instance sub-network, which performs the operations of steps 20421 to 20424 in the embodiment shown in fig. 6 on each result to obtain a mask prediction result for each instance; the intersection over union (IoU) between each mask prediction result and the corresponding instance mask annotation in the point cloud sample is then obtained, and the instance sub-network is trained based on the IoU until a preset training completion condition is met, for example, the number of training iterations reaches a preset value. After training is completed, the confidence predicted by the instance sub-network represents the IoU between the predicted instance and the actual instance.
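The training target described above, the IoU between a predicted instance mask and its annotation, can be computed as follows; the handling of the empty-empty case is an assumption, since the text does not specify it.

```python
import numpy as np

def mask_iou(pred_mask, gt_mask):
    """Intersection over union between a binary predicted instance mask
    and its ground-truth annotation; the instance sub-network's confidence
    output is regressed toward this value during training."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = (pred | gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return (pred & gt).sum() / union
```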
Fig. 7(a) is a schematic process diagram of a point cloud instance segmentation method provided by an exemplary application embodiment of the present disclosure. Fig. 7(b) shows the change process of some points in the point cloud in the embodiment of fig. 7(a). Referring to fig. 7(a) and 7(b) together, the application embodiment includes the following steps:
Step 301, feature extraction is performed point by point on the point cloud through a 3D UNet in the prediction network to obtain the feature (point feature) of each point, and the features then enter two branches: one branch predicts the category of each point based on its feature to obtain the semantic label of each point, and the other branch predicts, based on the feature of each point, the space vector (center shift vector) from the point to the center of its corresponding instance.
Part 1 of fig. 7(b) shows some of the points in the point cloud. Dots of different colors belong to different categories, and the darkest black dots are background points.
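The two prediction branches of step 301 can be sketched as linear heads over per-point features; the 3D UNet backbone is omitted here, and the weights are random placeholders rather than trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_branch_heads(point_feats, w_sem, w_shift):
    """Two prediction branches over per-point features: a semantic head
    giving one class label per point, and a shift head giving a 3D offset
    toward the point's instance center."""
    sem_logits = point_feats @ w_sem       # (N, n_classes) class scores
    semantic_labels = sem_logits.argmax(axis=1)
    center_shift = point_feats @ w_shift   # (N, 3) predicted offsets
    return semantic_labels, center_shift
```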
Step 302, each point is migrated to the center of its corresponding instance by the point-level clustering module, based on the space vector from each point to that center, so that points belonging to the same instance become closer in three-dimensional space, as shown at 2 in fig. 7(b); the migrated points in the point cloud are then divided into at least one point set, as shown at 3 in fig. 7(b).
Step 303, through the set-level clustering module, when the distance between two point sets is smaller than the second preset threshold and the semantic labels corresponding to the two point sets are the same, the two point sets are merged; that is, the primary instance and the fragments of the same instance are merged so that the primary instance absorbs the surrounding fragments, thereby forming a complete instance, as shown at 4 in fig. 7(b).
Step 304, extracting features of the point set of the example, predicting a mask of the example based on the obtained features in the example, filtering background points in the point set of the example to obtain a foreground point set of the example, and determining the confidence of the example based on the features in the example corresponding to the foreground point set of the example.
Step 305, filtering the point set of the instance with the confidence level lower than the third preset threshold value to obtain an accurate second instance segmentation result (instance prediction).
Any one of the point cloud instance segmentation methods provided by the embodiments of the present disclosure may be performed by any suitable device with data processing capability, including but not limited to: terminal equipment, a server and the like. Alternatively, any of the point cloud instance segmentation methods provided by the embodiments of the present disclosure may be executed by a processor, for example, the processor may execute any of the point cloud instance segmentation methods mentioned by the embodiments of the present disclosure by calling corresponding instructions stored in a memory. And will not be described in detail below.
Exemplary devices
Fig. 8 is a structural diagram of a point cloud instance segmentation apparatus according to an exemplary embodiment of the present disclosure. The apparatus can be arranged in an electronic device such as a terminal device or a server, or on an object such as a vehicle, and executes the point cloud instance segmentation method of any embodiment of the present disclosure. As shown in fig. 8, the point cloud instance segmentation apparatus of this embodiment includes: a first prediction module 401, a first clustering module 402, a second clustering module 403, a second prediction module 404, and a filtering module 405. Wherein:
the first prediction module 401 is configured to predict semantic tags of each point in the point cloud of the target scene respectively.
The first clustering module 402 is configured to cluster each point in the point cloud based on the semantic label of each point predicted by the first prediction module 401, so as to obtain at least one point set. In the at least one point set, semantic labels of points in the same point set are the same.
The second clustering module 403 is configured to cluster the at least one point set based on the semantic label of the at least one point set obtained by clustering by the first clustering module 402, so as to obtain a first example segmentation result, where the first example segmentation result includes point sets respectively corresponding to at least one example.
And a second prediction module 404, configured to predict confidence of each instance in the first instance segmentation result obtained by clustering by the second clustering module 403, respectively.
The first filtering module 405 is configured to filter the first example segmentation result based on the confidence of each example predicted by the second prediction module 404, so as to obtain a second example segmentation result.
Based on this embodiment, a hierarchical clustering approach is used: point sets are formed by point-level clustering, the preliminary first instance segmentation result is formed by set-level clustering of the point sets, the confidence of each instance is predicted, and the point sets of instances with low confidence are filtered out of the first instance segmentation result, that is, low-confidence instance predictions are removed, so as to obtain an accurate second instance segmentation result. A finer and more accurate instance segmentation result can thus be generated, improving the accuracy and precision of the result compared with region-proposal-based instance segmentation methods.
Fig. 9 is a structural diagram of a point cloud instance segmentation apparatus according to another exemplary embodiment of the present disclosure. As shown in fig. 9, in some embodiments, the first prediction module 401 may include: the first feature extraction unit is used for respectively extracting features of each point in the point cloud to obtain the features of each point; the category prediction unit is used for predicting the categories of each point based on the characteristics of each point to obtain the semantic label of each point; and the vector prediction unit is used for predicting the space vector from each point to the center of the corresponding instance based on the characteristics of each point.
Additionally, referring back to fig. 9, in some embodiments, the first clustering module 402 can include: the migration unit is used for migrating each point to the center of the corresponding instance based on the space vector from each point to the center of the corresponding instance; and the set generation unit is used for respectively aiming at any two points which are migrated to the example centers in the point cloud, determining that the two points belong to the same set when the distance between the two points is smaller than a first preset threshold and the semantic labels of the two points are the same, and dividing the points migrated to the example centers in the point cloud into at least one point set.
Optionally, in some embodiments, the second clustering module 403 is specifically configured to: and respectively combining any two point sets meeting preset conditions in the at least one point set when the distance between the two point sets is smaller than a second preset threshold and the semantic labels corresponding to the two point sets are the same.
In addition, referring to fig. 9 again, in the point cloud example segmentation apparatus provided in another exemplary embodiment of the present disclosure, the apparatus may further include: a determining module 406, configured to determine any two point sets that meet a preset condition from the at least one point set respectively, so that the second clustering module 403 performs point set merging. The two point sets that satisfy the preset condition may include, but are not limited to: in the two point sets, the number of points in one point set is greater than a first preset number, the number of points in the other point set is less than a second preset number, and the first preset number is greater than the second preset number.
Additionally, referring back to fig. 9, in some embodiments, the second prediction module 404 may include: respectively aiming at each example in the first example segmentation result: the second feature extraction unit is used for extracting features of the point set of the example to obtain features in the example; a determining unit for determining a confidence level of the instance based on the features within the instance.
In addition, referring to fig. 9 again, in the point cloud example segmentation apparatus provided in another exemplary embodiment of the present disclosure, the apparatus may further include: and a third prediction module 407, configured to predict a mask of the instance based on the features in the instance extracted by the second feature extraction unit. And a second filtering module 408, configured to filter out background points, which do not belong to the instance, in the point set of the instance based on the mask of the instance, to obtain a foreground point set of the instance. Accordingly, in this embodiment, the determining unit is specifically configured to: and determining the confidence of the example based on the characteristics in the example corresponding to the foreground point set.
Optionally, in some embodiments, the first filtering module 405 is specifically configured to filter a point set of an instance, in the first instance segmentation result, of which the confidence is lower than a third preset threshold, to obtain a second instance segmentation result.
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present disclosure is described with reference to fig. 10. The electronic device may be either or both of the first device and the second device, or a stand-alone device separate from them, which stand-alone device may communicate with the first device and the second device to receive the acquired input signals therefrom.
FIG. 10 illustrates a block diagram of an electronic device in accordance with an embodiment of the disclosure. As shown in fig. 10, the electronic device includes one or more processors 501 and memory 502.
The processor 501 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device to perform desired functions.
Memory 502 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 501 to implement the point cloud instance segmentation methods of the various embodiments of the present disclosure described above and/or other desired functions. Various contents such as an input signal, a signal component, a noise component, etc. may also be stored in the computer-readable storage medium.
In one example, the electronic device may further include: an input device 503 and an output device 504, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, when the electronic device is a first device or a second device, the input device 503 may be the microphone or the microphone array described above for capturing the input signal of the sound source. When the electronic device is a stand-alone device, the input means 503 may be a communication network connector for receiving the acquired input signals from the first device and the second device.
The input device 503 may also include, for example, a keyboard, a mouse, and the like.
The output device 504 may output various information to the outside, including the determined distance information, direction information, and the like. The output devices 504 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, among others.
Of course, for simplicity, only some of the components of the electronic device relevant to the present disclosure are shown in fig. 10, omitting components such as buses, input/output interfaces, and the like. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in a point cloud instance segmentation method according to various embodiments of the present disclosure as described in the "exemplary methods" section of this specification above.
The computer program product may write program code for carrying out operations for embodiments of the present disclosure in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the point cloud instance segmentation method according to various embodiments of the present disclosure described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the term "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that in the devices, apparatuses, and methods of the present disclosure, each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. A point cloud instance segmentation method, comprising:
respectively predicting semantic labels of each point in the point cloud of the target scene;
clustering each point in the point cloud based on the semantic label of each point to obtain at least one point set; in the at least one point set, semantic labels of points in the same point set are the same;
clustering the at least one point set based on the semantic label of the at least one point set to obtain a first example segmentation result, wherein the first example segmentation result comprises point sets respectively corresponding to at least one example;
predicting confidence of each instance in the first instance segmentation result respectively;
and filtering the first example segmentation result based on the confidence degrees of the examples to obtain a second example segmentation result.
2. The method of claim 1, wherein the separately predicting semantic labels for points in a point cloud of a target scene comprises:
respectively extracting the features of each point in the point cloud to obtain the features of each point;
predicting the category of each point based on the characteristics of each point to obtain the semantic label of each point;
and respectively predicting the space vector of each point to the center of the corresponding instance based on the characteristics of each point.
3. The method of claim 2, wherein the clustering the points in the point cloud based on the semantic labels of the points to obtain at least one point set comprises:
respectively migrating the points to the centers of the instances to which they belong, based on the space vectors from the points to the centers of the instances to which they belong;
respectively, for any two points which have been migrated to the instance center in the point cloud, determining that the two points belong to the same set when the distance between the two points is smaller than a first preset threshold and the semantic labels of the two points are the same, and dividing the points migrated to the instance center in the point cloud into at least one point set.
4. The method according to claim 1, wherein the clustering the at least one point set based on the semantic label of the at least one point set to obtain a first instance segmentation result comprises:
and respectively combining any two point sets meeting preset conditions in the at least one point set when the distance between the two point sets is smaller than a second preset threshold and the semantic labels corresponding to the two point sets are the same.
5. The method of claim 4, further comprising:
respectively determining any two point sets meeting preset conditions from the at least one point set;
wherein, two point sets that satisfy the preset condition include: in the two point sets, the number of points in one point set is greater than a first preset number, the number of points in the other point set is less than a second preset number, and the first preset number is greater than the second preset number.
6. The method according to any of claims 1-5, wherein said separately predicting the confidence of each instance in the first instance segmentation result comprises:
respectively, for each instance in the first instance segmentation result:
extracting features of the point set of the example to obtain features in the example;
based on the in-instance features, a confidence level for the instance is determined.
7. The method of claim 6, wherein after the feature extraction of the point set of the instance to obtain the intra-instance features, further comprising:
predicting a mask for the instance based on the intra-instance features;
filtering out background points which do not belong to the instance in the point set of the instance based on the mask of the instance to obtain a foreground point set of the instance;
the determining the confidence level of the instance based on the intra-instance features includes:
and determining the confidence of the example based on the characteristics in the example corresponding to the foreground point set.
8. A point cloud instance segmentation apparatus, comprising:
a first prediction module, configured to respectively predict semantic labels of points in a point cloud of a target scene;
a first clustering module, configured to cluster the points in the point cloud based on the semantic labels of the points to obtain at least one point set, wherein, in the at least one point set, points in a same point set have the same semantic label;
a second clustering module, configured to cluster the at least one point set based on the semantic label of the at least one point set to obtain a first instance segmentation result, the first instance segmentation result comprising point sets respectively corresponding to at least one instance;
a second prediction module, configured to respectively predict a confidence of each instance in the first instance segmentation result;
and a first filtering module, configured to filter the first instance segmentation result based on the confidences of the instances to obtain a second instance segmentation result.
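The module pipeline of claim 8 can be sketched end to end as follows; `cluster_fn` and `confidence_fn` are hypothetical placeholders for the second clustering step and the confidence network, and the `threshold` value is arbitrary:

```python
from collections import defaultdict

def segment_instances(points, semantic_labels, cluster_fn, confidence_fn,
                      threshold=0.5):
    """Sketch of the four-module pipeline of claim 8."""
    # First clustering module: group points by predicted semantic label,
    # so every point set contains points of a single label.
    by_label = defaultdict(list)
    for point, label in zip(points, semantic_labels):
        by_label[label].append(point)
    # Second clustering module: split each per-label point set into
    # candidate instances (first instance segmentation result).
    first_result = []
    for point_set in by_label.values():
        first_result.extend(cluster_fn(point_set))
    # Second prediction + first filtering modules: score each candidate
    # instance and keep only the confident ones (second result).
    return [inst for inst in first_result if confidence_fn(inst) >= threshold]
```

The two-stage clustering (first by semantic label, then within each label) means instances of different classes can never be merged, which is the structural guarantee the apparatus claims rely on.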
9. A computer-readable storage medium storing a computer program for executing the point cloud instance segmentation method of any one of claims 1-7.
10. An electronic device, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to read the executable instructions from the memory and execute the instructions to implement the point cloud instance segmentation method of any one of claims 1-7.
CN202110887047.4A 2021-08-03 2021-08-03 Point cloud instance segmentation method and device, electronic equipment and storage medium Pending CN113591869A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110887047.4A CN113591869A (en) 2021-08-03 2021-08-03 Point cloud instance segmentation method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113591869A true CN113591869A (en) 2021-11-02

Family

ID=78254552

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110887047.4A Pending CN113591869A (en) 2021-08-03 2021-08-03 Point cloud instance segmentation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113591869A (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886272A (en) * 2019-02-25 2019-06-14 腾讯科技(深圳)有限公司 Point cloud segmentation method, apparatus, computer readable storage medium and computer equipment
CN110969642A (en) * 2019-12-19 2020-04-07 深圳云天励飞技术有限公司 Video filtering method and device, electronic equipment and storage medium
CN111190981A (en) * 2019-12-25 2020-05-22 中国科学院上海微系统与信息技术研究所 Method and device for constructing three-dimensional semantic map, electronic equipment and storage medium
CN111582054A (en) * 2020-04-17 2020-08-25 中联重科股份有限公司 Point cloud data processing method and device and obstacle detection method and device
CN112489212A (en) * 2020-12-07 2021-03-12 武汉大学 Intelligent three-dimensional mapping method for building based on multi-source remote sensing data
CN112802111A (en) * 2021-04-01 2021-05-14 中智行科技有限公司 Object model construction method and device
CN112883979A (en) * 2021-03-11 2021-06-01 先临三维科技股份有限公司 Three-dimensional instance segmentation method, device, equipment and computer-readable storage medium
CN112989942A (en) * 2021-02-09 2021-06-18 四川警察学院 Target instance segmentation method based on traffic monitoring video
WO2021134296A1 (en) * 2019-12-30 2021-07-08 深圳元戎启行科技有限公司 Obstacle detection method and apparatus, and computer device and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WU Xiangyang et al.: "Air-Ground Integrated Mapping Technology" (《空地一体化成图技术》), 30 November 2020, pages 101-102 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115512147A (en) * 2022-11-16 2022-12-23 北京亮道智能汽车技术有限公司 Semantic information based clustering method and device, electronic equipment and storage medium
CN115512147B (en) * 2022-11-16 2023-04-11 北京亮道智能汽车技术有限公司 Semantic information based clustering method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110363058B (en) Three-dimensional object localization for obstacle avoidance using one-shot convolutional neural networks
JP7167397B2 (en) Method and apparatus for processing point cloud data
US10009579B2 (en) Method and system for counting people using depth sensor
Hoang Ngan Le et al. Robust hand detection and classification in vehicles and in the wild
EP2562688B1 (en) Method of Separating Object in Three Dimensional Point Cloud
WO2020119661A1 (en) Target detection method and device and pedestrian detection method and system
KR20190069457A Image-based vehicle loss evaluation method, device and system
CN108734058B (en) Obstacle type identification method, device, equipment and storage medium
US20050094879A1 (en) Method for visual-based recognition of an object
KR102374776B1 (en) System and method for re-identifying target object based on location information of cctv and movement information of object
CN110781768A (en) Target object detection method and device, electronic device and medium
CN111241989A (en) Image recognition method and device and electronic equipment
KR102592551B1 (en) Object recognition processing apparatus and method for ar device
John et al. Real-time road surface and semantic lane estimation using deep features
KR20210043628A (en) Obstacle detection method, intelligent driving control method, device, medium, and device
CN113378760A (en) Training target detection model and method and device for detecting target
Han et al. Parking Space Recognition for Autonomous Valet Parking Using Height and Salient‐Line Probability Maps
CN116758518B (en) Environment sensing method, computer device, computer-readable storage medium and vehicle
CN110706238B (en) Method and device for segmenting point cloud data, storage medium and electronic equipment
Mewada et al. Automatic room information retrieval and classification from floor plan using linear regression model
CN114972758A (en) Instance segmentation method based on point cloud weak supervision
CN113591869A (en) Point cloud instance segmentation method and device, electronic equipment and storage medium
CN115131634A (en) Image recognition method, device, equipment, storage medium and computer program product
CN117475253A (en) Model training method and device, electronic equipment and storage medium
CN115136205A (en) Unknown object recognition for robotic devices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination