WO2020233069A1 - Point cloud data processing method and apparatus, electronic device, and storage medium - Google Patents
Point cloud data processing method and apparatus, electronic device, and storage medium
- Publication number
- WO2020233069A1 (PCT/CN2019/121776)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- point cloud
- discrete convolution
- data
- cloud data
- weight
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/191—Design or setup of recognition systems or techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
- G06V30/19173—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/26—Techniques for post-processing, e.g. correcting the recognition result
- G06V30/262—Techniques for post-processing, e.g. correcting the recognition result using context analysis, e.g. lexical, syntactic or semantic context
- G06V30/274—Syntactic or semantic context, e.g. balancing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
Definitions
- This application relates to the field of computer application technology, and in particular to a point cloud data processing method, device, electronic equipment, and computer-readable storage medium.
- Point cloud recognition is an important issue in the field of computer vision and deep learning. By learning point cloud data, the three-dimensional structure of an object can be recognized.
- The embodiments of the present application provide a point cloud data processing method, device, and electronic equipment.
- An embodiment of the present application provides a point cloud data processing method; the method includes:
- obtaining the point cloud data in a target scene and the weight vector of a first discrete convolution kernel; performing interpolation processing on the point cloud data based on the point cloud data and the weight vector of the first discrete convolution kernel to obtain first weight data, the first weight data characterizing the weight assigned by the point cloud data to the corresponding position of the weight vector of the first discrete convolution kernel; performing first discrete convolution processing on the point cloud data based on the first weight data and the weight vector of the first discrete convolution kernel to obtain a first discrete convolution result; and obtaining, based on the first discrete convolution result, the spatial structure feature of at least part of the point cloud data.
- An embodiment of the present application also provides a point cloud data processing device; the device includes: an acquisition unit, an interpolation processing unit, and a feature acquisition unit; wherein,
- the acquisition unit is configured to obtain the point cloud data in the target scene and the weight vector of the first discrete convolution kernel;
- the interpolation processing unit is configured to perform interpolation processing on the point cloud data based on the point cloud data and the weight vector of the first discrete convolution kernel to obtain first weight data;
- the first weight data represents the weight assigned by the point cloud data to the corresponding position of the weight vector of the first discrete convolution kernel;
- the feature acquisition unit is configured to perform first discrete convolution processing on the point cloud data based on the first weight data and the weight vector of the first discrete convolution kernel to obtain a first discrete convolution result;
- the feature acquisition unit is further configured to obtain, based on the first discrete convolution result, the spatial structure feature of at least part of the point cloud data.
- An embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the method described in the embodiments of the present application are implemented.
- An embodiment of the present application also provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; the processor implements the steps of the method described in the embodiments of the present application when executing the program.
- An embodiment of the present application further provides a computer program product; the computer program product includes computer-executable instructions, and after the computer-executable instructions are executed, any point cloud data processing method provided in the embodiments of the present application can be implemented.
- The point cloud data processing method, device, electronic device, and computer-readable storage medium of the embodiments of the present application include: obtaining point cloud data in a target scene and the weight vector of a first discrete convolution kernel; performing interpolation processing on the point cloud data based on the point cloud data and the weight vector of the first discrete convolution kernel to obtain first weight data, the first weight data being used to establish the association between the point cloud data and the weight vector of the first discrete convolution kernel; and performing discrete convolution processing on the point cloud data based on the first weight data and the weight vector of the first discrete convolution kernel to obtain the spatial structure features of at least part of the point cloud data.
- In this way, the association between the point cloud data and the first discrete convolution kernel is established; that is, the weight assigned by the point cloud data to the corresponding position of the weight vector of the first discrete convolution kernel is obtained, so as to align the discrete point cloud data with the weight vector of the discrete convolution kernel and explicitly define the geometric relationship between the point cloud data and the first discrete convolution kernel, so that the spatial structure characteristics of the point cloud data can be better captured during discrete convolution processing.
- FIG. 1 is a first schematic flowchart of a point cloud data processing method according to an embodiment of the application;
- FIGS. 2a and 2b are schematic diagrams of interpolation processing in a point cloud data processing method according to an embodiment of the application;
- FIG. 3 is a second schematic flowchart of a point cloud data processing method according to an embodiment of the application;
- FIG. 4 is a schematic structural diagram of a first network in a point cloud data processing method according to an embodiment of the application;
- FIG. 5 is a third schematic flowchart of a point cloud data processing method according to an embodiment of the application;
- FIG. 6 is a schematic structural diagram of a second network in a point cloud data processing method according to an embodiment of the application;
- FIG. 7 is a first schematic diagram of the composition structure of a point cloud data processing device according to an embodiment of the application;
- FIG. 8 is a second schematic diagram of the composition structure of a point cloud data processing device according to an embodiment of the application;
- FIG. 9 is a third schematic diagram of the composition structure of a point cloud data processing device according to an embodiment of the application;
- FIG. 10 is a schematic diagram of the composition structure of an electronic device according to an embodiment of the application.
- Fig. 1 is a first schematic flowchart of a point cloud data processing method according to an embodiment of the application; as shown in Fig. 1, the method includes:
- Step 101: Obtain the point cloud data in the target scene and the weight vector of the first discrete convolution kernel;
- Step 102: Perform interpolation processing on the point cloud data based on the point cloud data and the weight vector of the first discrete convolution kernel to obtain first weight data;
- the first weight data represents the weight assigned by the point cloud data to the corresponding position of the weight vector of the first discrete convolution kernel;
- Step 103: Perform first discrete convolution processing on the point cloud data based on the first weight data and the weight vector of the first discrete convolution kernel to obtain a first discrete convolution result;
- Step 104: Obtain the spatial structure feature of at least part of the point cloud data based on the first discrete convolution result.
- In this embodiment, the point cloud data refers to a set of point data on the outer surface of an object in the target scene obtained by a measuring device, that is, a massive set of points representing the surface characteristics of the object in the target scene.
- the point cloud data includes three-dimensional coordinate data of each point.
- the point cloud data can be represented by an N*3 matrix, where N represents the number of points in the point cloud data, and the three-dimensional coordinates of each point can be represented by a 1*3 feature vector.
- In some embodiments, the point cloud data includes not only the three-dimensional coordinate data of each point but also color information, such as red (Red), green (Green), and blue (Blue) color data (referred to as RGB data). In this case, the point cloud data can be represented by an N*6 matrix, and the data of each point by a 1*6 vector, where three dimensions represent the three-dimensional coordinates of the point and the remaining three dimensions represent the data of the three colors.
- In some embodiments, the point cloud data further includes description information.
- the description information may be represented by the characteristics of each point in the point cloud data, and the characteristics of each point may include features such as normal direction and curvature.
- the description information can also be represented by feature vectors containing the features of the point cloud data. It can be understood that the point cloud data includes position information and feature vectors corresponding to the point cloud data.
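As an illustrative sketch (using NumPy arrays; not part of the patent), the N*3 and N*6 matrix representations described above can be written as:

```python
import numpy as np

# Illustrative sketch: the N*3 and N*6 matrix representations of
# point cloud data, with randomly generated values standing in for
# measured points.
rng = np.random.default_rng(0)
N = 5

# N*3 matrix: each row is the 1*3 three-dimensional coordinate vector of one point.
coords = rng.uniform(0.0, 1.0, size=(N, 3))

# N*6 matrix: three dimensions for the XYZ coordinates, three for RGB color data.
rgb = rng.uniform(0.0, 1.0, size=(N, 3))
points_with_color = np.hstack([coords, rgb])

print(coords.shape)             # (5, 3)
print(points_with_color.shape)  # (5, 6)
```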
- In this embodiment, the weight vector of the discrete convolution kernel (including the weight vector of the first discrete convolution kernel in this embodiment, and the weight vectors of the second and third discrete convolution kernels in subsequent embodiments) is the weight vector of a three-dimensional discrete convolution kernel.
- the three-dimensional discrete convolution kernel corresponds to a cube area in the process of discrete convolution processing, and the eight vertices of the cube area correspond to the weight vectors of the discrete convolution kernel (the first discrete convolution kernel in this embodiment).
- It can be understood that the weight vector of the discrete convolution kernel in this embodiment does not refer to one weight vector, but to at least eight weight vectors; the eight weight vectors may be the weight vectors of the same discrete convolution kernel, or may be the weight vectors of multiple different discrete convolution kernels.
- the weight vector of the discrete convolution kernel corresponds to convolution parameters; the convolution parameters may include the size and side length of the convolution kernel, wherein the size and side length of the convolution kernel determine the size range of the convolution operation, that is, the size or side length of the cube area.
- In this embodiment, the point cloud data is first interpolated through the technical solution described in step 102, so as to establish the association between the point cloud data and the weight vector of the first discrete convolution kernel and align the positions of the point cloud data with the weight vector of the first discrete convolution kernel, so that the spatial structure characteristics of the point cloud data can be better captured during discrete convolution processing.
- In some embodiments, performing interpolation processing on the point cloud data based on the point cloud data and the weight vector of the first discrete convolution kernel to obtain the first weight data includes: obtaining first weight data according to a preset interpolation processing method based on the point cloud data and the weight vector of the first discrete convolution kernel, the first weight data representing the weight assigned by the point cloud data to the corresponding position of the weight vector of the first discrete convolution kernel that meets a preset condition; wherein the point cloud data is located in a specific geometric shape area enclosed by the weight vector of the first discrete convolution kernel that meets the preset condition.
- the point cloud data can be interpolated through different preset interpolation processing methods.
- the interpolation processing can be implemented by interpolation functions, that is, the point cloud data can be interpolated by different interpolation functions.
- the interpolation processing method may be a trilinear interpolation processing method or a Gaussian interpolation processing method, or the point cloud data can be interpolated through a trilinear interpolation function or a Gaussian function.
- In this embodiment, the weight vector of the discrete convolution kernel refers specifically to the weight vector of the first discrete convolution kernel, and the point cloud data refers specifically to the coordinates of the point cloud data.
- For different interpolation processing methods, the weight vectors of the first discrete convolution kernel that correspond to the same point cloud data and satisfy the preset condition are different, and the specific geometric shape regions are also different.
- the weight vector of the first discrete convolution kernel that meets the preset condition is the weight vector of the discrete convolution kernel that encloses the specific geometric shape area where the point cloud data is located.
- If the trilinear interpolation processing method is adopted, the specific geometric shape area is the cube area corresponding to the discrete convolution kernel, that is, the cube area formed by eight weight vectors (in this embodiment, the weight vectors of the first discrete convolution kernel).
- the eight vertices of the cube area correspond to eight weight vectors, and the eight weight vectors corresponding to the eight vertices of each cube area may be of the same discrete convolution kernel, or may be of multiple different discrete convolution kernels.
- the weight vector of the first discrete convolution kernel that meets the preset condition is the weight vector corresponding to the eight vertices of the cube area where the point is located, as shown in FIG. 2a.
- the point cloud data is processed through the trilinear interpolation method, and the obtained first weight data represents the weight assigned by the point cloud data to the corresponding position of each of the eight weight vectors corresponding to the cube area where the point cloud data is located.
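The trilinear assignment described above can be sketched as follows; the unit-cube setup and vertex indexing are illustrative assumptions, not the patent's notation:

```python
import numpy as np

def trilinear_weights(p, cube_min, cube_size):
    """Weights assigning point p to the 8 vertices of the cube that
    encloses it (the first weight data under trilinear interpolation).
    Sketch only; the (0/1)-tuple vertex indexing is illustrative."""
    # Normalized offsets of p inside the cube, each component in [0, 1].
    t = (np.asarray(p, dtype=float) - cube_min) / cube_size
    weights = {}
    for corner in np.ndindex(2, 2, 2):  # the 8 cube vertices
        c = np.array(corner)
        # Product over the 3 axes: t where the corner coordinate is 1,
        # (1 - t) where it is 0, so nearer vertices get larger weights.
        weights[corner] = np.prod(np.where(c == 1, t, 1.0 - t))
    return weights

w = trilinear_weights([0.25, 0.5, 0.75], cube_min=np.zeros(3), cube_size=1.0)
print(round(sum(w.values()), 6))  # 1.0 -- the 8 vertex weights sum to one
```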
- If the Gaussian interpolation processing method is adopted, the specific geometric shape area is a spherical region with the weight vector of the discrete convolution kernel (in this embodiment, the weight vector of the first discrete convolution kernel) as the center of the sphere and a predetermined length as the radius; wherein the radii of the spherical regions corresponding to the weight vectors of different discrete convolution kernels can be the same or different. It can be understood that, in practical applications, the number of spherical regions where the point cloud data is located can be one, two or more, or zero, as shown in Figure 2b.
- the point cloud data is processed by Gaussian interpolation, and the obtained first weight data represents the weight assigned by the point cloud data to the center of the sphere of the spherical region where the point cloud data is located (that is, the weight vector of a certain first discrete convolution kernel).
- It should be noted that a point cloud data can be associated with the eight weight vectors of one discrete convolution kernel, as in the scene shown in Figure 2a; it can also be associated with partial weight vectors of a discrete convolution kernel (for example, one weight vector of a first discrete convolution kernel), as in the scene shown in Figure 2b; it can also be associated with partial weight vectors of each of multiple discrete convolution kernels, for example when the radius of each spherical region is so large that the point cloud data lies in spherical regions corresponding to the weight vectors of multiple different discrete convolution kernels.
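A minimal sketch of the Gaussian-interpolation case, assuming a standard Gaussian of the point-to-center distance as the interpolation weight (the exact form and the `sigma` value are illustrative assumptions, not fixed by the patent):

```python
import numpy as np

def gaussian_weights(p, kernel_positions, radius, sigma=0.5):
    """First weight data under Gaussian interpolation: the point is
    assigned a weight for each kernel weight-vector position whose
    spherical region (given radius) contains it. Gaussian form and
    sigma are illustrative assumptions."""
    p = np.asarray(p, dtype=float)
    weights = {}
    for i, c in enumerate(np.asarray(kernel_positions, dtype=float)):
        d = np.linalg.norm(p - c)
        if d <= radius:  # zero, one, or several spheres may contain p
            weights[i] = np.exp(-d**2 / (2 * sigma**2))
    return weights

# A point at one sphere center gets weight 1 there and no weight for a
# center whose sphere does not contain it.
w = gaussian_weights([0.0, 0.0, 0.0],
                     [[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]], radius=1.0)
print(w)  # {0: 1.0}
```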
- discrete convolution processing refers to a processing manner in which two discrete sequences are multiplied and added in pairs according to an agreed rule.
- In this embodiment, performing discrete convolution processing on the point cloud data based on the first weight data and the weight vector of the first discrete convolution kernel is equivalent to a weighted discrete convolution processing method; that is, for each pairwise multiplication of the related sequences, the multiplication result is further multiplied by the first weight data.
- Specifically, the weight vector of the first discrete convolution kernel and the feature vector of the point cloud data are multiplied in pairs, the result of each pairwise multiplication is multiplied by the first weight data, and the results are then added.
- In some embodiments, the method further includes: performing normalization processing on the first discrete convolution result based on a normalization parameter; the normalization parameter is determined according to the amount of point cloud data in the specific geometric shape area where the point cloud data is located. As an example, if the trilinear interpolation processing method is adopted and the number of point cloud data in a certain cube area as shown in Figure 2a is 4, then after the discrete convolution processing of each of the 4 point cloud data obtains the first discrete convolution result, the first discrete convolution result is normalized based on the value 4.
- As another example, if the number of point cloud data in a spherical area as shown in Figure 2b is 2, then after the discrete convolution processing of each of the 2 point cloud data obtains the first discrete convolution result, the first discrete convolution result is normalized based on the value 2.
- The normalized first discrete convolution result can be expressed as:
- F_out(p̂) = Σ_{p'} (1/N_{p'}) Σ_{p_δ} T(p_δ, p') · W(p') · F(p_δ)
- where F_out(p̂) represents the output discrete convolution result after normalization processing (in this embodiment, the first discrete convolution result after normalization processing); p̂ represents the output point cloud position; N_{p'} represents the number of point cloud data in the specific geometric shape area; p' represents the position of the weight vector of the discrete convolution kernel (in this embodiment, the weight vector of the first discrete convolution kernel); p_δ represents the position corresponding to the point cloud data; T(p_δ, p') represents the weight data (in this embodiment, the first weight data) determined by the interpolation function T based on the position corresponding to the weight vector of the discrete convolution kernel and the position corresponding to the point cloud data; W(p') represents the weight vector of the discrete convolution kernel (in this embodiment, the weight vector of the first discrete convolution kernel); and F(p_δ) represents the feature vector of the point cloud data in the specific geometric area.
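Putting the pieces together, the following is a hedged sketch of the normalized weighted discrete convolution above, using a Gaussian interpolation function T and a scalar output for simplicity; all names, the Gaussian form, and `sigma` are illustrative assumptions, not the patent's actual implementation:

```python
import numpy as np

def interp_discrete_conv(out_pos, kernel_offsets, kernel_weights,
                         points, feats, radius, sigma=0.5):
    """Sketch of the formula above: for each kernel weight vector at
    position p' = out_pos + offset, sum T(p_delta, p') * W(p') * F(p_delta)
    over the points p_delta inside its spherical region, and divide by
    the number N_p' of such points."""
    out = 0.0
    for off, W in zip(kernel_offsets, kernel_weights):
        p_prime = np.asarray(out_pos, dtype=float) + off
        d = np.linalg.norm(points - p_prime, axis=1)
        inside = d <= radius
        n = inside.sum()
        if n == 0:
            continue
        T = np.exp(-d[inside] ** 2 / (2 * sigma ** 2))  # first weight data
        # Pairwise multiply the kernel weight vector with point features,
        # scale by T, sum, then normalize by the point count N_p'.
        out += (T[:, None] * (feats[inside] * W)).sum() / n
    return out

out = interp_discrete_conv(
    out_pos=np.zeros(3),
    kernel_offsets=[np.zeros(3)],
    kernel_weights=[np.array([3.0])],
    points=np.array([[0.0, 0.0, 0.0]]),
    feats=np.array([[2.0]]),
    radius=1.0,
)
print(out)  # 6.0: T = 1 at the kernel position, (2 * 3) / N_p' with N_p' = 1
```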
- In step 103 of this embodiment, the point cloud data is subjected to the first discrete convolution processing based on the first weight data and the weight vector of the first discrete convolution kernel; that is, the point cloud data is assigned to the weight vector of the first discrete convolution kernel that meets the preset condition, and the point cloud data is discretely convolved through that weight vector to obtain the feature vector that characterizes the spatial structure of the point cloud data, namely the first discrete convolution result.
- In some embodiments, a neural network can recognize the spatial structure characteristics of the point cloud data and then determine the category of the object in the target scene, such as vehicle, person, etc.; through this, the neural network can directly output the category of the object in the target scene.
- the spatial structure feature of at least one point data in the point cloud data can also be identified through the neural network to determine the semantic information of the at least one point data in the point cloud data.
- the semantic information of the point data can indicate the category of the point data, and the category of the point data indicates the object to which the point data belongs.
- For example, if the target scene includes multiple objects such as people and vehicles, it can be determined through the semantic information of the point data whether the object corresponding to the point data in the point cloud data is a person or a vehicle; that is, all the point data corresponding to the person and all the point data corresponding to the vehicle can be identified through the semantic information of the point data.
- In step 104 of this embodiment, the purpose of performing the first discrete convolution processing on the point cloud data is to enlarge the difference between each point data in the point cloud data and other point data, so as to obtain the spatial structure features of at least part of the point cloud data; the spatial structure feature represents the feature of the point cloud data in a three-dimensional space scene, and the feature of the point cloud data may include normal direction, curvature, and the like. Determining the spatial structure characteristics of at least part of the point cloud data, specifically based on the normal direction and curvature of the point cloud data combined with the location of the point cloud data, provides a basis for subsequently determining the object in the target scene and the category of the object, or determining the semantic information of at least one point data in the point cloud data.
- the technical solution of this embodiment is suitable for fields such as virtual reality, augmented reality, medical treatment, aviation, intelligent driving, and robotics.
- For example, when the point cloud data is recognized by the processing method in this embodiment, the object to which each point data in the point cloud data belongs can be determined, thereby realizing semantic separation of each point data; or the classification of objects in the scene corresponding to the point cloud data can be determined, so as to identify whether the scene in front of a driving vehicle includes other vehicles or pedestrians, which provides basic data for subsequent operations performed by the driving vehicle.
- In this way, the association between the point cloud data and the first discrete convolution kernel is established; that is, the weight assigned by the point cloud data to the corresponding position of the weight vector of the first discrete convolution kernel is obtained, so as to align the discrete point cloud data with the weight vector of the discrete convolution kernel and explicitly define the geometric relationship between the point cloud data and the first discrete convolution kernel, so that the spatial structure characteristics of the point cloud data can be better captured during discrete convolution processing.
- Fig. 3 is a second schematic flowchart of the point cloud data processing method according to an embodiment of the application; as shown in Fig. 3, the method includes:
- Step 201: Obtain the point cloud data in the target scene and the weight vector of the first discrete convolution kernel;
- Step 202: Perform interpolation processing on the point cloud data based on the point cloud data and the weight vector of the first discrete convolution kernel to obtain first weight data;
- the first weight data represents the weight assigned by the point cloud data to the corresponding position of the weight vector of the first discrete convolution kernel; wherein the weight vectors of the first discrete convolution kernel are n groups, the first weight data are n groups, and n is an integer greater than or equal to 2;
- Step 203: Perform the k-th first discrete convolution processing on the point cloud data with the weight vectors of the k-th group of first discrete convolution kernels based on the k-th group of first weight data and the k-th group of first convolution parameters, to obtain the k-th first discrete convolution result; the k-th group of first convolution parameters corresponds to the size range of the k-th first discrete convolution processing; k is an integer greater than or equal to 1 and less than or equal to n;
- Step 204: Determine the spatial structure feature of the point cloud data based on the n first discrete convolution results.
- For step 201 to step 202 in this embodiment, please refer to the detailed description of step 101 to step 102 in the foregoing embodiment, which will not be repeated here; the discrete convolution processing in step 203 can likewise refer to the detailed description of step 103 in the foregoing embodiment.
- In this embodiment, the weight vectors of the first discrete convolution kernel are n groups; performing interpolation processing on the point cloud data based on the point cloud data and the weight vector of the first discrete convolution kernel to obtain first weight data then includes: performing interpolation processing on the point cloud data based on the point cloud data and the weight vectors of the k-th group of first discrete convolution kernels to obtain the k-th group of first weight data; k is an integer greater than or equal to 1 and less than or equal to n; n is an integer greater than or equal to 2.
- That is, the weight vectors of the first discrete convolution kernel may have n groups; the point cloud data and the weight vectors of the k-th group of first discrete convolution kernels among the n groups are input to the interpolation function to obtain the k-th group of first weight data. In other words, by respectively inputting the point cloud data and the n groups of weight vectors of the first discrete convolution kernels into the interpolation function, n groups of first weight data can be obtained.
- In this embodiment, each three-dimensional discrete convolution kernel corresponds to a cube area during the discrete convolution process, and the eight vertices of the cube area correspond to eight weight vectors (denoted as the weight vectors of the first discrete convolution kernel).
- each three-dimensional discrete convolution kernel corresponds to a convolution parameter; that is, the weight vectors of the first discrete convolution kernel corresponding to the three-dimensional discrete convolution kernel correspond to a convolution parameter.
- the convolution parameter may include the size and side length of the convolution kernel; wherein the size and side length of the convolution kernel determine the size range of the convolution operation, that is, the size or side length of the cube area.
- In this embodiment, the k-th group of first weight data and the k-th group of first convolution parameters are used to perform the k-th first discrete convolution processing on the point cloud data with the weight vectors of the k-th group of first discrete convolution kernels, to obtain the k-th first discrete convolution result. For the specific first discrete convolution processing process, refer to the description in the foregoing embodiment, which will not be repeated here.
- the interpolation processing and discrete convolution processing in this embodiment can be implemented through the interpolated discrete convolution layer in the network. It can be understood that, in this embodiment, interpolation processing and discrete convolution processing are respectively performed on the same point cloud data through n interpolation discrete convolution layers, thereby obtaining n first discrete convolution results.
- In this embodiment, the k-th group of first convolution parameters corresponds to the size range of the k-th first discrete convolution processing; that is, the size ranges of the discrete convolution processing corresponding to at least some of the n groups of first convolution parameters are different. It can be understood that the larger the first convolution parameter, the larger the size range of the discrete convolution processing and the larger the receptive field; correspondingly, the smaller the first convolution parameter, the smaller the size range of the discrete convolution processing and the smaller the receptive field.
- In practical applications, the point cloud data can be discretely convolved using the weight vectors of a group of first discrete convolution kernels corresponding to smaller first convolution parameters to obtain the fine spatial structure characteristics of the target object surface; correspondingly, the weight vectors of a group of first discrete convolution kernels corresponding to larger first convolution parameters can be used to perform discrete convolution processing on the point cloud data to obtain the spatial structure characteristics of the background.
- the network including n interpolated discrete convolution layers in this embodiment can perform interpolation processing and discrete convolution processing on the point cloud data through the weight vectors of the k-th group of first discrete convolution kernels among the n groups of first discrete convolution kernels and the corresponding k-th group of first convolution parameters.
- the network is a neural network with multiple receptive fields, which can capture both the fine spatial structure features of the surface in the point cloud data and the spatial structure features of the background information. This facilitates determining the category of the point cloud data, that is, the category of the object in the target scene (i.e., the classification task), and can improve the accuracy of the classification task.
- the point cloud data is subjected to one round of interpolation processing and discrete convolution processing in a parallel manner based on the weight vectors of the n groups of first discrete convolution kernels and the n groups of first convolution parameters, and the spatial structure features of the point cloud data are determined based on the n first discrete convolution results obtained.
- multiple rounds of interpolation processing and discrete convolution processing may be performed in sequence; in each round, interpolation processing and discrete convolution processing may be performed in parallel based on the weight vectors of multiple groups of first discrete convolution kernels and multiple groups of first convolution parameters.
- the determining the spatial structure features of the point cloud data based on the n first discrete convolution results includes: performing interpolation processing on the first processed data based on the first processed data and the weight vector of the second discrete convolution kernel to obtain second weight data, where the second weight data represents the weight assigned by the first processed data to the corresponding position of the weight vector of the second discrete convolution kernel; wherein the first processed data is determined according to the result of the previous discrete convolution processing, and in the case where the result of the previous discrete convolution processing is the n first discrete convolution results, the first processed data is determined according to the n first discrete convolution results; performing second discrete convolution processing on the first processed data based on the second weight data and the weight vector of the second discrete convolution kernel to obtain a second discrete convolution result; and obtaining the spatial structure features of the point cloud data based on the second discrete convolution result.
- n first discrete convolution results are integrated to obtain first processed data.
- the data of the corresponding channel in each of the n first discrete convolution results may be weighted and summed to obtain the first processed data.
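- the channel-wise weighted summation above can be sketched as follows. This is an illustration of the integration step on toy data; the actual branch weights are not specified here, so equal weights are assumed by default:

```python
import numpy as np

def integrate_results(results, weights=None):
    """Integrate n discrete-convolution results of identical shape (P, C)
    by a weighted sum over the data of corresponding channels."""
    stacked = np.stack(results, axis=0)            # (n, P, C)
    n = stacked.shape[0]
    if weights is None:
        weights = np.full(n, 1.0 / n)              # assumption: plain average
    weights = np.asarray(weights).reshape(n, 1, 1)
    return (weights * stacked).sum(axis=0)         # (P, C) first processed data

r1 = np.ones((4, 8))
r2 = 3 * np.ones((4, 8))
fused = integrate_results([r1, r2])
# with equal weights this is the element-wise mean, so every entry is 2.0
```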
- the specific implementation manners of the interpolation processing and the discrete convolution processing are the same as the foregoing embodiments, and will not be repeated here.
- the first processed data may be determined according to the result of the previous discrete convolution processing, and the method for determining the first processed data is similar to the foregoing implementation manner, and will not be repeated here.
- the weight vectors of the second discrete convolution kernel are in l groups, the second weight data is in l groups, and l is an integer greater than or equal to 2; the performing discrete convolution processing on the first processed data again based on the second weight data and the weight vector of the second discrete convolution kernel includes: performing the m-th second discrete convolution processing on the weight vectors of the m-th group of second discrete convolution kernels and the first processed data based on the m-th group of second weight data and the m-th group of second convolution parameters to obtain the m-th second discrete convolution result; the m-th group of second convolution parameters corresponds to the size range of the m-th discrete convolution processing; m is an integer greater than or equal to 1 and less than or equal to l; the obtaining the spatial structure features of the point cloud data based on the second discrete convolution result includes: determining the spatial structure features of the point cloud data based on the l second discrete convolution results.
- the point cloud data is first interpolated separately with respect to the weight vectors of the k-th group of first discrete convolution kernels among the n groups of first discrete convolution kernels, and subjected to discrete convolution processing based on the weight vectors of the k-th group of first discrete convolution kernels and the k-th group of first convolution parameters, so that n first discrete convolution results are obtained; the n first discrete convolution results are then integrated into the first processed data, which is interpolated with respect to the weight vectors of the m-th group of second discrete convolution kernels among the l groups of second discrete convolution kernels and subjected to discrete convolution processing based on the weight vectors of the m-th group of second discrete convolution kernels and the m-th group of second convolution parameters.
- the point cloud data in this embodiment thus undergoes a processing flow of interpolation, discrete convolution, interpolation, and discrete convolution, and in each round of interpolation processing and discrete convolution processing, the point cloud data is processed through multiple parallel paths.
- the number of loop processing can be determined based on actual conditions, for example, it can be three times.
- each group of interpolated convolution layers can perform interpolation processing and discrete convolution processing on the input data; that is, each group of interpolated convolution layers can perform the interpolation processing and discrete convolution processing procedures in this embodiment.
- each interpolated convolution block includes three interpolated convolution layers: a 1*1*1 interpolated convolution layer (InterpConv), a 3*3*3 interpolated convolution layer, and a 1*1*1 interpolated convolution layer; among them, the 1*1*1 interpolated convolution layers are used to adjust the number of channels.
- the convolution parameters corresponding to the 3*3*3 interpolated convolution layers in different interpolated convolution blocks are different; for example, the convolution parameter corresponding to the 3*3*3 interpolated convolution layer in the first interpolated convolution block is l = 0.1, where the convolution parameter l represents the convolution kernel length (kernel length).
- twice the convolution kernel length may represent the side length of the cube formed by the eight weight vectors shown in FIG. 2.
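- the interpolation step that assigns each point's weight to the eight weight-vector positions of such a cube can be sketched as follows. The patent does not fix the exact interpolation function in this passage, so trilinear interpolation over the cube corners at (±l, ±l, ±l) is assumed for illustration:

```python
import numpy as np
from itertools import product

def trilinear_weights(point, kernel_length):
    """Assign a point's interpolation weights to the eight weight-vector
    positions, i.e. the corners of a cube of side 2 * kernel_length centred
    on the kernel. Sketch only; trilinear interpolation is an assumption."""
    l = kernel_length
    corners = np.array(list(product((-l, l), repeat=3)))   # (8, 3) positions
    # per-axis linear weight, then the product over the three axes
    w = np.prod(1.0 - np.abs(point - corners) / (2.0 * l), axis=1)
    return corners, w

corners, w = trilinear_weights(np.array([0.02, -0.03, 0.05]), kernel_length=0.1)
# for a point inside the cube the eight weights are non-negative and sum to 1
assert w.min() >= 0 and abs(w.sum() - 1.0) < 1e-9
```

This aligns a discrete point with the fixed weight-vector positions of the kernel, which is the purpose of the first weight data described above.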
- the input point cloud data is represented by an N*3 matrix; after the point cloud data is interpolated and convolved through the 1*1*1 interpolated convolution layers of the three paths, 32-channel data is obtained, denoted as N*32; the 32-channel data (N*32) is then input into the 3*3*3 interpolated convolution layer, yielding 64-channel data downsampled to 1/2 of the original, denoted as N/2*64; the 64-channel data (N/2*64) is then input into the 1*1*1 interpolated convolution layer for interpolated convolution processing, yielding 128-channel data downsampled to 1/2 of the original, denoted as N/2*128.
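- the shape flow through one interpolated convolution block can be followed with simple bookkeeping. The layers below are stand-ins that only track the (points, channels) shape, not real convolutions; N = 1024 is an arbitrary example point count:

```python
# shape bookkeeping for one interpolated convolution block (sketch only)
def interp_conv(shape, out_channels, downsample=1):
    """Return the output shape of an interpolated convolution layer that
    maps to `out_channels` channels and downsamples the point count."""
    points, _ = shape
    return (points // downsample, out_channels)

N = 1024                               # example number of input points (N*3)
shape = (N, 3)
shape = interp_conv(shape, 32)         # 1*1*1 InterpConv  -> N*32
shape = interp_conv(shape, 64, 2)      # 3*3*3 InterpConv  -> N/2*64
shape = interp_conv(shape, 128)        # 1*1*1 InterpConv  -> N/2*128
assert shape == (N // 2, 128)
```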
- the above processing process can be recorded as a processing process in a point cloud processing block, and the point cloud processing block includes three interpolated convolution blocks (InterpConv Block).
- the point cloud data can be repeatedly interpolated and convolved through at least two point cloud processing blocks.
- the point cloud data is repeatedly interpolated and convolved through two point cloud processing blocks.
- the number of interpolation convolution blocks in each point cloud processing block may be the same or different. In this example, the number of interpolation convolution blocks in the two point cloud processing blocks is the same, namely three. After the three N/2*128 data are integrated, the integrated N/2*128 data is processed again through the three interpolation convolution blocks in the second point cloud processing block.
- the processing procedure is similar to that of the first point cloud processing block described above.
- the convolution parameters corresponding to the interpolation convolution blocks in the second point cloud processing block can be different from those corresponding to the interpolation convolution blocks in the first point cloud processing block; specifically, the convolution parameters corresponding to the interpolation convolution blocks in the second point cloud processing block are greater than those corresponding to the interpolation convolution blocks in the first point cloud processing block. For example, the convolution parameter corresponding to the 3*3*3 interpolated convolution layer in the first interpolated convolution block in the second point cloud processing block is greater than that of the corresponding layer in the first point cloud processing block; that is, the convolution parameter corresponding to each discrete convolution processing gradually increases.
- the convolution parameters corresponding to different discrete convolution processing can be different.
- the three N/4*256 data obtained by the second point cloud processing block are integrated; specifically, the three N/4*256 data are concatenated along the channel dimension to obtain 768-channel data, denoted as N/4*768.
- the maximum pooling process is performed on N/4*1024 based on the maximum pooling layer (Maxpooling) to obtain the data representing the global feature vector, which is recorded as 1*1024.
- the 1*1024 data is processed based on the fully connected layer (FC), and 40-channel data is obtained, denoted as 1*40.
- Each channel corresponds to one dimension, that is, 40 dimensions of data are output, and each dimension corresponds to a category.
- the method further includes: Step 205: Determine the category of the object in the target scene based on the spatial structure feature of the point cloud data.
- the category of the object corresponding to the point cloud data, that is, the category of the object in the target scene, is determined based on the output data representing the spatial structure of the point cloud data in multiple dimensions. Specifically, the category of the object is determined based on the dimension with the largest value among the data of the multiple dimensions. For example, in the example shown in FIG. 4, data of 40 dimensions is output, and each dimension of data can correspond to a category; the dimension with the largest value is then determined from the 40 dimensions, and the category corresponding to that dimension is determined as the category of the object.
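- the classification head described above (max pooling, fully connected layer, argmax over 40 dimensions) can be sketched as follows. The feature matrix and FC weights here are random stand-ins, not trained parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
features = rng.standard_normal((64, 1024))   # per-point features, e.g. N/4*1024

# max pooling over the point dimension yields the 1*1024 global feature vector
global_feat = features.max(axis=0)           # shape (1024,)

# a fully connected layer maps the global feature to 40 class scores
# (W and b are random placeholders for trained parameters)
W = rng.standard_normal((1024, 40))
b = np.zeros(40)
scores = global_feat @ W + b                 # shape (40,), i.e. 1*40

# the category is the dimension with the largest value
category = int(np.argmax(scores))
assert 0 <= category < 40
```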
- weight data used to establish the association between the point cloud data and the first discrete convolution is obtained, that is, the weight representing the assignment of the point cloud data to the corresponding position of the weight vector of the first discrete convolution kernel is obtained, so that the discrete point cloud data and the weight vectors of the discrete convolution kernel are aligned, and the spatial structure features of the point cloud data can be better captured during the discrete convolution processing; the point cloud data is discretely convolved with different convolution parameters so as to obtain the fine spatial structure features of the surface of the point cloud data and the spatial structure features of the background information, which can improve the accuracy of the classification of the object corresponding to the point cloud data.
- Fig. 5 is a third schematic flow chart of a point cloud data processing method according to an embodiment of the application; as shown in Fig. 5, the method includes:
- Step 301: Obtain the point cloud data in the target scene and the weight vector of the first discrete convolution kernel.
- Step 302: Perform interpolation processing on the point cloud data based on the point cloud data and the weight vector of the first discrete convolution kernel to obtain first weight data; the first weight data represents the weight assigned by the point cloud data to the corresponding position of the weight vector of the first discrete convolution kernel.
- Step 303: Perform first discrete convolution processing on the point cloud data based on the first weight data and the weight vector of the first discrete convolution kernel to obtain a first discrete convolution result.
- Step 304: Perform first upsampling processing on the first discrete convolution result to obtain a first upsampling processing result.
- Step 305: Obtain the spatial structure feature of at least one point data in the point cloud data based on the first upsampling processing result.
- for step 301 to step 303 in this embodiment, refer to the detailed descriptions of step 101 to step 103 in the foregoing embodiments, which will not be repeated here.
- the first discrete convolution result needs to be subjected to first upsampling processing to restore the size of the first discrete convolution result; that is, the size of the first discrete convolution result is enlarged to obtain the first upsampling processing result, and the spatial structure feature of at least one point data in the point cloud data is obtained based on the first upsampling processing result.
- the structure corresponding to interpolation processing and discrete convolution processing can be referred to as an encoder structure
- the structure corresponding to upsampling processing can be referred to as a decoder structure.
- in addition to one round of interpolation processing, discrete convolution processing, and upsampling processing of the point cloud data, in order to better identify the spatial structure feature of at least one point data in the point cloud data, multiple rounds of interpolation processing, discrete convolution processing, and upsampling processing may be performed in sequence.
- the obtaining the spatial structure feature of at least one point data in the point cloud data based on the first upsampling processing result includes: performing interpolation processing on the result of the previous upsampling processing based on the result of the previous upsampling processing and the weight vector of the third discrete convolution kernel to obtain third weight data; the third weight data represents the weight assigned by the result of the previous upsampling processing to the corresponding position of the weight vector of the third discrete convolution kernel; if the previous upsampling processing is the first upsampling processing performed on the first discrete convolution result, the result of the previous upsampling processing is the first upsampling processing result; performing third discrete convolution processing on the result of the previous upsampling processing based on the third weight data and the weight vector of the third discrete convolution kernel to obtain a third discrete convolution result; performing second upsampling processing on the third discrete convolution result to obtain a second upsampling processing result; and obtaining the spatial structure feature of the at least one point data in the point cloud data based on the second upsampling processing result.
- the interpolation processing, the second discrete convolution processing, and the second upsampling processing can be repeated, and the number of repetitions can be pre-configured based on the actual situation.
- FIG. 6 is a schematic structural diagram of the second network in the point cloud data processing method according to an embodiment of the application; as shown in FIG. 6, the network includes an encoder and a decoder; the encoder includes a plurality of interpolated convolution layers (InterpConv), the point cloud data is sequentially subjected to interpolation processing and discrete convolution processing through the plurality of interpolated convolution layers, and each interpolated convolution layer can perform the interpolation processing and discrete convolution processing procedures in this embodiment.
- the convolution parameters corresponding to the multiple interpolation convolution layers may be different.
- the convolution parameters corresponding to each of the plurality of interpolation convolution layers may be gradually increased.
- the convolution parameter l represents the convolution kernel length (kernel length).
- twice the length of the convolution kernel is the side length of the cube formed by eight weight vectors shown in FIG. 2.
- the input point cloud data is represented by an N*3 matrix; after the point cloud data is interpolated and convolved through the first 3*3*3 interpolated convolution layer, 16-channel data downsampled to 1/2 of the original is obtained, denoted as N/2*16; the 16-channel data (N/2*16) is input into the 1*1*1 interpolated convolution layer for interpolated convolution processing, and data with the number of channels adjusted to 32 is obtained, denoted as N/2*32. The 1*1*1 interpolated convolution layers are all used to adjust the number of channels. The N/2*32 data is input into the second 3*3*3 interpolated convolution layer for interpolated convolution processing, and 32-channel data downsampled to 1/4 of the original is obtained, denoted as N/4*32. The N/4*32 data is input into the 1*1*1 interpolated convolution layer for interpolated convolution processing, and the number of channels is adjusted to 64, denoted as N/4*64.
- upsampling is performed on the N/16*256 data to obtain 256-channel data upsampled to 1/8 of the original, denoted as N/8*256; upsampling is performed on the N/8*256 data to obtain 128-channel data upsampled to 1/4 of the original, denoted as N/4*128; upsampling is performed on the N/4*128 data to obtain 128-channel data upsampled to 1/2 of the original, denoted as N/2*128; upsampling is performed on the N/2*128 data to obtain 128-channel data upsampled to the original size, denoted as N*128.
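- the decoder's upsampling path can likewise be followed with shape bookkeeping. The steps below are stand-ins that only double the point count and set the channel count at each stage, matching the shapes listed above; N = 1024 is an arbitrary example:

```python
# shape bookkeeping for the decoder's upsampling path (sketch only)
def upsample(shape, out_channels):
    """Return the output shape of an upsampling step that doubles the
    point count and maps to `out_channels` channels."""
    points, _ = shape
    return (points * 2, out_channels)

N = 1024
shape = (N // 16, 256)         # encoder output, N/16*256
shape = upsample(shape, 256)   # -> N/8*256
shape = upsample(shape, 128)   # -> N/4*128
shape = upsample(shape, 128)   # -> N/2*128
shape = upsample(shape, 128)   # -> N*128, the original point count
assert shape == (N, 128)
```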
- the N*128 data is input into the 1*1*1 interpolated convolution layer for interpolated convolution processing to obtain N*m data; that is, feature data corresponding to m dimensions is obtained for each point in the point cloud data.
- the method further includes step 306: determining the semantic information of the at least one point data based on the spatial structure feature of the at least one point data in the point cloud data.
- the semantic information of the at least one point data is determined based on the output data representing the spatial structure of the at least one point data in multiple dimensions, that is, the category of the at least one point data is determined.
- the category indicates the object information to which the point data belongs. For example, if the target scene includes multiple objects such as people and vehicles, the semantic information of the point data can be used to identify whether the object corresponding to the point data in the point cloud data is a person or a vehicle, so that all the point data corresponding to the person and all the point data corresponding to the vehicle can be identified through the semantic information of the point data.
- the semantic information of the point data is determined based on the data of the dimension with the largest value among the feature data of the multiple dimensions corresponding to each point data in the at least one point data.
- N dimensions of feature data are output for each point data, and each dimension of data can correspond to a category, then the data of the dimension with the largest value is determined from the data of N dimensions.
- the category corresponding to the data of the dimension with the largest value is determined as the semantic information of the point data.
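- the per-point argmax described above can be sketched as follows. The score matrix is random placeholder data standing in for the network's N*m output, with example counts for points and categories:

```python
import numpy as np

rng = np.random.default_rng(2)
n_points, n_classes = 100, 5                              # example sizes
per_point_scores = rng.standard_normal((n_points, n_classes))  # N*m output

# for each point, the semantic label is the dimension with the largest value
labels = per_point_scores.argmax(axis=1)
assert labels.shape == (n_points,) and labels.max() < n_classes
```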
- weight data used to establish the association between the point cloud data and the first discrete convolution is obtained, that is, the weight representing the assignment of the point cloud data to the corresponding position of the weight vector of the first discrete convolution kernel is obtained, so that the discrete point cloud data and the weight vectors of the discrete convolution kernel are aligned, the spatial structure features of the point cloud data can be better captured during the discrete convolution processing, and the semantic information of the point cloud data can be better obtained.
- FIG. 7 is a schematic diagram 1 of the composition structure of a point cloud data processing device according to an embodiment of the application; as shown in FIG. 7, the device includes: an acquisition unit 41, an interpolation processing unit 42 and a feature acquisition unit 43; wherein,
- the obtaining unit 41 is configured to obtain the point cloud data in the target scene and the weight vector of the first discrete convolution kernel
- the interpolation processing unit 42 is configured to perform interpolation processing on the point cloud data based on the point cloud data and the weight vector of the first discrete convolution kernel to obtain first weight data; the first weight data represents The point cloud data is assigned to the weight at the corresponding position of the weight vector of the first discrete convolution kernel;
- the feature acquisition unit 43 is configured to perform first discrete convolution processing on the point cloud data based on the first weight data and the weight vector of the first discrete convolution kernel to obtain a first discrete convolution result; Obtain the spatial structure feature of at least part of the point cloud data in the point cloud data based on the first discrete convolution result.
- the interpolation processing unit 42 is configured to obtain the first weight data according to a preset interpolation processing method based on the point cloud data and the weight vector of the first discrete convolution kernel; the first weight data represents the weight for assigning the point cloud data to the corresponding position of the weight vector of the first discrete convolution kernel that meets the preset condition; wherein the point cloud data is located in the region corresponding to the weight vector of the first discrete convolution kernel that meets the preset condition.
- the feature acquisition unit 43 is further configured to perform first discrete convolution processing on the point cloud data based on the first weight data and the weight vector of the first discrete convolution kernel to obtain the first discrete convolution result; normalize the first discrete convolution result based on a normalization parameter, where the normalization parameter is determined based on the number of point cloud data in the specific geometric shape area where the point cloud data is located; and obtain the spatial structure features of at least part of the point cloud data in the point cloud data based on the normalized result.
- the weight vector of the first discrete convolution kernel is in n groups, the first weight data is in n groups, and n is an integer greater than or equal to 2;
- the feature acquisition unit 43 is configured to perform the k-th first discrete convolution processing on the weight vectors of the k-th group of first discrete convolution kernels and the point cloud data based on the k-th group of first weight data and the k-th group of first convolution parameters to obtain the k-th first discrete convolution result; the k-th group of first convolution parameters corresponds to the size range of the k-th discrete convolution processing; k is an integer greater than or equal to 1 and less than or equal to n; and the feature acquisition unit 43 is configured to determine the spatial structure features of the point cloud data based on the n first discrete convolution results.
- the interpolation processing unit 42 is further configured to perform interpolation processing on the first processed data based on the first processed data and the weight vector of the second discrete convolution kernel to obtain second weight data; the second weight data represents the weight assigned by the first processed data to the corresponding position of the weight vector of the second discrete convolution kernel; wherein the first processed data is determined according to the result of the previous discrete convolution processing, and in the case where the result of the previous discrete convolution processing is the n first discrete convolution results, the first processed data is determined according to the n first discrete convolution results;
- the feature acquisition unit 43 is further configured to perform a second discrete convolution process on the first processed data based on the second weight data and the weight vector of the second discrete convolution kernel to obtain a second discrete convolution Result; based on the second discrete convolution result, the spatial structure feature of the point cloud data is obtained.
- the weight vectors of the second discrete convolution kernel are in l groups, the second weight data is in l groups, and l is an integer greater than or equal to 2; the feature acquisition unit 43 is configured to perform the m-th second discrete convolution processing on the weight vectors of the m-th group of second discrete convolution kernels and the first processed data based on the m-th group of second weight data and the m-th group of second convolution parameters to obtain the m-th second discrete convolution result; the m-th group of second convolution parameters corresponds to the size range of the m-th discrete convolution processing; m is an integer greater than or equal to 1 and less than or equal to l; and the feature acquisition unit 43 is further configured to determine the spatial structure features of the point cloud data based on the l second discrete convolution results.
- the device further includes a first determining unit 44, configured to determine the category of the object in the target scene based on the spatial structure features of the point cloud data.
- the feature acquisition unit 43 is configured to perform first discrete convolution processing on the point cloud data based on the first weight data and the weight vector of the first discrete convolution kernel to obtain a first discrete convolution result; perform first upsampling processing on the first discrete convolution result to obtain a first upsampling processing result; and obtain the spatial structure feature of at least one point data in the point cloud data based on the first upsampling processing result.
- the interpolation processing unit 42 is further configured to perform interpolation processing on the result of the previous upsampling processing based on the result of the previous upsampling processing and the weight vector of the third discrete convolution kernel to obtain third weight data; the third weight data represents the weight assigned by the result of the previous upsampling processing to the corresponding position of the weight vector of the third discrete convolution kernel; if the previous upsampling processing is the first upsampling processing performed on the first discrete convolution result, the result of the previous upsampling processing is the first upsampling processing result;
- the feature acquisition unit 43 is further configured to perform third discrete convolution processing on the result of the previous upsampling processing based on the third weight data and the weight vector of the third discrete convolution kernel to obtain a third discrete convolution result; perform second upsampling processing on the third discrete convolution result to obtain a second upsampling processing result; and obtain the spatial structure feature of at least one point data in the point cloud data based on the second upsampling processing result.
- the device further includes a second determining unit 45 configured to determine the semantic information of the at least one point data based on the spatial structure feature of the at least one point data in the point cloud data.
- the acquisition unit 41, the interpolation processing unit 42, the feature acquisition unit 43, the first determination unit 44, and the second determination unit 45 in the device can be implemented by a central processing unit (CPU, Central Processing Unit), a digital signal processor (DSP, Digital Signal Processor), a microcontroller unit (MCU, Microcontroller Unit), or a field-programmable gate array (FPGA, Field-Programmable Gate Array).
- when the point cloud data processing device provided in the above embodiment performs point cloud data processing, only the division of the above-mentioned program modules is used as an example for illustration; in practical applications, the above-mentioned processing can be allocated to different program modules as required, that is, the internal structure of the device can be divided into different program modules to complete all or part of the processing described above.
- the point cloud data processing device provided in the foregoing embodiment and the point cloud data processing method embodiment belong to the same concept, and the specific implementation process is detailed in the method embodiment, which will not be repeated here.
- FIG. 10 is a schematic diagram of the composition structure of an electronic device according to an embodiment of the application; as shown in FIG. 10, it includes a memory 52, a processor 51, and a computer program stored in the memory 52 and executable on the processor 51; the processor 51 implements the steps of the point cloud data processing method described in the embodiments of the present application when executing the program.
- the various components in the electronic device may be coupled together through a bus system 53.
- the bus system 53 is used to implement connection and communication between these components.
- the bus system 53 also includes a power bus, a control bus, and a status signal bus.
- various buses are marked as the bus system 53 in FIG. 10.
- the memory 52 may be a volatile memory or a non-volatile memory, and may also include both volatile and non-volatile memory.
- the non-volatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferromagnetic random access memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be magnetic disk storage or magnetic tape storage.
- the volatile memory may be a Random Access Memory (RAM), which serves as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM).
- the memory 52 described in the embodiment of the present application is intended to include, but is not limited to, these and any other suitable types of memory.
- the method disclosed in the foregoing embodiment of the present application may be applied to the processor 51 or implemented by the processor 51.
- the processor 51 may be an integrated circuit chip with signal processing capability. In the implementation process, the steps of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 51 or instructions in the form of software.
- the aforementioned processor 51 may be a general-purpose processor, a DSP, another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
- the processor 51 may implement or execute various methods, steps, and logical block diagrams disclosed in the embodiments of the present application.
- the general-purpose processor may be a microprocessor or any conventional processor.
- the steps of the method disclosed in the embodiments of the present application can be directly embodied as being executed and completed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
- the software module may be located in a storage medium, and the storage medium is located in the memory 52.
- the processor 51 reads the information in the memory 52 and completes the steps of the foregoing method in combination with its hardware.
- in an exemplary embodiment, the electronic device may be implemented by one or more Application-Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), FPGAs, general-purpose processors, controllers, MCUs, microprocessors, or other electronic components, so as to execute the aforementioned methods.
- the embodiment of the present application also provides a computer storage medium, for example, a memory 52 storing a computer program, which can be executed by the processor 51 of the electronic device to complete the steps of the foregoing method.
- the computer storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, optical disk, or CD-ROM, etc.; it may also be various devices including one or any combination of the foregoing memories.
- the computer storage medium provided by the embodiment of the present application has computer instructions stored thereon, and when the instruction is executed by a processor, the point cloud data processing method described in the embodiment of the present application is implemented.
- An embodiment of the present application further provides a computer program product, wherein the computer program product includes computer-executable instructions; after the computer-executable instructions are executed, any point cloud data processing method provided in the embodiments of the present application can be implemented.
- the disclosed device and method may be implemented in other ways.
- the device embodiments described above are merely illustrative.
- the division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the coupling, or direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
- the units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
- the functional units in the embodiments of the present application may all be integrated into one processing unit, or each unit may serve individually as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
- the foregoing program may be stored in a computer-readable storage medium; when the program is executed, the steps of the foregoing method embodiments are performed. The foregoing storage medium includes various media that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
- alternatively, if the above-mentioned integrated unit of the present application is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the various embodiments of the present application.
- the aforementioned storage media include various media that can store program code, such as removable storage devices, a ROM, a RAM, magnetic disks, or optical discs.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Medical Informatics (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Geometry (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Image Generation (AREA)
Abstract
Description
Claims (23)
- A point cloud data processing method, the method comprising: obtaining point cloud data of a target scene and weight vectors of a first discrete convolution kernel; performing interpolation processing on the point cloud data based on the point cloud data and the weight vectors of the first discrete convolution kernel to obtain first weight data, the first weight data characterizing the weights with which the point cloud data is assigned to the positions corresponding to the weight vectors of the first discrete convolution kernel; performing first discrete convolution processing on the point cloud data based on the first weight data and the weight vectors of the first discrete convolution kernel to obtain a first discrete convolution result; and obtaining, based on the first discrete convolution result, spatial structure features of at least part of the point cloud data.
- The method according to claim 1, wherein performing interpolation processing on the point cloud data based on the point cloud data and the weight vectors of the first discrete convolution kernel to obtain the first weight data comprises: obtaining the first weight data from the point cloud data and the weight vectors of the first discrete convolution kernel according to a preset interpolation scheme, the first weight data characterizing the weights with which the point cloud data is assigned to the positions corresponding to the weight vectors of the first discrete convolution kernel that satisfy a preset condition, wherein the point cloud data is located within a specific geometric region enclosed by the weight vectors of the first discrete convolution kernel that satisfy the preset condition.
- The method according to claim 2, wherein, after the first discrete convolution result is obtained, the method further comprises: normalizing the first discrete convolution result based on a normalization parameter, the normalization parameter being determined from the number of point cloud data within the specific geometric region in which the point cloud data is located; and obtaining, based on the first discrete convolution result, spatial structure features of at least part of the point cloud data comprises: obtaining the spatial structure features of at least part of the point cloud data based on the normalized result.
- The method according to any one of claims 1 to 3, wherein the weight vectors of the first discrete convolution kernel comprise n groups and the first weight data comprises n groups, n being an integer greater than or equal to 2; performing the first discrete convolution processing on the point cloud data based on the first weight data and the weight vectors of the first discrete convolution kernel to obtain the first discrete convolution result comprises: performing a k-th first discrete convolution processing on the weight vectors of the k-th group of the first discrete convolution kernel and the point cloud data based on the k-th group of first weight data and a k-th group of first convolution parameters to obtain a k-th first discrete convolution result, the k-th group of first convolution parameters corresponding to a size range of the k-th first discrete convolution processing, k being an integer greater than or equal to 1 and less than or equal to n; and obtaining, based on the first discrete convolution result, spatial structure features of at least part of the point cloud data comprises: determining the spatial structure features of the point cloud data based on the n first discrete convolution results.
- The method according to claim 4, wherein determining the spatial structure features of the point cloud data based on the n first discrete convolution results comprises: performing interpolation processing on first processed data based on the first processed data and weight vectors of a second discrete convolution kernel to obtain second weight data, the second weight data characterizing the weights with which the first processed data is assigned to the positions corresponding to the weight vectors of the second discrete convolution kernel, wherein the first processed data is determined from the result of the previous discrete convolution processing, and in the case where the result of the previous discrete convolution processing is the n first discrete convolution results, the first processed data is determined from the n first discrete convolution results; performing second discrete convolution processing on the first processed data based on the second weight data and the weight vectors of the second discrete convolution kernel to obtain a second discrete convolution result; and obtaining the spatial structure features of the point cloud data based on the second discrete convolution result.
- The method according to claim 5, wherein the weight vectors of the second discrete convolution kernel comprise l groups and the second weight data comprises l groups, l being an integer greater than or equal to 2; performing discrete convolution processing again on the first processed data based on the second weight data and the weight vectors of the second discrete convolution kernel comprises: performing an m-th second discrete convolution processing on the weight vectors of the m-th group of the second discrete convolution kernel and the first processed data based on the m-th group of second weight data and an m-th group of second convolution parameters to obtain an m-th second discrete convolution result, the m-th group of second convolution parameters corresponding to a size range of the m-th second discrete convolution processing, m being an integer greater than or equal to 1 and less than or equal to l; and obtaining the spatial structure features of the point cloud data based on the second discrete convolution result comprises: determining the spatial structure features of the point cloud data based on the l second discrete convolution results.
- The method according to any one of claims 1 to 6, further comprising: determining a category of an object in the target scene based on the spatial structure features of the point cloud data.
- The method according to any one of claims 1 to 3, wherein obtaining, based on the first discrete convolution result, spatial structure features of at least part of the point cloud data comprises: performing first upsampling processing on the first discrete convolution result to obtain a first upsampling result; and obtaining a spatial structure feature of at least one point data in the point cloud data based on the first upsampling result.
- The method according to claim 8, wherein obtaining a spatial structure feature of at least one point data in the point cloud data based on the first upsampling result comprises: performing interpolation processing on the result of the previous upsampling processing based on the result of the previous upsampling processing and weight vectors of a third discrete convolution kernel to obtain third weight data, the third weight data characterizing the weights with which the result of the previous upsampling processing is assigned to the positions corresponding to the weight vectors of the third discrete convolution kernel, wherein, in the case where the previous upsampling processing is the first upsampling processing performed on the first discrete convolution result, the result of the previous upsampling processing is the first upsampling result; performing third discrete convolution processing on the result of the previous upsampling processing based on the third weight data and the weight vectors of the third discrete convolution kernel to obtain a third discrete convolution result; performing second upsampling processing on the third discrete convolution result to obtain a second upsampling result; and obtaining a spatial structure feature of at least one point data in the point cloud data based on the second upsampling result.
- The method according to any one of claims 1 to 3, 8 and 9, further comprising: determining semantic information of at least one point data in the point cloud data based on the spatial structure feature of the at least one point data.
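Claims 1 to 3 can be illustrated with a toy sketch. This is not the patented implementation: the trilinear interpolation scheme, the 2×2×2 kernel grid, and every name below are assumptions chosen purely for illustration. Each point's feature is spread onto the surrounding kernel weight positions with interpolation weights (the "first weight data"); each position's accumulated feature may be normalized by the number of contributing points (as in claim 3), then dotted with that position's kernel weight vector and summed (the "first discrete convolution result").

```python
import numpy as np

def interp_conv(points, feats, kernel_w, cell=1.0, normalize=False):
    """Interpolation-weighted discrete convolution (illustrative sketch).

    points:   (N, 3) point offsets relative to the kernel centre
    feats:    (N, C) features of those points
    kernel_w: dict mapping a 3D grid-corner tuple -> (C,) weight vector
    Each point spreads its feature onto the 8 enclosing grid corners with
    trilinear weights ("first weight data"); each corner's accumulated
    feature is optionally divided by the number of contributing points,
    then dotted with that corner's kernel weight vector and summed into
    one output scalar ("first discrete convolution result").
    """
    acc = {k: np.zeros(feats.shape[1]) for k in kernel_w}
    npts = {k: 0 for k in kernel_w}
    for p, f in zip(points, feats):
        base = np.floor(p / cell).astype(int)
        frac = p / cell - base
        for d in np.ndindex(2, 2, 2):
            corner = tuple(int(c) for c in base + np.array(d))
            if corner not in kernel_w:
                continue  # point lies outside this kernel's grid
            # trilinear weight of this corner for this point
            w = float(np.prod(np.where(np.array(d) == 1, frac, 1.0 - frac)))
            acc[corner] += w * f   # interpolation: assign weighted feature
            npts[corner] += 1
    out = 0.0
    for k, wvec in kernel_w.items():
        feat = acc[k] / npts[k] if (normalize and npts[k]) else acc[k]
        out += float(feat @ wvec)  # discrete convolution at this corner
    return out
```

For a single point centred in a unit cell, the eight trilinear weights are each 0.125 and sum the feature back exactly, so a 2×2×2 kernel of all-ones weight vectors returns the original feature value.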
- A point cloud data processing device, the device comprising: an acquisition unit, an interpolation processing unit, and a feature acquisition unit; wherein the acquisition unit is configured to obtain point cloud data of a target scene and weight vectors of a first discrete convolution kernel; the interpolation processing unit is configured to perform interpolation processing on the point cloud data based on the point cloud data and the weight vectors of the first discrete convolution kernel to obtain first weight data, the first weight data characterizing the weights with which the point cloud data is assigned to the positions corresponding to the weight vectors of the first discrete convolution kernel; and the feature acquisition unit is configured to perform first discrete convolution processing on the point cloud data based on the first weight data and the weight vectors of the first discrete convolution kernel to obtain a first discrete convolution result, and to obtain spatial structure features of at least part of the point cloud data based on the first discrete convolution result.
- The device according to claim 11, wherein the interpolation processing unit is configured to obtain the first weight data from the point cloud data and the weight vectors of the first discrete convolution kernel according to a preset interpolation scheme, the first weight data characterizing the weights with which the point cloud data is assigned to the positions corresponding to the weight vectors of the first discrete convolution kernel that satisfy a preset condition, wherein the point cloud data is located within a specific geometric region enclosed by the weight vectors of the first discrete convolution kernel that satisfy the preset condition.
- The device according to claim 12, wherein the feature acquisition unit is configured to: perform first discrete convolution processing on the point cloud data based on the first weight data and the weight vectors of the first discrete convolution kernel to obtain a first discrete convolution result; normalize the first discrete convolution result based on a normalization parameter, the normalization parameter being determined from the number of point cloud data within the specific geometric region in which the point cloud data is located; and obtain spatial structure features of at least part of the point cloud data based on the normalized result.
- The device according to any one of claims 11 to 13, wherein the weight vectors of the first discrete convolution kernel comprise n groups and the first weight data comprises n groups, n being an integer greater than or equal to 2; the feature acquisition unit is configured to: perform a k-th first discrete convolution processing on the weight vectors of the k-th group of the first discrete convolution kernel and the point cloud data based on the k-th group of first weight data and a k-th group of first convolution parameters to obtain a k-th first discrete convolution result, the k-th group of first convolution parameters corresponding to a size range of the k-th first discrete convolution processing, k being an integer greater than or equal to 1 and less than or equal to n; and determine the spatial structure features of the point cloud data based on the n first discrete convolution results.
- The device according to claim 14, wherein the interpolation processing unit is further configured to perform interpolation processing on first processed data based on the first processed data and weight vectors of a second discrete convolution kernel to obtain second weight data, the second weight data characterizing the weights with which the first processed data is assigned to the positions corresponding to the weight vectors of the second discrete convolution kernel, wherein the first processed data is determined from the result of the previous discrete convolution processing, and in the case where the result of the previous discrete convolution processing is the n first discrete convolution results, the first processed data is determined from the n first discrete convolution results; and the feature acquisition unit is further configured to perform second discrete convolution processing on the first processed data based on the second weight data and the weight vectors of the second discrete convolution kernel to obtain a second discrete convolution result, and to obtain the spatial structure features of the point cloud data based on the second discrete convolution result.
- The device according to claim 15, wherein the weight vectors of the second discrete convolution kernel comprise l groups and the second weight data comprises l groups, l being an integer greater than or equal to 2; the feature acquisition unit is configured to: perform an m-th second discrete convolution processing on the weight vectors of the m-th group of the second discrete convolution kernel and the first processed data based on the m-th group of second weight data and an m-th group of second convolution parameters to obtain an m-th second discrete convolution result, the m-th group of second convolution parameters corresponding to a size range of the m-th discrete convolution processing, m being an integer greater than or equal to 1 and less than or equal to l; and determine the spatial structure features of the point cloud data based on the l second discrete convolution results.
- The device according to any one of claims 11 to 16, further comprising a first determining unit configured to determine a category of an object in the target scene based on the spatial structure features of the point cloud data.
- The device according to any one of claims 11 to 13, wherein the feature acquisition unit is configured to: perform first discrete convolution processing on the point cloud data based on the first weight data and the weight vectors of the first discrete convolution kernel to obtain a first discrete convolution result; perform first upsampling processing on the first discrete convolution result to obtain a first upsampling result; and obtain a spatial structure feature of at least one point data in the point cloud data based on the first upsampling result.
- The device according to claim 18, wherein the interpolation processing unit is further configured to perform interpolation processing on the result of the previous upsampling processing based on the result of the previous upsampling processing and weight vectors of a third discrete convolution kernel to obtain third weight data, the third weight data characterizing the weights with which the result of the previous upsampling processing is assigned to the positions corresponding to the weight vectors of the third discrete convolution kernel, wherein, in the case where the previous upsampling processing is the first upsampling processing performed on the first discrete convolution result, the result of the previous upsampling processing is the first upsampling result; and the feature acquisition unit is further configured to perform third discrete convolution processing on the result of the previous upsampling processing based on the third weight data and the weight vectors of the third discrete convolution kernel to obtain a third discrete convolution result, perform second upsampling processing on the third discrete convolution result to obtain a second upsampling result, and obtain a spatial structure feature of at least one point data in the point cloud data based on the second upsampling result.
- The device according to any one of claims 11 to 13, 18 and 19, further comprising a second determining unit configured to determine semantic information of at least one point data in the point cloud data based on the spatial structure feature of the at least one point data.
- A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 10.
- An electronic device, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the program, implements the steps of the method according to any one of claims 1 to 10.
- A computer program product, comprising computer-executable instructions which, when executed, are capable of implementing the steps of the method according to any one of claims 1 to 10.
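Claims 8 to 10 describe restoring per-point features from a downsampled, convolved point set via upsampling so that per-point semantic information can be predicted. One plausible reading of such an upsampling step — purely an illustration, not the claimed algorithm; all names here are hypothetical — is inverse-distance feature propagation from the coarse points back to the dense points:

```python
import numpy as np

def upsample_features(coarse_xyz, coarse_feats, dense_xyz, k=3):
    """Propagate features from a coarse point set back onto a dense set by
    inverse-distance weighting over the k nearest coarse points; the dense
    per-point features could then feed a per-point semantic classifier."""
    out = np.zeros((len(dense_xyz), coarse_feats.shape[1]))
    for i, q in enumerate(dense_xyz):
        d = np.linalg.norm(coarse_xyz - q, axis=1)   # distances to coarse points
        idx = np.argsort(d)[:k]                      # k nearest neighbours
        w = 1.0 / (d[idx] + 1e-8)                    # inverse-distance weights
        out[i] = (w[:, None] * coarse_feats[idx]).sum(0) / w.sum()
    return out
```

A dense point exactly on a coarse point recovers that point's feature, while a point midway between two coarse points receives the average of their features.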
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020565957A JP7475287B2 (ja) | 2019-05-22 | 2019-11-28 | Point cloud data processing method and apparatus, electronic device, storage medium, and computer program |
KR1020207031573A KR102535158B1 (ko) | 2019-05-22 | 2019-11-28 | Point cloud data processing method, apparatus, electronic device, and storage medium |
SG11202010693SA SG11202010693SA (en) | 2019-05-22 | 2019-11-28 | Method and device for processing point cloud data, electronic device and storage medium |
US17/082,686 US20210042501A1 (en) | 2019-05-22 | 2020-10-28 | Method and device for processing point cloud data, electronic device and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910430700.7A CN110163906B (zh) | 2019-05-22 | 2019-05-22 | Point cloud data processing method and device, electronic device and storage medium |
CN201910430700.7 | 2019-05-22 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/082,686 Continuation US20210042501A1 (en) | 2019-05-22 | 2020-10-28 | Method and device for processing point cloud data, electronic device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020233069A1 true WO2020233069A1 (zh) | 2020-11-26 |
Family
ID=67632023
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/121776 WO2020233069A1 (zh) | 2019-05-22 | 2019-11-28 | Point cloud data processing method and device, electronic device and storage medium |
Country Status (6)
Country | Link |
---|---|
US (1) | US20210042501A1 (zh) |
JP (1) | JP7475287B2 (zh) |
KR (1) | KR102535158B1 (zh) |
CN (1) | CN110163906B (zh) |
SG (1) | SG11202010693SA (zh) |
WO (1) | WO2020233069A1 (zh) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110163906B (zh) * | 2019-05-22 | 2021-10-29 | Beijing SenseTime Technology Development Co., Ltd. | Point cloud data processing method and device, electronic device and storage medium |
CN110969689A (zh) * | 2019-12-03 | 2020-04-07 | Shanghai Eye Control Technology Co., Ltd. | Point cloud feature extraction method and device, computer equipment and storage medium |
CN112935703B (zh) * | 2021-03-19 | 2022-09-27 | Shandong University | Mobile robot pose correction method and system for recognizing a dynamic pallet terminal |
CN113189634B (zh) * | 2021-03-02 | 2022-10-25 | Sichuan Xinxianda Measurement and Control Technology Co., Ltd. | Gaussian-like shaping method |
CN112991473B (zh) * | 2021-03-19 | 2023-07-18 | South China University of Technology | Neural network encoding and decoding method and system based on cube templates |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107358655A (zh) * | 2017-07-27 | 2017-11-17 | Shenzhen Qianhai Beisituo Technology Co., Ltd. | Method for identifying hemispherical and conical surface models based on discrete stationary wavelet transform |
CN108921939A (zh) * | 2018-07-04 | 2018-11-30 | Wang Bin | Image-based three-dimensional scene reconstruction method |
CN110163906A (zh) * | 2019-05-22 | 2019-08-23 | Beijing SenseTime Technology Development Co., Ltd. | Point cloud data processing method and device, electronic device and storage medium |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0631250B1 (en) * | 1993-06-21 | 2002-03-20 | Nippon Telegraph And Telephone Corporation | Method and apparatus for reconstructing three-dimensional objects |
BRPI0520370B8 (pt) * | 2005-06-28 | 2023-01-31 | Scanalyse Pty Ltd | Sistema e método para medição e mapeamento de uma superfície com relação a uma referência |
CN101063967B (zh) * | 2006-04-28 | 2010-11-10 | Hongfujin Precision Industry (Shenzhen) Co., Ltd. | Point cloud automatic trimming system and method |
WO2008066740A2 (en) * | 2006-11-22 | 2008-06-05 | Parker Vision, Inc. | Multi-dimensional error correction for communications systems |
US10378891B2 (en) * | 2007-12-28 | 2019-08-13 | Outotec Pty Ltd | System and method for measuring and mapping a surface relative to a reference |
US9068809B1 (en) * | 2013-06-06 | 2015-06-30 | The Boeing Company | Quasi-virtual locate/drill/shim process |
CN105590311A (zh) * | 2014-11-13 | 2016-05-18 | Fu Tai Hua Industry (Shenzhen) Co., Ltd. | Graphical planar data analysis system and method |
US10459084B2 (en) * | 2014-12-30 | 2019-10-29 | Nokia Technologies Oy | Range sensing using a hybrid range sensing device |
CN105550688B (zh) * | 2015-12-04 | 2019-03-29 | Baidu Online Network Technology (Beijing) Co., Ltd. | Point cloud data classification method and device |
US11458034B2 (en) * | 2016-05-03 | 2022-10-04 | Icarus Medical, LLC | Method for automating body part sizing |
CN107918753B (zh) * | 2016-10-10 | 2019-02-22 | Tencent Technology (Shenzhen) Co., Ltd. | Point cloud data processing method and device |
CN108230329B (zh) * | 2017-12-18 | 2021-09-21 | Sun Ying | Semantic segmentation method based on multi-scale convolutional neural networks |
US11221413B2 (en) * | 2018-03-14 | 2022-01-11 | Uatc, Llc | Three-dimensional object detection |
US10572770B2 (en) * | 2018-06-15 | 2020-02-25 | Intel Corporation | Tangent convolution for 3D data |
CN109410307B (zh) * | 2018-10-16 | 2022-09-20 | Dalian University of Technology | Scene point cloud semantic segmentation method |
CN109597087B (zh) * | 2018-11-15 | 2022-07-01 | Tianjin University | 3D object detection method based on point cloud data |
2019
- 2019-05-22 CN CN201910430700.7A patent/CN110163906B/zh active Active
- 2019-11-28 KR KR1020207031573A patent/KR102535158B1/ko active IP Right Grant
- 2019-11-28 WO PCT/CN2019/121776 patent/WO2020233069A1/zh active Application Filing
- 2019-11-28 SG SG11202010693SA patent/SG11202010693SA/en unknown
- 2019-11-28 JP JP2020565957A patent/JP7475287B2/ja active Active

2020
- 2020-10-28 US US17/082,686 patent/US20210042501A1/en not_active Abandoned
Non-Patent Citations (2)
Title |
---|
KOMARICHEV, ARTEM ET AL.: "A-CNN: Annularly Convolutional Neural Networks on Point Clouds", ARXIV.ORG, 16 April 2019 (2019-04-16), pages 7413 - 7422, XP033686912 * |
WU, WENXUAN ET AL.: "PointConv: Deep Convolutional Networks on 3D Point Clouds", ARXIV.ORG, 11 April 2019 (2019-04-11), pages 9613 - 9622, XP033687285 * |
Also Published As
Publication number | Publication date |
---|---|
US20210042501A1 (en) | 2021-02-11 |
CN110163906B (zh) | 2021-10-29 |
SG11202010693SA (en) | 2020-12-30 |
JP2021528726A (ja) | 2021-10-21 |
KR102535158B1 (ko) | 2023-05-22 |
KR20200139761A (ko) | 2020-12-14 |
CN110163906A (zh) | 2019-08-23 |
JP7475287B2 (ja) | 2024-04-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020233069A1 (zh) | Point cloud data processing method and device, electronic device and storage medium | |
CN109522874B (zh) | Human body action recognition method and device, terminal device and storage medium | |
WO2020177651A1 (zh) | Image segmentation method and image processing device | |
CN110378381B (zh) | Object detection method and device, and computer storage medium | |
CN110473137B (zh) | Image processing method and device | |
CN107292256B (zh) | Deep convolutional wavelet neural network expression recognition method based on auxiliary tasks | |
CN112990010B (zh) | Point cloud data processing method and device, computer device and storage medium | |
CN111914997B (zh) | Method for training a neural network, image processing method, and apparatus | |
CN112529904B (zh) | Image semantic segmentation method and apparatus, computer-readable storage medium and chip | |
EP4006776A1 (en) | Image classification method and apparatus | |
CN109948454B (zh) | Expression database enhancement method, training method, computing device and storage medium | |
CN110619334B (zh) | Deep-learning-based portrait segmentation method, architecture and related apparatus | |
WO2021147551A1 (zh) | Point cloud data processing method, intelligent driving method, related apparatus, and electronic device | |
WO2023065665A1 (zh) | Image processing method, apparatus, device, storage medium and computer program product | |
WO2020207134A1 (zh) | Image processing method, apparatus, device, and computer-readable medium | |
CN113298931B (zh) | Object model reconstruction method and apparatus, terminal device and storage medium | |
CN116030259A (zh) | Multi-organ segmentation method and apparatus for abdominal CT images, and terminal device | |
CN112149662A (zh) | Multi-modal fusion saliency detection method based on dilated convolution blocks | |
Paulus et al. | Color cluster rotation | |
CN112333468B (zh) | Image processing method, apparatus, device and storage medium | |
AU2012268887A1 (en) | Saliency prediction method | |
US11797854B2 (en) | Image processing device, image processing method and object recognition system | |
CN116883770A (zh) | Depth estimation model training method and apparatus, electronic device and storage medium | |
CN114792370A (zh) | Whole-lung image segmentation method and apparatus, electronic device and storage medium | |
CN110223334B (zh) | Depth map acquisition method and apparatus | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| ENP | Entry into the national phase | Ref document number: 20207031573; Country of ref document: KR; Kind code of ref document: A |
| ENP | Entry into the national phase | Ref document number: 2020565957; Country of ref document: JP; Kind code of ref document: A |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19930049; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19930049; Country of ref document: EP; Kind code of ref document: A1 |