CN114219855A - Point cloud normal vector estimation method and device, computer equipment and storage medium - Google Patents

Point cloud normal vector estimation method and device, computer equipment and storage medium Download PDF

Info

Publication number
CN114219855A
Authority
CN
China
Prior art keywords
point cloud
point
coordinate
area
normal vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111365196.0A
Other languages
Chinese (zh)
Inventor
林安成
李俊
项羽升
苏天晴
茆胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Youxiang Zhilian Technology Co ltd
Original Assignee
Shenzhen Youxiang Zhilian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Youxiang Zhilian Technology Co ltd filed Critical Shenzhen Youxiang Zhilian Technology Co ltd
Priority claimed from application CN202111365196.0A
Publication of CN114219855A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Abstract

The application belongs to the technical field of computers and provides a point cloud normal vector estimation method and device, computer equipment, and a storage medium. The method for estimating the point cloud normal vector comprises the following steps: acquiring point cloud data and a color image corresponding to the point cloud data; associating each coordinate point in a target point cloud area in the point cloud data with the color image, and determining image feature information corresponding to each coordinate point in the target point cloud area; and fusing the coordinate information of each coordinate point in the target point cloud area with the corresponding image feature information, and determining the normal vector of the target point cloud area according to the fusion result. Because the method obtains the image feature information corresponding to each coordinate point in a point cloud area, fuses the coordinate information of each coordinate point with the corresponding image features, and then determines the normal vector of the point cloud area from the fusion result, it improves the estimation precision of the normal vector compared with fitting the point cloud area to a plane and taking the normal vector of the fitted plane as the normal vector of the point cloud area.

Description

Point cloud normal vector estimation method and device, computer equipment and storage medium
Technical Field
The application belongs to the technical field of computers, and particularly relates to a method and a device for estimating a point cloud normal vector, computer equipment and a storage medium.
Background
With the development of science and technology, artificial intelligence has gradually permeated daily life: driverless cars, indoor robots, and similar systems perceive their environment and reproduce human visual functions through computer vision technology. The processing of three-dimensional point cloud data is one of the important research directions of computer vision. Point cloud normal vector estimation is a basic task of three-dimensional point cloud processing, and accurate normal vector estimation can be used directly in tasks such as three-dimensional reconstruction, point cloud registration, and scene rendering. A point cloud is a set of three-dimensional coordinate points; each point in the cloud can be regarded as a coordinate point in space, and point clouds are generally obtained directly from spatial detection sensors such as lidar or binocular cameras. The normal vector of a point cloud is, for each of its three-dimensional coordinate points, the vector perpendicular to the surface of the real object at that point.
In existing point cloud data processing, point cloud normal vectors are generally estimated by directly fitting the point cloud in a local area of preset size to a plane, using a regression model based on principal component analysis, and then taking the calculated normal vector of that plane as the normal vector of the point cloud in the local area.
The precision of point cloud normal vectors estimated with this existing method is therefore low.
Disclosure of Invention
In view of this, embodiments of the present application provide a method and an apparatus for estimating a point cloud normal vector, a computer device, and a storage medium, which can solve the technical problem that an existing method for estimating a point cloud normal vector is low in accuracy of an estimated point cloud normal vector.
A first aspect of an embodiment of the present application provides a method for estimating a point cloud normal vector, including:
acquiring point cloud data acquired by first equipment and a color image acquired by second equipment and corresponding to the point cloud data;
associating each coordinate point in a target point cloud area in the point cloud data with the color image, and determining image characteristic information corresponding to each coordinate point in the target point cloud area; the target point cloud area is any one point cloud area in the point cloud data;
and fusing the coordinate information of each coordinate point in the target point cloud area with the corresponding image feature information, and determining the normal vector of the target point cloud area according to the fusion result.
In a possible implementation of the first aspect, the target point cloud area may be obtained by first acquiring the point cloud data and then generating a plurality of point cloud areas from it. For example, each coordinate point in the point cloud data may be used as a central point, and the coordinate points within a specified range around that central point may be searched, each central point together with the points in its specified range forming one point cloud area.
A second aspect of the embodiments of the present application provides an apparatus for estimating a point cloud normal vector, including:
the acquisition module is used for acquiring point cloud data acquired by first equipment and a color image acquired by second equipment and corresponding to the point cloud data;
the association module is used for associating each coordinate point in a target point cloud area in the point cloud data with the color image and determining image characteristic information corresponding to each coordinate point in the target point cloud area; the target point cloud area is any one point cloud area in the point cloud data;
and the normal vector determination module is used for fusing the coordinate information of each coordinate point in the target point cloud area with the corresponding image characteristic information and determining the normal vector of the target point cloud area according to a fusion result.
A third aspect of the embodiments of the present application provides a computer device comprising a memory and a processor, the memory storing a computer program executable on the processor; when the processor executes the computer program, it implements the steps of the method for estimating a point cloud normal vector described in the first aspect.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for estimating a point cloud normal vector described in the first aspect.
Compared with the prior art, the embodiments of the present application have the following advantages. By associating each point of a point cloud area in the point cloud data with the image corresponding to the point cloud data, the image feature information of each coordinate point in the point cloud area can be obtained; the coordinate information of each coordinate point in the point cloud area is then fused with its corresponding image features, and the normal vector of the point cloud area is determined according to the fusion result. When the normal vectors of all point cloud areas in the point cloud data have been determined, an estimation result for the normal vectors of the whole point cloud data is obtained. Compared with the prior-art approach of fitting a local area of the point cloud data to a plane and taking the normal vector of the fitted plane as the normal vector of that local area, this can effectively improve the accuracy of point cloud normal vector estimation.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is an application environment diagram of a method for estimating a point cloud normal vector according to an embodiment of the present application;
fig. 2 is a flowchart of a method for estimating a point cloud normal vector according to an embodiment of the present disclosure;
FIG. 3 is a schematic view of a scene point cloud collected by a point cloud collection device;
Fig. 4 is a flowchart of determining image feature information corresponding to each point in a cloud area of a target point according to an embodiment of the present application;
fig. 5 is a flowchart for fusing coordinate information of each point in a target point cloud region with image feature information corresponding to the point, and determining a normal vector of the target point cloud region according to a fusion result, according to an embodiment of the present application;
fig. 6 is a schematic diagram illustrating a process of encoding coordinate information of a point in a point cloud area into high-dimensional feature information according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of determining a normal vector of a target point cloud region according to the stitching fusion vector provided in the embodiment of the present application;
FIG. 8 is a flowchart for obtaining training data of a normal vector estimation model according to an embodiment of the present disclosure;
fig. 9 is a projection view of sample point cloud data on a camera plane according to an embodiment of the present disclosure;
FIG. 10 is a projection diagram of a virtual point cloud obtained in the embodiment of the present application;
fig. 11 is a block diagram of a device for estimating a point cloud normal vector according to an embodiment of the present disclosure;
fig. 12 is a block diagram of an internal structure of a computer device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Fig. 1 is a diagram of the application environment of a method for estimating a point cloud normal vector according to an embodiment of the present disclosure. As shown in fig. 1, the application environment includes a computer device 110, a first device 120, and a second device 130.
The computer device 110 may be an independent physical server or terminal, a server cluster formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud computing, cloud databases, cloud storage, and CDN services.
The first device 120 is a point cloud collection device, which may be, but is not limited to, a radar device. First device 120 may be communicatively coupled to computer device 110.
The second device 130 may be an image capture device, which may be, but is not limited to, a camera. The second device 130 may be communicatively coupled to a computer device.
As shown in fig. 2, in an embodiment, a method for estimating a point cloud normal vector is provided, and this embodiment is mainly illustrated by applying the method to the computer device 110 in fig. 1. A method for estimating a point cloud normal vector specifically comprises the following steps:
step S202, point cloud data collected by first equipment and a color image corresponding to the point cloud data collected by second equipment are obtained.
In the embodiments of the present application, the point cloud data refers to data acquired by a point cloud acquisition device, and is a set containing a plurality of three-dimensional coordinate points. Fig. 3 shows the point cloud image of a scene collected by the point cloud acquisition device; the elliptical area at the upper right corner of the image is a partial enlargement of the corresponding position in the image, and all coordinate points collected in the scene constitute the point cloud data of the scene. The color image corresponding to the point cloud data is a color image, acquired by the image acquisition device, of the scene to which the point cloud data corresponds. Each coordinate point in the point cloud data acquired by the point cloud acquisition device corresponds to the spatial coordinate information, at that coordinate point, of the objects in the color image.
Step S204, associating each coordinate point in a target point cloud area in the point cloud data with the color image, and determining image characteristic information corresponding to each coordinate point in the target point cloud area; the target point cloud area is any one point cloud area in the point cloud data.
In the embodiments of the present application, the target point cloud area may be any one point cloud area in the point cloud data, a point cloud area being a local area containing a plurality of coordinate points from the coordinate point set of the point cloud data. By dividing the point cloud data into a plurality of point cloud areas, the normal vector of each coordinate point in the point cloud data can be estimated by calculating the normal vector of each point cloud area. In this estimation method, whichever point cloud area's normal vector is currently being calculated is the target point cloud area.
In the embodiments of the present application, before each coordinate point in the target point cloud area is associated with the color image corresponding to the point cloud data, the point cloud data is divided into a plurality of point cloud areas. Generating the plurality of point cloud areas from the point cloud data includes: taking each coordinate point in the point cloud data in turn as a central point, searching for coordinate points within a spherical range of a specified radius, and forming a point cloud area from each central point together with the coordinate points in its spherical range.
In the embodiments of the present application, one point of the coordinate point set of the point cloud data is taken as a central point, a spherical search with the specified radius is performed around that central point, and the coordinate points within the spherical range form one point cloud area; when each coordinate point of the point cloud data is taken as a central point in turn, a plurality of point cloud areas can be generated from the point cloud data. For example, on the KITTI data the specified radius may be chosen as the horizontal span multiplied by 0.01: when the point cloud is processed frame by frame, suppose that for the current frame (one point cloud) the most distal point on the left side of the radar is A and the most distal point on its right side is B. The horizontal span then refers to the lateral distance between A and B; if, specifically, this span is 30, the specified radius is 30 × 0.01 = 0.3. The number of coordinate points contained in each of the point cloud areas generated from the point cloud data may be a single specified value, whose specific numerical value is not limited in this embodiment; for example, the specified value may be 20. So that every point cloud area generated from the point cloud data has this number of points, after the spherical search the number of coordinate points found within the spherical range may be compared with the specified value, and when fewer points than the specified value are found, points whose coordinates coincide with those of the central point may be supplemented into the point cloud area until the number of points it contains equals the specified value; supplementing a point consistent with the central point coordinates means adding to the point cloud area a point assigned the coordinates of the central point. Taking the specified value 20 as an example, since point cloud data carries numerical information in three spatial dimensions, each target point cloud area can be described by a 20 × 3 matrix P. The whole local patch is then normalized with the geometric center of the target point cloud area and the radius r, so that the values are stably distributed around 0; after normalization, the 20 × 3 matrix storing the original coordinates is converted into another 20 × 3 matrix whose element values are stably distributed around 0. The matrix formula of the target point cloud area can then be expressed as:
norm(P)[row] = (P[row] − c) / r,   c = (1/20) Σ_{k=1}^{20} P[k]
where P is the matrix expression of the target point cloud area, row is the row index of the matrix, norm denotes the standardization/normalization of the data, c is the geometric center of the patch, r is the specified radius, and 20 is the specified number of points in the target point cloud area.
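As a concrete illustration of the area generation and normalization just described, the following is a minimal sketch in Python with NumPy. The function name, the padding and truncation details, and the use of the x-extent of the frame as the horizontal span are illustrative assumptions, not taken from the patent text.

```python
import numpy as np

def build_patch(points: np.ndarray, center_idx: int, radius: float, n: int = 20) -> np.ndarray:
    """Spherical search around one central point, padded or truncated to n points."""
    center = points[center_idx]
    dists = np.linalg.norm(points - center, axis=1)
    neighbors = points[dists <= radius]
    if len(neighbors) < n:
        # Pad with copies of the central point so every area holds exactly n points.
        pad = np.tile(center, (n - len(neighbors), 1))
        neighbors = np.vstack([neighbors, pad])
    else:
        neighbors = neighbors[:n]
    # Normalize with the geometric center and the radius r, so the 20 x 3 matrix
    # has element values stably distributed around 0.
    return (neighbors - neighbors.mean(axis=0)) / radius

cloud = np.random.rand(1000, 3) * 30.0          # stand-in for one frame of points
span = cloud[:, 0].max() - cloud[:, 0].min()    # horizontal span, e.g. about 30
P = build_patch(cloud, center_idx=0, radius=span * 0.01)  # one 20 x 3 matrix P
```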
In the embodiments of the present application, the feature information of an image may also be called the semantic information of the image; both terms denote information that can represent features of the image such as its color features, texture features, or shape features. For example, image feature information and image semantic information may both include the size information, pixel information, and so on of the image, but are not limited to these. Extracting image feature information is extracting the semantic information of the image; the extraction method is not limited in this embodiment, and the information may be obtained through a network model such as U-Net, ResNet (a residual network), or VGG (Visual Geometry Group network).
In this embodiment of the application, a specific method for associating each coordinate point in the target point cloud region with the color image corresponding to the point cloud data to determine the image feature information corresponding to each coordinate point in the target point cloud region is not limited, for example, as shown in fig. 4, step S204 may specifically include the following steps:
step S402, projecting each coordinate point in the target cloud area into a camera imaging plane where the color image is located to obtain a projection point corresponding to each coordinate point in the target cloud area.
In the embodiment of the present application, the projection of each point in the target point cloud region onto the camera imaging plane where the image is located may be implemented by a coordinate conversion method, for example, a conversion matrix T from a radar coordinate system to a camera coordinate system and an internal parameter matrix of a camera may be used to project the spatial coordinates of 20 points in each target point cloud region onto the camera imaging plane, so as to implement the association between each coordinate point in the target point cloud region and the color image corresponding to the point cloud data. The calculation formula of the obtained projection point is specifically as follows:
P1 = K · T · P2
where P1 is the coordinate of the projection point, P2 is the coordinate of the point in the target point cloud area corresponding to that projection point, K is the internal parameter matrix of the camera, and T is the transformation matrix from the radar coordinate system to the camera coordinate system.
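A short sketch of this projection step in Python with NumPy follows, assuming a 3 × 3 internal parameter matrix K and a 4 × 4 rigid transformation T from the radar coordinate system to the camera coordinate system; the helper name is hypothetical.

```python
import numpy as np

def project_to_image(P2: np.ndarray, K: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Project N x 3 radar-frame points to pixel coordinates via P1 = K . T . P2."""
    homo = np.hstack([P2, np.ones((len(P2), 1))])   # N x 4 homogeneous coordinates
    cam = (T @ homo.T).T[:, :3]                     # radar frame -> camera frame
    pix = (K @ cam.T).T                             # camera frame -> imaging plane
    return pix[:, :2] / pix[:, 2:3]                 # perspective division -> (u, v)
```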
Step S404, extracting image feature information corresponding to the projection point position in the color image, to obtain image feature information of each coordinate point in the target point cloud region.
In the embodiments of the present application, before each coordinate point in the target point cloud area is associated with the color image corresponding to the point cloud data, the image feature information of that color image may be obtained first. For example, the semantic information of the color image corresponding to the point cloud data may be obtained through a U-Net network model, yielding an image feature map of the color image, from which the image feature information of each point in the image can then be obtained. The U-Net network model comprises an encoder-decoder framework and residual connection operations, and the length and width of its output features are consistent with those of the input features, which guarantees that the obtained image feature information can subsequently be associated with the coordinate information. Because it uses a fully convolutional neural network, the U-Net model can also accept input of any size, which facilitates future generalization to other scenes. Moreover, each position on the image feature map output by the U-Net network model is associated with the pixel region at the same position of the color image corresponding to the point cloud data; for example, the image feature information value at coordinate position (10, 20) on the image feature map is necessarily calculated from the region near coordinate position (10, 20) in the corresponding image. This way of acquiring the image feature information of the image is consistent with the idea of processing the point cloud data through target point cloud areas: both use a uniform local idea.
In the embodiments of the present application, once the image feature information of the color image corresponding to the point cloud data is known, after the projection point corresponding to each coordinate point in the target point cloud area has been obtained, the image feature information corresponding to each projection point can be read out, giving the image feature information of each coordinate point in the target point cloud area.
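Assuming a feature map whose height and width match the color image, as the U-Net output above guarantees, the per-point feature lookup can be sketched as below; nearest-pixel rounding and all names are illustrative assumptions.

```python
import torch

def sample_point_features(feat_map: torch.Tensor, pix: torch.Tensor) -> torch.Tensor:
    """feat_map: [C, H, W] image feature map; pix: [N, 2] projected (u, v) coords."""
    _, h, w = feat_map.shape
    u = pix[:, 0].round().long().clamp(0, w - 1)    # column index, clipped to image
    v = pix[:, 1].round().long().clamp(0, h - 1)    # row index, clipped to image
    return feat_map[:, v, u].T                      # [N, C]: one feature per point
```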
And S206, fusing the coordinate information of each point in the target point cloud area with the corresponding image characteristic information, and determining a normal vector of the target point cloud area according to a fusion result.
In the embodiment of the present application, the coordinate information of each coordinate point in the cloud area of the target point is three-dimensional coordinate information of each point in the spatial coordinate system, and the three-dimensional coordinate information may be directly obtained from the point cloud data collected by the point cloud data collecting device 120. In this embodiment, a specific fusion method for fusing coordinate information of each point in the target point cloud region with image feature information corresponding to the coordinate information and a method for determining a normal vector of the target point cloud region according to a fusion result are not limited, for example, as shown in fig. 5, step S206 may specifically include the following steps:
and S502, splicing the image characteristic information of each coordinate point in the cloud area of the target point with the coordinate information of the coordinate point to obtain a first splicing vector corresponding to each coordinate point.
In the embodiments of the present application, take as an example image feature information that is a vector formed by a length value, a width value, and the red (R), green (G), and blue (B) color values; the coordinate information is a vector formed by the point's coordinate values in the three spatial directions, and the first stitching vector corresponding to each point is then the vector formed by the length value, the width value, the R, G, and B values, and the three spatial coordinate values. Of course, before the image feature information of each point in the target point cloud area is stitched with the coordinate information of that point, the coordinate information of each coordinate point may first be encoded into a high-dimensional feature, so that the coordinate information encoded as a high-dimensional feature is stitched and fused with the image feature information. For example, as shown in fig. 6, to encode the coordinate information of the points in the point cloud area into high-dimensional feature information, the three-dimensional coordinate information of the point cloud may be passed sequentially through three one-dimensional convolutional networks (Conv1d), which apply the same multilayer-perceptron operation to each of the 20 input three-dimensional vectors and finally map the vectors into a 256-dimensional space, thereby encoding the 3-dimensional coordinate information into high-dimensional feature information.
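The encoding of fig. 6 can be sketched as a shared per-point MLP built from three Conv1d layers mapping 3 input channels to 256 output channels; the intermediate widths of 64 and 128 are assumptions, since only the 256-dimensional output is fixed above.

```python
import torch
import torch.nn as nn

coord_encoder = nn.Sequential(
    nn.Conv1d(3, 64, kernel_size=1), nn.BatchNorm1d(64), nn.ReLU(),      # 3 -> 64
    nn.Conv1d(64, 128, kernel_size=1), nn.BatchNorm1d(128), nn.ReLU(),   # 64 -> 128
    nn.Conv1d(128, 256, kernel_size=1), nn.BatchNorm1d(256), nn.ReLU(),  # 128 -> 256
)

patches = torch.rand(2, 3, 20)       # two areas: 3 coordinate channels x 20 points
high_dim = coord_encoder(patches)    # [2, 256, 20]: a 256-D feature for each point
```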
Step S504, performing a mean pooling operation on the first stitching vectors corresponding to all the coordinate points in the target point cloud area, to obtain the second stitching vector corresponding to the target point cloud area.
In this embodiment, one point cloud area contains 20 coordinate points, each of which yields one corresponding first stitching vector. This embodiment does not limit the method of averaging the first stitching vectors corresponding to all the coordinate points in the target point cloud area; for example, the 20 first stitching vectors corresponding to the 20 coordinate points may be passed through a one-dimensional convolution to transform the length of each first stitching vector, and the 20 vectors may then be compressed into a single vector by a mean pooling operation, giving the second stitching vector.
Step S506, stitching the first stitching vector of each coordinate point in the target point cloud area with the second stitching vector, or stitching the image feature information of each coordinate point in the target point cloud area with the second stitching vector, or stitching the coordinate information of each coordinate point in the target point cloud area with the second stitching vector, to determine the stitching fusion vector.
In the embodiments of the present application, the fusion result obtained by fusing the coordinate information of each coordinate point in the target point cloud area with its corresponding image feature information is a stitching fusion vector. This may be the vector obtained by stitching the first stitching vector of each coordinate point with the second stitching vector, the vector obtained by stitching the image feature information of each coordinate point with the second stitching vector, or the vector obtained by stitching the coordinate information of each coordinate point with the second stitching vector. Preferably, the vector obtained by stitching the first and second stitching vectors is used as the stitching fusion vector: since the first stitching vector contains both the image feature information and the coordinate information corresponding to each coordinate point, this choice is equivalent to fusing a point's image feature information and coordinate information together with the second stitching vector. Calculating the normal vector of the target point cloud area from the stitching fusion vector obtained in this way therefore further improves the calculation accuracy of the normal vector.
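Steps S504 and S506 together can be sketched as follows, assuming a 320-channel first stitching vector (256-D encoded coordinates plus 64-D image features) and a 1024-channel second stitching vector, which matches the [20, 1024 + 64 + 256] fusion size quoted below; the widths are otherwise assumptions.

```python
import torch
import torch.nn as nn

first_vecs = torch.rand(2, 320, 20)             # per-point first stitching vectors
to_patch = nn.Conv1d(320, 1024, kernel_size=1)  # transform each vector's length
second_vec = to_patch(first_vecs).mean(dim=2)   # mean pooling over the 20 points
fusion = torch.cat(
    [first_vecs,                                   # [2, 320, 20] per-point part
     second_vec.unsqueeze(2).expand(-1, -1, 20)],  # broadcast to every point
    dim=1)                                         # [2, 1344, 20] fusion vectors
```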
Step S508, determining the normal vector of the target point cloud area according to the stitching fusion vector.
In the embodiments of the present application, the specific method of determining the normal vector of the target point cloud area from the stitching fusion vector is not limited; for example, the normal vector of the target point cloud area may be determined from the stitching fusion vector by a normal vector estimation network model.
The specific structure of the normal vector estimation network model is not limited in this embodiment. For example, the model may comprise two one-dimensional convolution networks, a maximum pooling function, and three fully connected neural network layers. Fig. 7 shows a flow diagram of determining the normal vector of the target point cloud area from the stitching fusion vector: the stitching fusion vector is processed in turn by two one-dimensional convolution networks (Conv1d), the 20 vectors are then reduced to a single vector by a maximum pooling function (Max Pool), and finally regression is completed by three fully connected neural network layers (Linear) to obtain a three-dimensional vector, which is the normal vector of the target point cloud area. As shown in fig. 7, taking n = 20 as an example, the stitching fusion feature vector is of size [20, 1024 + 64 + 256], that is, 20 vectors of length 1344. A one-dimensional convolution network with stride 1 transforms the [20, 1344] features to size [20, 512]; a further one-dimensional convolution network with stride 1 transforms the [20, 512] features to [20, 1024]; maximum pooling then reduces the [20, 1024] features to a single 1024-dimensional vector; a fully connected neural network transforms the 1024-dimensional vector to a 512-dimensional vector; and two further fully connected neural networks transform the feature vector to 256 and finally 3 dimensions, yielding the normal vector of the target point cloud area.
In the embodiments of the present application, during point cloud normal vector estimation, after the data pass through each one-dimensional convolution network and each fully connected neural network layer (except the last fully connected layer), a Batch Normalization and a rectified linear unit (ReLU) operation are applied once. Batch Normalization stabilizes training and improves speed, and applying the linear rectification function after Batch Normalization enhances the nonlinear capability of the model.
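Following the layer sizes read from fig. 7 (two one-dimensional convolutions, max pooling, three fully connected layers, with Batch Normalization and ReLU after every layer except the last), the regression head can be sketched as below; this is an illustrative reading of the figure, not the patent's reference implementation.

```python
import torch
import torch.nn as nn

class NormalHead(nn.Module):
    """Regression head: [B, 1344, 20] stitching fusion vectors -> [B, 3] normals."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv1d(1344, 512, 1), nn.BatchNorm1d(512), nn.ReLU())
        self.conv2 = nn.Sequential(nn.Conv1d(512, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU())
        self.fc1 = nn.Sequential(nn.Linear(1024, 512), nn.BatchNorm1d(512), nn.ReLU())
        self.fc2 = nn.Sequential(nn.Linear(512, 256), nn.BatchNorm1d(256), nn.ReLU())
        self.fc3 = nn.Linear(256, 3)              # last layer: no BatchNorm or ReLU

    def forward(self, fused: torch.Tensor) -> torch.Tensor:
        x = self.conv2(self.conv1(fused))         # [B, 1344, 20] -> [B, 1024, 20]
        x = torch.max(x, dim=2).values            # max pool the 20 vectors into one
        return self.fc3(self.fc2(self.fc1(x)))    # 1024 -> 512 -> 256 -> 3

normals = NormalHead()(torch.rand(4, 1344, 20))  # e.g. a batch of 4 areas
```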
According to the method for estimating the point cloud normal vector provided above, each point in a point cloud area of the point cloud data is associated with the image corresponding to the point cloud data, so that the image feature information of each coordinate point in the point cloud area can be obtained; the coordinate information of each coordinate point is then fused with its corresponding image features, and the normal vector of the point cloud area is determined from the fusion result. When the normal vectors of all point cloud areas in the point cloud data have been determined, an estimation result for the normal vectors of the whole point cloud data is obtained. Compared with the prior-art approach of fitting a local area of the point cloud data to a plane and taking the normal vector of the fitted plane as the normal vector of that local area, this effectively improves the accuracy of point cloud normal vector estimation.
In another embodiment of the present application, as shown in fig. 8, the normal vector estimation network model is trained by using training data, and the training data is obtained by:
step S602, acquiring the number and distribution area of sample projection points of each coordinate point in the sample point cloud data projected on the camera imaging plane.
In this embodiment, the sample point cloud data is point cloud data used to make training sample data, and the specific method of acquiring the number and distribution area of the sample projection points of each coordinate point of the sample point cloud data projected on the camera imaging plane is not limited. For example, a projection diagram of the point cloud may first be obtained by projecting each coordinate point of the sample point cloud data onto the camera imaging plane, as shown in fig. 9; the number of sample projection points in the diagram and the distribution area of the sample projection points in the diagram are then acquired, where the distribution area may be determined by the coordinates of the leftmost, rightmost, uppermost, and lowermost projection points in the projection diagram.
Step S604, determining a virtual point cloud projection diagram according to the number and distribution area of the sample projection points, the virtual point cloud projection diagram being a two-dimensional diagram in which the sample projection points are uniformly distributed.
In the embodiments of the present application, as shown in fig. 10, the distribution area of the projection points in the virtual point cloud projection diagram may be determined from the distribution area of the sample projection points in the camera imaging plane. For example, the vertical coordinate of the uppermost projection point of the distribution area may be taken as the upper boundary, so that the upper, left, right, and lower boundaries together enclose a regular distribution area for the projection points in the virtual point cloud projection diagram; the same number of projection points as the sample projection points is then distributed uniformly within this regular area, giving the virtual point cloud projection diagram.
Step S606, determining the depth value of each sample projection point in the virtual point cloud projection image.
In the embodiments of the present application, the depth value refers to a distance value from the image capturing device to each point in the actual scene. The present embodiment does not limit the specific method for determining the depth values of each point in the virtual point cloud projection map, for example, a depth map of an image corresponding to the sample point cloud data may be obtained, and then the depth values corresponding to each projection point in the virtual point cloud projection map may be extracted from the depth map, where the depth map of the image corresponding to the sample point cloud data may be directly acquired by the image acquisition device.
Step S608, projecting each sample projection point in the virtual point cloud projection image to a spatial coordinate system according to the depth value, determining a three-dimensional coordinate of each sample projection point in the virtual point cloud projection image, and using the obtained three-dimensional coordinate corresponding to each sample projection point as the training data.
In the embodiments of the present application, determining the depth value of each point in the virtual point cloud projection diagram is equivalent to determining that point's coordinate value in one direction of the spatial coordinate system; projecting each point of the virtual point cloud projection diagram into the spatial coordinate system then determines its coordinate values in the other two directions, that is, the coordinates in the spatial coordinate system of the point corresponding to each projection point in the virtual point cloud projection diagram. These three-dimensional coordinates are used to train the point cloud normal vector estimation model. Consistent with the forward projection formula P1 = K · T · P2 above, the calculation for projecting each point of the virtual point cloud projection diagram into the spatial coordinate system is: P21 = T⁻¹ · K⁻¹ · P11, where K is the internal parameter matrix of the camera, T is the transformation matrix from the radar coordinate system to the camera coordinate system, P21 is the coordinate of the projection point in the spatial coordinate system, and P11 is the coordinate of the projection point (scaled by its depth value) in the camera coordinate system.
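The training-data pipeline of steps S602 to S608 can be sketched end to end as follows, assuming NumPy, an invertible 4 × 4 transformation T, and a depth map aligned with the camera image; the grid construction and all names are assumptions for illustration.

```python
import numpy as np

def make_virtual_points(sample_pix: np.ndarray, depth_map: np.ndarray,
                        K: np.ndarray, T: np.ndarray) -> np.ndarray:
    """sample_pix: N x 2 real projections; returns N x 3 radar-frame coordinates."""
    n = len(sample_pix)
    u0, v0 = sample_pix.min(axis=0)               # distribution area boundaries
    u1, v1 = sample_pix.max(axis=0)
    side = int(np.ceil(np.sqrt(n)))               # near-uniform grid of ~n points
    uu, vv = np.meshgrid(np.linspace(u0, u1, side), np.linspace(v0, v1, side))
    grid = np.stack([uu.ravel(), vv.ravel()], axis=1)[:n]
    # Read each virtual projection point's depth value from the depth map.
    z = depth_map[grid[:, 1].astype(int), grid[:, 0].astype(int)]
    # Back-project: invert P1 = K . T . P2, i.e. apply T^-1 K^-1 to the scaled pixel.
    cam = np.linalg.inv(K) @ (np.hstack([grid, np.ones((n, 1))]) * z[:, None]).T
    homo = np.vstack([cam, np.ones((1, n))])      # homogeneous camera coordinates
    return (np.linalg.inv(T) @ homo)[:3].T        # training coordinates in 3-D space
```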
According to this method of obtaining training data for the normal vector estimation model, the training data are generated through simulation, so the massive simulated three-dimensional models that already exist in the autonomous-driving and robotics communities can be used, and high-quality training data can be obtained without manual annotation. This effectively improves the training efficiency of the normal vector estimation model and, in turn, the processing efficiency of the point cloud data.
Fig. 11 shows a block diagram of a device for estimating a point cloud normal vector provided in an embodiment of the present application, which corresponds to the method for estimating a point cloud normal vector in the above embodiment, and only shows portions related to the embodiment of the present application for convenience of description. The estimation apparatus of the point cloud normal vector illustrated in fig. 11 may be an execution subject of the estimation method of the point cloud normal vector provided in the foregoing embodiment. The estimation device of the point cloud normal vector can be integrated into the computer device 110.
Referring to fig. 11, the device for estimating the normal vector of the point cloud includes: an obtaining module 710, an associating module 720 and a normal vector determining module 730.
The acquiring module 710 is configured to acquire point cloud data acquired by a first device and a color image acquired by a second device and corresponding to the point cloud data.
In the embodiments of the present application, the point cloud data refers to data acquired by a point cloud acquisition device, and is a set containing a plurality of three-dimensional coordinate points. Fig. 3 shows the point cloud image of a scene collected by the point cloud acquisition device; the elliptical area at the upper right corner of the image is a partial enlargement of the corresponding position in the image, and all coordinate points collected in the scene constitute the point cloud data of the scene. The color image corresponding to the point cloud data is a color image, acquired by the image acquisition device, of the scene to which the point cloud data corresponds. Each coordinate point in the point cloud data acquired by the point cloud acquisition device corresponds to the spatial coordinate information, at that coordinate point, of the objects in the color image.
The association module 720 is configured to associate each coordinate point in a target point cloud region in the point cloud data with the color image, and determine image feature information corresponding to each coordinate point in the target point cloud region; the target point cloud area is any one point cloud area in the point cloud data.
In the embodiments of the present application, the target point cloud area may be any one point cloud area in the point cloud data, a point cloud area being a local area containing a plurality of coordinate points from the coordinate point set of the point cloud data. By dividing the point cloud data into a plurality of point cloud areas, the normal vector of each coordinate point in the point cloud data can be estimated by calculating the normal vector of each point cloud area. In this estimation method, whichever point cloud area's normal vector is currently being calculated is the target point cloud area.
In the embodiments of the present application, the feature information of an image may also be called the semantic information of the image; both terms denote information that can represent features of the image such as its color features, texture features, or shape features. For example, image feature information and image semantic information may both include the size information, pixel information, and so on of the image, but are not limited to these. Extracting image feature information is extracting the semantic information of the image; the extraction method is not limited in this embodiment, and the information may be obtained through a network model such as U-Net, ResNet (a residual network), or VGG (Visual Geometry Group network). This embodiment also does not limit the specific method of associating each coordinate point in the target point cloud area with the color image corresponding to the point cloud data to determine the image feature information corresponding to each coordinate point in the target point cloud area.
The normal vector determining module 730 is configured to fuse the coordinate information of each coordinate point in the target point cloud area with the image feature information corresponding to the coordinate point, and determine a normal vector of the target point cloud area according to a fusion result.
In the embodiment of the application, the coordinate information of each coordinate point in the cloud area of the target point is three-dimensional coordinate information of each coordinate point in a spatial coordinate system, and the three-dimensional coordinate information can be directly acquired from point cloud data acquired by a point cloud data acquisition device. In this embodiment, a specific fusion method for fusing the coordinate information of each coordinate point in the target point cloud region with the image feature information corresponding thereto and a method for determining the normal vector of the target point cloud region according to the fusion result are not limited.
According to the device for estimating the point cloud normal vector provided in the embodiments of the present application, by providing the acquisition module 710, the association module 720, and the normal vector determination module 730, each point in a point cloud area of the point cloud data is associated with the image corresponding to the point cloud data, so that the image feature information of each coordinate point in the point cloud area can be obtained; the coordinate information of each coordinate point is then fused with its corresponding image features, and the normal vector of the point cloud area is determined from the fusion result. When the normal vectors of all point cloud areas in the point cloud data have been determined, an estimation result for the normal vectors of the whole point cloud data is obtained. Compared with the prior-art approach of fitting a local area of the point cloud data to a plane and taking the normal vector of the fitted plane as the normal vector of that local area, this effectively improves the accuracy of point cloud normal vector estimation.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
It should also be understood that fig. 9-10 in the drawings of the present application relate to a point cloud reference diagram and a projection diagram of a point cloud in a camera plane coordinate system, and are only reference diagrams for assisting understanding of the present application.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to" determining "or" in response to detecting ". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance. It will also be understood that, although the terms first, second, etc. may be used herein to describe various elements in some embodiments of the application, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first table may be named a second table, and similarly, a second table may be named a first table, without departing from the scope of various described embodiments. The first table and the second table are both tables, but they are not the same table.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Fig. 12 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 12, the computer device 7 of this embodiment includes at least one processor 70 (only one is shown in fig. 12) and a memory 71, the memory 71 storing a computer program 72 executable on the processor 70. The processor 70, when executing the computer program 72, implements the steps in the above embodiments of the method for estimating a point cloud normal vector, such as steps S202 to S206 shown in fig. 2. Alternatively, the processor 70, when executing the computer program 72, implements the functions of the modules/units in the above device embodiments, such as the functions of the modules 710 to 730 shown in fig. 11.
The computer device 7 may be a desktop computer, a notebook, a palmtop computer, a cloud server, or another computing device. The computer device may include, but is not limited to, the processor 70 and the memory 71. It will be appreciated by those skilled in the art that fig. 12 is merely an example of the computer device 7 and does not constitute a limitation of it; the device may include more or fewer components than shown, combine certain components, or use different components. For example, the computer device may also include input and output devices, a network access device, a bus, and so on.
The Processor 70 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 71 may in some embodiments be an internal storage unit of the computer device 7, such as a hard disk or memory of the computer device 7. The memory 71 may also be an external storage device of the computer device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash memory card (Flash Card) provided on the computer device 7. Further, the memory 71 may include both an internal storage unit and an external storage device of the computer device 7. The memory 71 is used to store an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 71 may also be used to temporarily store data that has been transmitted or is to be transmitted.
In addition, functional modules in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The embodiments of the present application further provide a computer device comprising at least one memory, at least one processor, and a computer program stored in the at least one memory and executable on the at least one processor; when the processor executes the computer program, the computer device implements the steps of any of the above methods for estimating a point cloud normal vector.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application provide a computer program product which, when run on a computer device, enables the computer device to implement the steps of the above embodiments of the method for estimating a point cloud normal vector.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the embodiments of the method for estimating a normal vector of a point cloud may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application, and are intended to be included within the scope of the present application.

Claims (10)

1. A method for estimating a point cloud normal vector, characterized by comprising the following steps:
acquiring point cloud data collected by a first device, and a color image that is collected by a second device and corresponds to the point cloud data;
associating each coordinate point in a target point cloud area in the point cloud data with the color image, and determining image characteristic information corresponding to each coordinate point in the target point cloud area; the target point cloud area is any one point cloud area in the point cloud data;
and fusing the coordinate information of each coordinate point in the target point cloud area with the corresponding image characteristic information, and determining the normal vector of the target point cloud area according to the fusion result.
2. The method of claim 1, wherein the associating each coordinate point in a target point cloud area in the point cloud data with the color image and determining the image characteristic information corresponding to each coordinate point in the target point cloud area comprises:
projecting each coordinate point in the target point cloud area onto the camera imaging plane where the color image is located, to obtain a projection point corresponding to each coordinate point in the target point cloud area;
and extracting the image characteristic information at the position of each projection point in the color image, to obtain the image characteristic information of each coordinate point in the target point cloud area.
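For illustration only: the projection in claim 2 can be realized with a standard pinhole camera model, p = K(Rx + t), followed by a per-pixel lookup in a feature map computed from the color image. In the sketch below the intrinsic matrix K, the extrinsics R and t, the feature map, and the nearest-neighbour lookup are all assumptions; the claim does not prescribe them.

```python
import numpy as np

def project_points(points, K, R, t):
    """Project N x 3 points from the point cloud frame onto the camera
    imaging plane with a pinhole model: p = K (R x + t)."""
    cam = points @ R.T + t           # transform into the camera frame
    uv = cam @ K.T                   # apply the intrinsic matrix
    return uv[:, :2] / uv[:, 2:3]    # perspective division -> pixel coordinates

def sample_features(feature_map, uv):
    """Extract the image characteristic information at each projection point.
    feature_map: H x W x C per-pixel features derived from the color image;
    uv: N x 2 pixel coordinates. Nearest-neighbour lookup is used here;
    bilinear sampling is a common alternative."""
    h, w = feature_map.shape[:2]
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    return feature_map[v, u]         # N x C, one feature per coordinate point
```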
3. The method of claim 1, wherein the fusing the coordinate information of each coordinate point in the target point cloud area with the corresponding image characteristic information and determining the normal vector of the target point cloud area according to the fusion result comprises:
splicing the image characteristic information of each coordinate point in the target point cloud area with the coordinate information of that coordinate point, to obtain a first splicing vector corresponding to each coordinate point;
performing a mean pooling operation on the first splicing vectors corresponding to all coordinate points in the target point cloud area, to obtain a second splicing vector corresponding to the target point cloud area;
splicing the first splicing vector of each coordinate point in the target point cloud area with the second splicing vector, or splicing the image characteristic information of each coordinate point in the target point cloud area with the second splicing vector, or splicing the coordinate information of each coordinate point in the target point cloud area with the second splicing vector, to determine a splicing fusion vector;
and determining the normal vector of the target point cloud area according to the splicing fusion vector.
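A minimal NumPy sketch of the splicing and pooling in claim 3, assuming N coordinate points with 3-D coordinate information and C-dimensional image characteristic information each; only the first of the three splicing variants (first splicing vector with second splicing vector) is spelled out.

```python
import numpy as np

def fuse(coords, feats):
    """coords: N x 3 coordinates of one point cloud area; feats: N x C image
    characteristic information per point. Returns the splicing fusion vectors."""
    first = np.concatenate([feats, coords], axis=1)   # first splicing vectors, N x (C+3)
    second = first.mean(axis=0)                       # mean pooling -> second splicing vector, (C+3,)
    tiled = np.broadcast_to(second, first.shape)      # repeat it for every coordinate point
    # claim 3 also allows splicing feats or coords with the second vector instead
    return np.concatenate([first, tiled], axis=1)     # N x 2(C+3)
```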
4. The method of claim 3, wherein the determining the normal vector of the target point cloud area according to the splicing fusion vector comprises:
and determining the normal vector of the target point cloud area through a normal vector estimation network model according to the splicing fusion vector.
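Claim 4 leaves the normal vector estimation network model unspecified. The sketch below is one plausible shape under stated assumptions: a PointNet-style shared MLP over the splicing fusion vectors, max pooling over the area, and a unit-normalised 3-D regression head; the layer widths and pooling choice are illustrative, not taken from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalEstimationNet(nn.Module):
    """Hypothetical regression head: per-point MLP over the splicing fusion
    vectors, pooling over the area, then a 3-D output normalised to unit
    length. All widths are arbitrary choices."""
    def __init__(self, in_dim):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 3))

    def forward(self, fused):              # fused: B x N x in_dim
        x = self.point_mlp(fused)          # B x N x 256
        x = x.max(dim=1).values            # pool over the area's points
        return F.normalize(self.head(x), dim=-1)  # one unit normal per area
```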
5. The method for estimating the point cloud normal vector according to claim 4, wherein the normal vector estimation network model is trained using training data, and the training data is obtained by:
acquiring the number and distribution area of the sample projection points obtained by projecting each coordinate point in sample point cloud data onto a camera imaging plane;
determining a virtual point cloud projection image according to the number and distribution area of the sample projection points, wherein the virtual point cloud projection image is a two-dimensional image in which the sample projection points are uniformly distributed;
determining the depth value of each sample projection point in the virtual point cloud projection image;
and projecting each sample projection point in the virtual point cloud projection image into a space coordinate system according to its depth value, determining the three-dimensional coordinates of each sample projection point in the virtual point cloud projection image, and taking the obtained three-dimensional coordinates corresponding to each sample projection point as the training data.
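A sketch of the final back-projection step in claim 5, assuming a pinhole intrinsic matrix K (not specified in the claim): each sample projection point (u, v) with depth d is lifted into the space coordinate system as X = d · K⁻¹ [u, v, 1]ᵀ.

```python
import numpy as np

def backproject(uv, depth, K):
    """uv: N x 2 pixel coordinates of the sample projection points on the
    virtual point cloud projection image; depth: length-N depth values;
    K: 3 x 3 camera intrinsics (assumed). Returns the N x 3 three-dimensional
    coordinates used as training data."""
    ones = np.ones((uv.shape[0], 1))
    pix = np.concatenate([uv, ones], axis=1)  # homogeneous pixel coordinates
    rays = pix @ np.linalg.inv(K).T           # viewing ray per projection point
    return rays * depth[:, None]              # scale each ray by its depth
```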
6. The method of claim 1, wherein before the associating each coordinate point in a target point cloud area in the point cloud data with the color image, the method further comprises: generating a plurality of point cloud areas from the point cloud data;
wherein generating the plurality of point cloud areas from the point cloud data comprises:
taking each coordinate point in the point cloud data in turn as a center point, searching for coordinate points within a spherical range of a specified radius, and forming a point cloud area from each center point and the coordinate points within its spherical range.
7. The method of claim 6, wherein the number of coordinate points contained in each point cloud area in the point cloud data is a specified value;
and generating the plurality of point cloud areas from the point cloud data further comprises:
comparing the number of coordinate points found within the spherical range with the specified value;
and when the number of coordinate points found within the spherical range is smaller than the specified value, supplementing the point cloud area with points whose coordinates coincide with those of the center point, so that the number of coordinate points contained in the point cloud area equals the specified value.
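Claims 6 and 7 together describe a fixed-size ball query. The sketch below uses scipy's cKDTree for the spherical-range search; the radius, the specified value k, and the truncation of over-full areas are illustrative assumptions on top of the claims.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_areas(points, radius, k):
    """For every coordinate point, gather the neighbours within `radius`
    (claim 6) and pad with copies of the center point up to the specified
    value k when fewer are found (claim 7)."""
    tree = cKDTree(points)
    areas = []
    for center in points:
        idx = tree.query_ball_point(center, r=radius)
        neigh = points[idx][:k]              # truncate over-full areas (assumption)
        if len(neigh) < k:                   # fewer than the specified value:
            pad = np.tile(center, (k - len(neigh), 1))  # duplicate the center point
            neigh = np.vstack([neigh, pad])
        areas.append(neigh)
    return np.stack(areas)                   # num_points x k x 3
```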
8. A device for estimating a point cloud normal vector, characterized in that the device comprises:
an acquisition module, configured to acquire point cloud data collected by a first device, and a color image that is collected by a second device and corresponds to the point cloud data;
an association module, configured to associate each coordinate point in a target point cloud area in the point cloud data with the color image and determine image characteristic information corresponding to each coordinate point in the target point cloud area; the target point cloud area is any one point cloud area in the point cloud data;
and a normal vector determination module, configured to fuse the coordinate information of each coordinate point in the target point cloud area with the corresponding image characteristic information and determine the normal vector of the target point cloud area according to the fusion result.
9. A computer device, characterized in that the computer device comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method for estimating a point cloud normal vector according to any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method for estimating a point cloud normal vector according to any one of claims 1 to 7.
CN202111365196.0A 2021-11-17 2021-11-17 Point cloud normal vector estimation method and device, computer equipment and storage medium Pending CN114219855A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111365196.0A CN114219855A (en) 2021-11-17 2021-11-17 Point cloud normal vector estimation method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111365196.0A CN114219855A (en) 2021-11-17 2021-11-17 Point cloud normal vector estimation method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114219855A (en) 2022-03-22

Family

ID=80697507

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111365196.0A Pending CN114219855A (en) 2021-11-17 2021-11-17 Point cloud normal vector estimation method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114219855A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115661552A (en) * 2022-12-12 2023-01-31 高德软件有限公司 Point cloud processing method, point cloud anomaly detection method, medium and computing equipment
CN115690332A (en) * 2022-12-30 2023-02-03 华东交通大学 Point cloud data processing method and device, readable storage medium and electronic equipment
CN115830019A (en) * 2023-02-14 2023-03-21 南京慧然科技有限公司 Three-dimensional point cloud calibration processing method and device for steel rail detection
CN116721081A (en) * 2023-06-12 2023-09-08 南京林业大学 Motor car side wall plate defect extraction method based on three-dimensional point cloud and modal conversion
CN116721081B (en) * 2023-06-12 2024-01-26 南京林业大学 Motor car side wall plate defect extraction method based on three-dimensional point cloud and modal conversion

Similar Documents

Publication Publication Date Title
CN111126272B (en) Posture acquisition method, and training method and device of key point coordinate positioning model
CN110135455B (en) Image matching method, device and computer readable storage medium
CN112052839B (en) Image data processing method, apparatus, device and medium
WO2021175050A1 (en) Three-dimensional reconstruction method and three-dimensional reconstruction device
CN109960742B (en) Local information searching method and device
US8442307B1 (en) Appearance augmented 3-D point clouds for trajectory and camera localization
CN114219855A (en) Point cloud normal vector estimation method and device, computer equipment and storage medium
CN110363817B (en) Target pose estimation method, electronic device, and medium
CN112529015A (en) Three-dimensional point cloud processing method, device and equipment based on geometric unwrapping
CN111753698A (en) Multi-mode three-dimensional point cloud segmentation system and method
CN112990010B (en) Point cloud data processing method and device, computer equipment and storage medium
Du et al. Stereo vision-based object recognition and manipulation by regions with convolutional neural network
US20200057778A1 (en) Depth image pose search with a bootstrapped-created database
CN112489099A (en) Point cloud registration method and device, storage medium and electronic equipment
CN114758337B (en) Semantic instance reconstruction method, device, equipment and medium
CN114612902A (en) Image semantic segmentation method, device, equipment, storage medium and program product
CN114565916A (en) Target detection model training method, target detection method and electronic equipment
CN114842466A (en) Object detection method, computer program product and electronic device
CN112258565A (en) Image processing method and device
CN113673308A (en) Object identification method, device and electronic system
CN117132649A (en) Ship video positioning method and device for artificial intelligent Beidou satellite navigation fusion
CN113592015B (en) Method and device for positioning and training feature matching network
CN116486038A (en) Three-dimensional construction network training method, three-dimensional model generation method and device
CN113065521B (en) Object identification method, device, equipment and medium
CN117036658A (en) Image processing method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination