
Sparse point cloud-based plane expansion method and system and electronic equipment

Info

Publication number
CN114332448A
CN114332448A
Authority
CN
China
Prior art keywords
points
plane
spatial
point
frame image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011015591.1A
Other languages
Chinese (zh)
Inventor
兰国清
周俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sunny Optical Zhejiang Research Institute Co Ltd
Original Assignee
Sunny Optical Zhejiang Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sunny Optical Zhejiang Research Institute Co Ltd
Priority to CN202011015591.1A
Publication of CN114332448A

Abstract

A sparse point cloud-based plane expansion method, and a system and electronic equipment using the same, are provided. The sparse point cloud-based plane expansion method comprises the following steps: selecting a local region of interest on a current frame image according to the geometric center of the projection points, on the current frame image, of all existing spatial points belonging to the current plane; performing spatial point extraction on the current frame image according to the local region of interest to obtain spatial points to be confirmed; performing point deletion on all the existing spatial points of the current plane and the spatial points to be confirmed to obtain the spatial points to be updated that belong to the current plane; and recalculating the plane information of the current plane based on the spatial points to be updated to obtain a new, expanded plane.

Description

Sparse point cloud-based plane expansion method and system and electronic equipment
Technical Field
The invention relates to the technical field of plane expansion, and in particular to a sparse point cloud-based plane expansion method, and a system and electronic equipment thereof.
Background
In an AR scene, a three-dimensional space model is often required to realize the augmented reality function, so determining an effective spatial coordinate system in advance is an important prerequisite that influences the effect of the subsequent modeling algorithm. However, since the scene of a physical space is generally complex, arbitrarily choosing a coordinate system often increases the complexity of the subsequent modeling steps and degrades the user's visual experience, which requires that a region of specific shape be selected in the physical space to determine the spatial coordinate system. The plane is a common shape feature; it has the advantages of a simple structure and good adaptability, and is suitable as the reference of the spatial coordinate system.
Planes acquired by plane detection in different physical spaces require their information to be further updated, so that the initial coordinate system can be continuously updated and kept synchronized with the user's current view. The common plane expansion schemes are based on point cloud information, which imposes certain limitations on devices that cannot be fitted with a depth sensor or that have high requirements on SLAM real-time performance. For example, the basic implementation of the currently common plane expansion techniques is to acquire the point cloud information of the current physical space from an external depth sensor or a binocular camera, and to realize the construction and update of the plane coordinate system using algorithms such as region growing, RANSAC screening and Hough transform.
However, although a point cloud can provide sufficient spatial information, allows plane information to be extracted conveniently, and is also suitable for multi-plane detection, the technique still has drawbacks. For plane expansion with a depth sensor, the existing schemes need dense point cloud input to guarantee sufficient spatial information, and processing this information takes a large amount of time; on some lightweight AR devices an extremely high frame rate must be maintained to ensure visual synchronism, so the dense point cloud scheme is not feasible. For plane expansion with sparse point clouds under a binocular camera, the existing schemes take the spatial feature points extracted by SLAM as input and obtain the spatial information of the point cloud through triangulation. Although the use of spatially sparse feature points greatly accelerates the running time of the algorithm, the SLAM system keeps the number of extracted feature points within a certain bound in real time, and after screening these points generally fail to meet the minimum point count required for plane expansion, so the expansion stops and is difficult to continue.
Disclosure of Invention
One advantage of the present invention is to provide a sparse point cloud-based plane expansion method, a system and an electronic device thereof, which can solve the problem of real-time update of a plane coordinate system in different physical spaces.
Another advantage of the present invention is to provide a sparse point cloud-based plane expansion method, a system and an electronic device thereof, wherein in an embodiment of the present invention, the sparse point cloud-based plane expansion method can enable a plane coordinate system detected in an initial state to still quickly update information along with a change of a scene in an augmented reality scene.
Another advantage of the present invention is to provide a sparse point cloud-based plane expansion method, a system and an electronic device thereof, wherein in an embodiment of the present invention, the sparse point cloud-based plane expansion method can update the plane coordinate system while ensuring the running time, even for devices that are not equipped with a depth sensor.
Another advantage of the present invention is to provide a sparse point cloud-based plane expansion method, a system and an electronic device thereof, wherein in an embodiment of the present invention, the sparse point cloud-based plane expansion method does not require dense point cloud as input, thereby avoiding a complex calculation process and reducing time overhead.
Another advantage of the present invention is to provide a sparse point cloud-based plane expansion method, a system and an electronic device thereof, wherein in an embodiment of the present invention, the sparse point cloud-based plane expansion method can re-extract feature points in the current local image window and solve the corresponding spatial points using binocular image information, instead of solely using the feature points input by SLAM as judgment information, so as to eliminate plane expansion interruptions caused by the input feature points being too sparse.
Another advantage of the present invention is to provide a sparse point cloud-based planar expansion method, a system and an electronic device thereof, wherein in an embodiment of the present invention, the sparse point cloud-based planar expansion method can select a range with a certain window size from the center of a plane instead of selecting the entire image, so as to accelerate the feature point extraction speed and ensure the natural gradual change of planar expansion.
Another advantage of the present invention is to provide a sparse point cloud-based plane expansion method, a system and an electronic device thereof, wherein in an embodiment of the present invention, the sparse point cloud-based plane expansion method can use the 3D information generated by the distance from a spatial point to the plane and the 2D information generated by the homography matrix calculated from the points in the planes of the previous and current frames to jointly constrain the candidate points in geometric space, so as to obtain a more accurate interior-point judgment.
Another advantage of the present invention is to provide a sparse point cloud-based plane expansion method, a system and an electronic device thereof, wherein in an embodiment of the present invention, the sparse point cloud-based plane expansion method can further screen the spatial points that cannot be eliminated by geometric constraints using the color and distance characteristics of the sparse points, so as to reduce the misjudgment of plane points caused by factors such as triangulation errors.
Another advantage of the present invention is to provide a sparse point cloud-based plane expansion method, a system and an electronic device thereof, wherein in an embodiment of the present invention, the sparse point cloud-based plane expansion method is simple to implement and highly applicable, is significant for reducing algorithm time and improving algorithm precision, has great application value for plane expansion algorithms based on binocular vision or monocular inertial navigation SLAM, and is expected to be applied in fields such as augmented reality and automatic driving.
Another advantage of the present invention is to provide a sparse point cloud-based plane expansion method, a system and an electronic device thereof, wherein in an embodiment of the present invention, the sparse point cloud-based plane expansion method continuously extracts spatial feature points through a binocular camera instead of using dense point cloud obtained by a depth sensor as input, so as to eliminate the influence of excessive point cloud calculation on real-time performance.
Another advantage of the present invention is to provide a sparse point cloud-based plane expansion method, a system and an electronic device thereof, wherein to achieve the above advantages, the present invention does not need to adopt a complex structure and a huge amount of computation, and has low requirements on software and hardware. Therefore, the invention successfully and effectively provides a solution, not only provides a sparse point cloud-based plane expansion method and system and electronic equipment thereof, but also increases the practicability and reliability of the sparse point cloud-based plane expansion method and system and electronic equipment thereof.
To achieve at least one of the above advantages or other advantages and objects, the present invention provides a sparse point cloud based planar expansion method, including:
selecting a local region of interest on a current frame image according to the geometric center of the projections, on the current frame image, of all existing spatial points belonging to the current plane in the previous frame image;
performing spatial point extraction on the current frame image according to the local region of interest to obtain spatial points to be confirmed;
performing point deletion on all the existing spatial points of the current plane and the spatial points to be confirmed, so as to obtain the spatial points to be updated that belong to the current plane; and
recalculating the plane information of the current plane based on the spatial points to be updated, so as to obtain a new expanded plane.
According to an embodiment of the present application, the local region of interest is a rectangular region centered on the geometric center and having a size smaller than that of the current frame image.
According to an embodiment of the present application, the step of performing spatial point extraction on the current frame image according to the local region of interest to obtain the spatial points to be confirmed includes the steps of:
performing corner detection on the local region of interest of the current frame image, so as to extract sparse feature points from the local region of interest;
solving, by a left-right camera optical flow tracking method, the feature points on the right-eye image that match the feature points on the left-eye image, so as to obtain matched left-right feature point pairs; and
performing triangulation on the left-right feature point pairs according to the normalized coordinates and relative poses of the left and right cameras, so as to solve the spatial coordinates of the landmark points as initial spatial points.
According to an embodiment of the present application, the step of performing spatial point extraction on the current frame image according to the local region of interest to obtain the spatial points to be confirmed further includes the step of:
performing re-projection error verification on the initial spatial points to delete the spatial points whose errors exceed a threshold, and taking the remaining initial spatial points as the spatial points to be confirmed.
According to an embodiment of the present application, the step of performing point deletion on all the existing spatial points of the current plane and the spatial points to be confirmed to obtain the spatial points to be updated belonging to the current plane includes the steps of:
deleting, according to the spatial distance between each spatial point to be confirmed and the current plane, the spatial points to be confirmed whose spatial distance is greater than a first distance threshold, so as to obtain the 3D-deleted spatial points;
projecting the points that are tracked on the current frame image but do not belong to the current plane on the previous frame image onto the current frame image through a homography matrix, so as to solve the planar distance between each projection point and its corresponding observation point, and deleting the projection points whose planar distance is greater than a second distance threshold, so as to obtain the 2D-deleted spatial points; and
clustering all the existing spatial points of the current plane, the 3D-deleted spatial points and the 2D-deleted spatial points according to their gray difference and distance from the central point of the current plane, so as to eliminate the points that are not in the same class as the central point and obtain the spatial points to be updated that belong to the current plane.
According to an embodiment of the application, the homography matrix is calculated from points on the current frame image that belong to the current plane on the previous frame image.
According to another aspect of the present application, the present application further provides a sparse point cloud-based planar extension system, including:
a local ROI extraction module, configured to extract a local region of interest on the current frame image according to the geometric center of the projections, on the current frame image, of all existing spatial points belonging to the current plane in the previous frame image;
a spatial point extraction module, configured to perform spatial point extraction on the current frame image according to the local region of interest to obtain spatial points to be confirmed;
a spatial point deleting module, configured to perform point deletion on all the existing spatial points of the current plane and the spatial points to be confirmed, so as to obtain the spatial points to be updated belonging to the current plane; and
a plane expansion module, configured to recalculate the plane information of the current plane based on the spatial points to be updated, so as to obtain a new expanded plane.
According to an embodiment of the present application, the local region of interest is a rectangular region centered on the geometric center and having a size smaller than that of the current frame image.
According to an embodiment of the present application, the spatial point extraction module includes a feature point extraction module, a feature point matching module and a triangulation processing module, which are communicably connected to each other, wherein the feature point extraction module is configured to perform corner detection on the local region of interest of the current frame image, so as to extract sparse feature points from the local region of interest; the feature point matching module is configured to solve, by a left-right camera optical flow tracking method, the feature points on the right-eye image that match the feature points on the left-eye image, so as to obtain matched left-right feature point pairs; and the triangulation processing module is configured to perform triangulation on the left-right feature point pairs according to the normalized coordinates and relative poses of the left and right cameras, so as to solve the spatial coordinates of the landmark points as initial spatial points.
According to an embodiment of the present application, the spatial point extraction module further includes a re-projection error verification module, configured to perform re-projection error verification on the initial spatial points, so as to delete spatial points whose errors exceed a threshold, and use remaining initial spatial points as the spatial points to be confirmed.
According to an embodiment of the present application, the spatial point deleting module includes a 3D deleting module, a 2D deleting module and a cluster deleting module, which are communicably connected to each other, wherein the 3D deleting module is configured to delete, according to the spatial distance between each spatial point to be confirmed and the current plane, the spatial points to be confirmed whose spatial distance is greater than a first distance threshold, so as to obtain the 3D-deleted spatial points; the 2D deleting module is configured to project the points that are tracked on the current frame image but do not belong to the current plane on the previous frame image onto the current frame image through a homography matrix, so as to solve the planar distance between each projection point and its corresponding observation point, and to delete the projection points whose planar distance is greater than a second distance threshold, so as to obtain the 2D-deleted spatial points; and the cluster deleting module is configured to cluster all the existing spatial points of the current plane, the 3D-deleted spatial points and the 2D-deleted spatial points according to their gray difference and distance from the central point of the current plane, so as to eliminate the points that are not in the same class as the central point and obtain the spatial points to be updated that belong to the current plane.
According to another aspect of the present application, there is further provided an electronic device, comprising:
at least one processor configured to execute instructions; and
a memory communicatively coupled to the at least one processor, wherein the memory stores at least one instruction, wherein the instruction is executable by the at least one processor to cause the at least one processor to perform some or all of the steps of a sparse point cloud-based plane expansion method, wherein the sparse point cloud-based plane expansion method comprises the steps of:
selecting a local region of interest on a current frame image according to the geometric center of the projections, on the current frame image, of all existing spatial points belonging to the current plane in the previous frame image;
performing spatial point extraction on the current frame image according to the local region of interest to obtain spatial points to be confirmed;
performing point deletion on all the existing spatial points of the current plane and the spatial points to be confirmed, so as to obtain the spatial points to be updated that belong to the current plane; and
recalculating the plane information of the current plane based on the spatial points to be updated, so as to obtain a new expanded plane.
Further objects and advantages of the invention will be fully apparent from the ensuing description and drawings.
These and other objects, features and advantages of the present invention will become more fully apparent from the following detailed description, the accompanying drawings and the claims.
Drawings
Fig. 1 is a schematic flow chart of a sparse point cloud-based planar expansion method according to an embodiment of the present invention.
Fig. 2 shows a flow chart of one of the steps of the sparse point cloud based planar expansion method according to the above embodiment of the present invention.
Fig. 3 is a schematic flow chart illustrating a second step of the sparse point cloud-based planar expansion method according to the above embodiment of the present invention.
Fig. 4 shows a schematic principle diagram of solving spatial points in the sparse point cloud-based planar expansion method according to the above embodiment of the present invention.
Fig. 5 is a schematic application diagram of the sparse point cloud-based planar expansion method according to the above embodiment of the present invention.
FIG. 6 is a block diagram of the sparse point cloud based planar expansion system according to an embodiment of the present invention.
FIG. 7 shows a block diagram schematic of an electronic device according to an embodiment of the invention.
Detailed Description
The following description is presented to disclose the invention so as to enable any person skilled in the art to practice the invention. The preferred embodiments in the following description are given by way of example only, and other obvious variations will occur to those skilled in the art. The basic principles of the invention, as defined in the following description, may be applied to other embodiments, variations, modifications, equivalents, and other technical solutions without departing from the spirit and scope of the invention.
In the present invention, the terms "a" and "an" in the claims and the description should be understood as meaning "one or more"; that is, an element may be one in number in one embodiment and more than one in number in another embodiment. The terms "a" and "an" should not be construed as limiting the number to one unless the number of such elements is explicitly recited as one in the present disclosure.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
At present, the existing plane expansion technical schemes are basically based on point cloud information, which imposes certain limitations on devices that cannot be fitted with a depth sensor or that have high requirements on SLAM real-time performance. Therefore, in order to solve the problem of real-time update of the plane coordinate system (namely, plane expansion) in different physical spaces, the present application provides a sparse point cloud-based plane expansion method, a system thereof and electronic equipment, so that the plane coordinate system detected in the initial state can still be rapidly updated as the scene changes in an augmented reality scene. Especially for devices without a depth sensor, the update of the plane coordinate system can be realized while ensuring the running time. The method re-extracts the feature points in the current local window, obtains their spatial information through triangulation, further eliminates the points that do not satisfy the plane requirement by means of the point-to-plane distance, the homography matrix, sparse feature clustering and the like, and determines the final in-plane points, so that the information of the current plane coordinate system is recalculated and plane expansion is realized in real time.
Illustrative Method
Referring to fig. 1 to 3 of the drawings, a sparse point cloud based planar expansion method according to an embodiment of the present invention is illustrated. Specifically, as shown in fig. 1, the sparse point cloud-based planar expansion method may include the steps of:
S100: selecting a local region of interest on the current frame image according to the geometric center of the projection points, on the current frame image, of all existing spatial points belonging to the current plane;
S200: performing spatial point extraction on the current frame image according to the local region of interest to obtain spatial points to be confirmed;
S300: performing point deletion on all the existing spatial points of the current plane and the spatial points to be confirmed, so as to obtain the spatial points to be updated belonging to the current plane; and
S400: recalculating the plane information of the current plane based on the spatial points to be updated to obtain a new expanded plane.
It is worth noting that the sparse point cloud-based plane expansion method selects a local region of interest (local ROI) on the current frame image, so as to limit the image range of the subsequent feature point extraction to ensure the uniformity of the newly added feature points, and therefore, the sparse point cloud-based plane expansion method accelerates the feature point extraction speed and ensures the natural gradual change of plane expansion. In addition, compared with the existing plane expansion scheme based on the depth sensor, the plane expansion method based on the sparse point cloud does not need the dense point cloud as input, so that a complex calculation process is avoided, and time overhead is reduced.
Preferably, in order to further reduce the time overhead of the sparse point cloud-based plane expansion method and improve its real-time performance, the previous frame image and the current frame image in the sparse point cloud-based plane expansion method are both key frame images.
More specifically, in the step S100 of the sparse point cloud based planar expansion method, the local region of interest is preferably implemented as a rectangular region centered on the geometric center and smaller in size than the current frame image. For example, the local region of interest may be centered on the geometric center, and have a width of 0.3 to 0.5 times the total number of columns of the current frame image and a height of 0.3 to 0.5 times the total number of rows of the current frame image.
Furthermore, the computational model of the geometric center is as follows:

$$\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad \bar{y} = \frac{1}{N}\sum_{i=1}^{N} y_i$$

where $\bar{x}$ and $\bar{y}$ are respectively the abscissa and the ordinate of the geometric center on the current frame image; $x_i$ and $y_i$ are respectively the abscissa and the ordinate of the $i$-th projection point on the current frame image; and $N$ is the total number of the projection points.
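For illustration only, a minimal Python/NumPy sketch of this step is given below; the function name and the 0.4 default window scale are assumptions of this example (the text only specifies a 0.3 to 0.5 range), not part of the patent.

```python
import numpy as np

def select_local_roi(proj_points, image_shape, scale=0.4):
    """Select a rectangular local ROI centered on the geometric center of the
    projections of the current plane's existing spatial points.

    proj_points: (N, 2) array of (x_i, y_i) projections on the current frame.
    image_shape: (rows, cols) of the current frame image.
    scale: ROI size as a fraction of the image size (0.3-0.5 per the text).
    """
    rows, cols = image_shape
    cx, cy = proj_points.mean(axis=0)             # geometric center (x-bar, y-bar)
    half_w, half_h = scale * cols / 2, scale * rows / 2
    x0 = int(max(cx - half_w, 0))                 # clamp the ROI to image bounds
    y0 = int(max(cy - half_h, 0))
    x1 = int(min(cx + half_w, cols))
    y1 = int(min(cy + half_h, rows))
    return x0, y0, x1, y1
```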
According to the above embodiment of the present application, taking left-right eye images as input, and in order to solve the problem of plane expansion being interrupted by too few feature points when SLAM tracking is unstable, step S200 of the sparse point cloud-based plane expansion method of the present application first performs feature extraction and optical flow tracking on the input left-right eye images to obtain the feature point pairs corresponding to the left and right eyes, and then computes the 3D point information in the world coordinate system from the input SLAM pose information, completing the extraction of the spatial points. Of course, in other examples of the present invention, the input image may also be implemented as, but is not limited to, a TOF image, from which the required sparse spatial points can be obtained directly through feature point extraction.
Specifically, as shown in fig. 2, the step S200 of the sparse point cloud-based planar expansion method of the present application may include the steps of:
S210: performing corner detection on the local region of interest of the current frame image to extract sparse feature points from the local region of interest;
S220: solving, by a left-right camera optical flow tracking method, the feature points on the right-eye image that match the feature points on the left-eye image, so as to obtain matched left-right feature point pairs; and
S230: performing triangulation on the left-right feature point pairs according to the normalized coordinates and relative poses of the left and right cameras, so as to solve the spatial coordinates of the landmark points as initial spatial points.
More specifically, in step S210 of the sparse point cloud-based plane expansion method, the Shi-Tomasi operator is selected to perform corner detection on the image within the local region of interest, so that the sparsity of the feature points is controlled by keeping the maximum number of feature points within a neighborhood in a certain range, while a certain distance is maintained between adjacent feature points.
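A plausible realization with OpenCV's Shi-Tomasi detector (cv2.goodFeaturesToTrack) is sketched below; the quality level, corner budget and minimum distance are assumed values chosen for illustration, not figures from the patent.

```python
import cv2
import numpy as np

def detect_sparse_corners(gray, roi, max_corners=100, min_distance=10):
    """Shi-Tomasi corner detection restricted to the local region of interest."""
    x0, y0, x1, y1 = roi
    mask = np.zeros(gray.shape, dtype=np.uint8)
    mask[y0:y1, x0:x1] = 255                 # search only inside the local ROI
    corners = cv2.goodFeaturesToTrack(
        gray,
        maxCorners=max_corners,              # keeps the number of new points bounded
        qualityLevel=0.01,
        minDistance=min_distance,            # keeps adjacent feature points apart
        mask=mask,
    )
    # corners has shape (K, 1, 2) in float32, or is None if nothing was found
    return np.empty((0, 1, 2), np.float32) if corners is None else corners
```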
In step S220 of the sparse point cloud-based plane expansion method, the coordinates, on the right-eye image, of the feature points of the left-eye image are obtained by pyramid optical flow tracking, and at the same time the tracking point pairs whose distance changes excessively are deleted, so as to ensure that relatively accurate feature point pairs are obtained.
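Continuing the example, pyramidal Lucas-Kanade tracking (cv2.calcOpticalFlowPyrLK) is one way to implement this step; the pyramid depth, window size and rejection threshold below are assumptions of this sketch.

```python
import cv2
import numpy as np

def track_left_to_right(gray_left, gray_right, pts_left, max_shift=100.0):
    """Track left-eye corners into the right-eye image with pyramidal LK flow,
    then drop pairs whose displacement changed implausibly much."""
    pts_right, status, _err = cv2.calcOpticalFlowPyrLK(
        gray_left, gray_right, pts_left, None,
        winSize=(21, 21), maxLevel=3,        # 3-level pyramid, 21x21 window
    )
    good = status.ravel() == 1               # successfully tracked points
    shift = np.linalg.norm((pts_right - pts_left).reshape(-1, 2), axis=1)
    good &= shift < max_shift                # reject excessive distance changes
    return pts_left[good], pts_right[good]
```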
The step S230 of the sparse point cloud-based plane expansion method is mainly used for calculating the spatial point coordinates required for plane expansion; in a specific implementation, the spatial coordinates of the landmark points are solved from the normalized coordinates and the relative poses of the left and right cameras. For example, as shown in Fig. 4, the normalized coordinates of the current left and right cameras are substituted into the reprojection equation of the cameras

$$d_r X_r = d_l R_{r-l} X_l + T_{r-l}$$

where $d_r$ and $d_l$ respectively represent the depths under the right and left cameras; $X_r$ and $X_l$ respectively represent the coordinate values of the spatial point in the right and left camera coordinate systems; $R_{r-l}$ represents the rotation matrix between the left and right cameras; and $T_{r-l}$ represents the translation vector between the left and right cameras.
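Under the stated reprojection equation, the two depths can be recovered by stacking it as a 3×2 linear system and solving by least squares. The sketch below is one assumed implementation of this triangulation, not the patent's own code; the conversion to world coordinates via the SLAM pose is left as a comment.

```python
import numpy as np

def triangulate_pair(x_l, x_r, R_rl, T_rl):
    """Solve d_r * X_r = d_l * R_rl @ X_l + T_rl for the two depths, and
    return the landmark in the left camera frame.

    x_l, x_r: normalized homogeneous coordinates [u, v, 1] in each camera.
    R_rl (3x3), T_rl (3,): relative pose from the left to the right camera.
    """
    # Rearranged: d_l * (R_rl @ x_l) - d_r * x_r = -T_rl, a 3x2 system in (d_l, d_r).
    A = np.stack([R_rl @ x_l, -x_r], axis=1)
    (d_l, d_r), *_ = np.linalg.lstsq(A, -T_rl, rcond=None)
    # Apply the SLAM key-frame pose to d_l * x_l to reach world coordinates.
    return d_l * x_l
```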
It is to be noted that, in one example of the present application, the sparse point cloud-based plane expansion method may directly take the initial spatial points obtained in step S230 as the spatial points to be confirmed for the subsequent deletion and calculation. In another example of the present application, as shown in fig. 2, in order to delete some erroneous points and improve the plane expansion precision, step S200 of the sparse point cloud-based plane expansion method of the present application may further include the step of:
S240: performing re-projection error verification on the initial spatial points to delete the spatial points whose errors exceed a threshold, and taking the remaining initial spatial points as the spatial points to be confirmed.
Exemplarily, in step S240 of the sparse point cloud-based plane expansion method of the present application, the spatial points in the left camera coordinate system are projected into the right camera coordinate system and compared with the observation coordinates tracked by the right camera, and the points whose errors exceed a threshold are deleted, thereby completing the rejection of erroneous points by re-projection error verification.
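A minimal sketch of this verification follows, assuming normalized coordinates and an illustrative error threshold; the function name and threshold are not from the patent.

```python
import numpy as np

def reprojection_filter(pts_left_cam, obs_right, R_rl, T_rl, max_err=0.01):
    """Keep points whose projection into the right camera matches the tracked
    observation on the normalized plane.

    pts_left_cam: (N, 3) triangulated points in the left camera frame.
    obs_right: (N, 2) tracked normalized coordinates in the right camera.
    max_err: error threshold on the normalized plane (assumed value).
    """
    pts_right_cam = pts_left_cam @ R_rl.T + T_rl         # left -> right frame
    proj = pts_right_cam[:, :2] / pts_right_cam[:, 2:3]  # perspective division
    err = np.linalg.norm(proj - obs_right, axis=1)
    keep = (pts_right_cam[:, 2] > 0) & (err < max_err)   # also require positive depth
    return pts_left_cam[keep]
```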
According to the above embodiment of the present application, as shown in fig. 3, the step S300 of the sparse point cloud-based planar expansion method of the present application may include the steps of:
S310: deleting, according to the spatial distance between each spatial point to be confirmed and the current plane, the spatial points to be confirmed whose spatial distance is greater than a first distance threshold, so as to obtain the 3D-deleted spatial points;
S320: projecting the points that are tracked on the current frame image but do not belong to the current plane on the previous frame image onto the current frame image through a homography matrix, so as to solve the planar distance between each projection point and its corresponding observation point, and deleting the projection points whose planar distance is greater than a second distance threshold, so as to obtain the 2D-deleted spatial points; and
S330: clustering all the existing spatial points of the current plane, the 3D-deleted spatial points and the 2D-deleted spatial points according to their gray difference and distance from the central point of the current plane, so as to eliminate the points that are not in the same class as the central point and obtain the spatial points to be updated that belong to the current plane.
Preferably, the homography matrix is calculated from the points, tracked on the current frame image, that belong to the current plane on the previous frame image. It can be understood that the homography here means that, if points lying in a plane of the physical space are observed at different times, their observations satisfy a 3 × 3 matrix constraint relationship.
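OpenCV's homography estimation can realize the 2D deletion of step S320; a sketch with an assumed pixel threshold follows (the 3D deletion of step S310 is simply |n·p + d| compared against the first distance threshold, and is omitted here). Function names and the threshold are illustrative.

```python
import cv2
import numpy as np

def reject_by_homography(plane_prev, plane_cur, cand_prev, cand_cur, max_px=3.0):
    """2D deletion: fit H from in-plane correspondences, project the candidate
    points into the current frame, and keep only those whose projection stays
    close to the tracked observation.

    plane_prev, plane_cur: (M, 2) float32 in-plane pixel positions (M >= 4).
    cand_prev, cand_cur: (K, 2) float32 candidate positions in the two frames.
    max_px: the second distance threshold, in pixels (assumed value).
    """
    H, _ = cv2.findHomography(plane_prev, plane_cur, cv2.RANSAC)
    proj = cv2.perspectiveTransform(cand_prev.reshape(-1, 1, 2), H).reshape(-1, 2)
    dist = np.linalg.norm(proj - cand_cur, axis=1)   # planar distance in 2D
    return dist < max_px     # True where the point behaves like an in-plane point
```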
It is to be noted that, in step S300 of the sparse point cloud-based plane expansion method of the present application, the 3D information is mainly used to screen, from the triangulated landmark points, the points belonging to the current plane, and the 2D information is used to further screen all points belonging to the current plane, including the deletion based on the homography matrix and the deletion based on feature clustering. For example, in a specific implementation, first, for the points that are tracked in the current frame image but do not belong to the current plane on the previous frame image, a homography matrix is calculated from the points, tracked in the current frame image, that do belong to the current plane on the previous frame image; these points are then projected onto the current frame image with the homography matrix. If the distance between a projected point and its observed point in the current frame image exceeds a certain threshold, the point is considered not to belong to the current plane and is removed; otherwise, it is considered to belong to the current plane and is retained. Second, after the 3D deletion and the homography-based deletion are completed, all points belonging to the current plane are clustered according to their gray difference and distance to the plane central point, the plane points that are not of the same class as the central point are removed, and the remaining points are taken as the spatial points to be updated. It can be understood that the sparse point cloud-based plane expansion method of the present application uses the 3D information generated by the distance from a spatial point to the plane and the 2D information generated by the homography matrix calculated from the points in the planes of the previous and current frames to jointly constrain the candidate points in geometric space, so that a more accurate interior-point judgment can be obtained. In addition, the spatial points that cannot be eliminated by the geometric constraints are further screened using the color and distance characteristics of the sparse points, which reduces the misjudgment of plane points caused by factors such as triangulation errors.
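The patent does not name a specific clustering algorithm; as one assumption, a two-cluster k-means over (gray difference, distance-to-center) features could realize this screening. Everything in this sketch, including the choice of scikit-learn's KMeans, is illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans   # assumed choice; the patent names no algorithm

def cluster_filter(points_3d, grays, center_3d, center_gray):
    """Keep only the points that cluster together with the plane's central point,
    using (gray difference, distance to center) as the feature vector."""
    feats = np.column_stack([
        np.abs(grays - center_gray),                    # gray-level difference
        np.linalg.norm(points_3d - center_3d, axis=1),  # distance to the center
    ])
    # Append the center itself (all-zero features) so its cluster label is known.
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(
        np.vstack([feats, [0.0, 0.0]]))
    return points_3d[labels[:-1] == labels[-1]]         # same class as the center
```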
In the above embodiment of the present application, step S400 of the sparse point cloud-based plane expansion method is mainly used to update the parameters of the current plane in the current frame image according to the spatial points to be updated. In general, the plane equation can be expressed in the form ax + by + cz + d = 0, and the parameters of the updated plane (including the plane normal vector, the central point, the singular value ratio, the covariance, and, after SVD decomposition, the sum of the singular values, the minimum, median and maximum singular values, and the included angle between the normal vector and the horizontal normal vector in the inertial coordinate system) can be obtained by solving the plane equation, thereby realizing the plane expansion.
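A standard least-squares plane fit via SVD is one assumed way to recompute these parameters under the same convention ax + by + cz + d = 0; the function and its return values are a sketch consistent with the parameters listed above, not the patent's own code.

```python
import numpy as np

def refit_plane(points):
    """Recompute the plane parameters from the spatial points to be updated."""
    center = points.mean(axis=0)
    _, s, vt = np.linalg.svd(points - center)  # s: singular values, descending
    normal = vt[-1]                            # direction of least variance
    d = -normal @ center                       # plane: n . x + d = 0
    return {
        "normal": normal,
        "center": center,
        "d": d,
        "singular_values": s,                      # max, median, min for 3D input
        "singular_value_ratio": s[-1] / s.sum(),   # small when points are planar
    }
```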
It is worth mentioning that the plane expansion involved in the present application is a sub-module in a SLAM framework, mainly responsible for updating the initial plane coordinate system as the physical space changes. For example, the relationship between the plane expansion sub-module and the SLAM framework of the present application is shown in fig. 5: the framework mainly involves two main modules, SLAM and plane detection, and the plane expansion scheme computes the current sparse point cloud from the initial plane information given by the first plane detection and the key frame pose obtained by SLAM at each moment during plane detection, so as to realize the update of the plane coordinate system.
It is noted that, although the above embodiments of the present application take a binocular visual-inertial SLAM framework as an example (i.e., the sensors are mainly binocular cameras, possibly combined with non-visual sensors such as an IMU to accomplish positioning and tracking) to illustrate the advantages and features of the sparse point cloud-based plane expansion method of the present application, the application scope of the method is not limited thereto; it can also be used with a monocular inertial navigation SLAM framework and the like.
Illustrative System
Referring to FIG. 6 of the drawings, a sparse point cloud-based plane expansion system according to an embodiment of the present invention is illustrated. Specifically, as shown in fig. 6, the sparse point cloud-based plane expansion system 1 may include: a local ROI extraction module 10, configured to extract a local region of interest on the current frame image according to the geometric center of the projections, on the current frame image, of all existing spatial points belonging to the current plane in the previous frame image; a spatial point extraction module 20, configured to perform spatial point extraction on the current frame image according to the local region of interest to obtain spatial points to be confirmed; a spatial point deleting module 30, configured to perform point deletion on all the existing spatial points of the current plane and the spatial points to be confirmed, so as to obtain the spatial points to be updated belonging to the current plane; and a plane expansion module 40, configured to recalculate the plane information of the current plane based on the spatial points to be updated, so as to obtain a new expanded plane.
It should be noted that the local region of interest is a rectangular region centered on the geometric center and having a size smaller than that of the current frame image.
In an example of the present application, as shown in fig. 6, the spatial point extraction module 20 includes a feature point extraction module 21, a feature point matching module 22 and a triangulation processing module 23, which are communicably connected to each other, wherein the feature point extraction module 21 is configured to perform corner detection on the local region of interest of the current frame image to extract sparse feature points from the local region of interest; the feature point matching module 22 is configured to solve, by a left-right camera optical flow tracking method, the feature points on the right-eye image that match the feature points on the left-eye image, so as to obtain matched left-right feature point pairs; and the triangulation processing module 23 is configured to perform triangulation on the left-right feature point pairs according to the normalized coordinates and relative poses of the left and right cameras, so as to solve the spatial coordinates of the landmark points as initial spatial points.
In an example of the present application, as shown in fig. 6, the spatial point extraction module 20 further includes a re-projection error verification module 24, configured to perform re-projection error verification on the initial spatial points, so as to delete spatial points with errors exceeding a threshold, and use the remaining initial spatial points as the spatial points to be confirmed.
In an example of the present application, as shown in fig. 6, the spatial point deleting module 30 includes a 3D deleting module 31, a 2D deleting module 32 and a cluster deleting module 33, which are communicably connected to each other, wherein the 3D deleting module 31 is configured to delete, according to the spatial distance between each spatial point to be confirmed and the current plane, the spatial points to be confirmed whose spatial distance is greater than a first distance threshold, so as to obtain the 3D-deleted spatial points; the 2D deleting module 32 is configured to project the points that are tracked on the current frame image but do not belong to the current plane on the previous frame image onto the current frame image through a homography matrix, so as to solve the planar distance between each projection point and its corresponding observation point, and to delete the projection points whose planar distance is greater than a second distance threshold, so as to obtain the 2D-deleted spatial points; and the cluster deleting module 33 is configured to cluster all the existing spatial points of the current plane, the 3D-deleted spatial points and the 2D-deleted spatial points according to their gray difference and distance from the central point of the current plane, so as to eliminate the points that are not in the same class as the central point and obtain the spatial points to be updated that belong to the current plane.
Illustrative Electronic Device
Next, an electronic apparatus according to an embodiment of the present invention is described with reference to fig. 7. As shown in fig. 7, the electronic device 90 includes one or more processors 91 and memory 92.
The processor 91 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 90 to perform desired functions. In other words, the processor 91 comprises one or more physical devices configured to execute instructions. For example, the processor 91 may be configured to execute instructions that are part of: one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, implement a technical effect, or otherwise arrive at a desired result.
The processor 91 may include one or more processors configured to execute software instructions. Additionally or alternatively, the processor 91 may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. The processors of the processor 91 may be single core or multicore, and the instructions executed thereon may be configured for serial, parallel, and/or distributed processing. The various components of the processor 91 may optionally be distributed over two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the processor 91 may be virtualized and executed by remotely accessible networked computing devices configured in a cloud computing configuration.
The memory 92 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 91 to implement some or all of the steps of the above-described exemplary methods of the present invention, and/or other desired functions.
In other words, the memory 92 comprises one or more physical devices configured to hold machine-readable instructions executable by the processor 91 to implement the methods and processes described herein. In implementing these methods and processes, the state of the memory 92 may be transformed (e.g., to hold different data). The memory 92 may include removable and/or built-in devices. The memory 92 may include optical memory (e.g., CD, DVD, HD-DVD, blu-ray disc, etc.), semiconductor memory (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. The memory 92 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
It is understood that the memory 92 comprises one or more physical devices. However, aspects of the instructions described herein may alternatively be propagated by a communication medium (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for a limited period of time. Aspects of the processor 91 and the memory 92 may be integrated together into one or more hardware logic components. These hardware logic components may include, for example, Field Programmable Gate Arrays (FPGAs), program and application specific integrated circuits (PASIC/ASIC), program and application specific standard products (PSSP/ASSP), system on a chip (SOC), and Complex Programmable Logic Devices (CPLDs).
In one example, as shown in FIG. 7, the electronic device 90 may also include an input device 93 and an output device 94, which may be interconnected via a bus system and/or other form of connection mechanism (not shown). For example, the input device 93 may be, for example, a camera module for capturing image data or video data, or the like. As another example, the input device 93 may include or interface with one or more user input devices such as a keyboard, mouse, touch screen, or game controller. In some embodiments, the input device 93 may include or interface with a selected Natural User Input (NUI) component. Such component parts may be integrated or peripheral and the transduction and/or processing of input actions may be processed on-board or off-board. Example NUI components may include a microphone for speech and/or voice recognition; infrared, color, stereo display and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer and/or gyroscope for motion detection and/or intent recognition; and an electric field sensing component for assessing brain activity and/or body movement; and/or any other suitable sensor.
The output device 94 may output various information including the classification result, etc. to the outside. The output devices 94 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, among others.
Of course, the electronic device 90 may further comprise the communication means, wherein the communication means may be configured to communicatively couple the electronic device 90 with one or more other computer devices. The communication means may comprise wired and/or wireless communication devices compatible with one or more different communication protocols. As a non-limiting example, the communication subsystem may be configured for communication via a wireless telephone network or a wired or wireless local or wide area network. In some embodiments, the communications device may allow the electronic device 90 to send and/or receive messages to and/or from other devices via a network such as the internet.
It will be appreciated that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Also, the order of the above-described processes may be changed.
Of course, for simplicity, only some of the components of the electronic device 90 relevant to the present invention are shown in fig. 7, omitting components such as buses, input/output interfaces, and the like. In addition, the electronic device 90 may include any other suitable components, depending on the particular application.
It should also be noted that in the apparatus, devices and methods of the present invention, the components or steps may be broken down and/or re-combined. These decompositions and/or recombinations are to be regarded as equivalents of the present invention.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
It will be appreciated by persons skilled in the art that the embodiments of the invention described above and shown in the drawings are given by way of example only and are not limiting of the invention. The objects of the invention have been fully and effectively accomplished. The functional and structural principles of the present invention have been shown and described in the examples, and any variations or modifications of the embodiments of the present invention may be made without departing from the principles.

Claims (12)

1. A sparse point cloud-based plane expansion method is characterized by comprising the following steps:
selecting a local region of interest on a current frame image according to the geometric center of the projections, on the current frame image, of all existing spatial points belonging to the current plane in the previous frame image;
performing spatial point extraction on the current frame image according to the local region of interest to obtain spatial points to be confirmed;
performing point deletion on all the existing spatial points of the current plane and the spatial points to be confirmed, so as to obtain the spatial points to be updated that belong to the current plane; and
recalculating the plane information of the current plane based on the spatial points to be updated, so as to obtain a new expanded plane.
2. The sparse point cloud-based plane expansion method of claim 1, wherein the local region of interest is a rectangular region centered on the geometric center and smaller in size than the current frame image.
3. The sparse point cloud-based plane expansion method of claim 2, wherein the step of performing spatial point extraction on the current frame image according to the local region of interest to obtain the spatial points to be confirmed comprises the steps of:
performing corner detection on the local region of interest of the current frame image, so as to extract sparse feature points from the local region of interest;
solving, by a left-right camera optical flow tracking method, the feature points on the right-eye image that match the feature points on the left-eye image, so as to obtain matched left-right feature point pairs; and
performing triangulation on the left-right feature point pairs according to the normalized coordinates and relative poses of the left and right cameras, so as to solve the spatial coordinates of the landmark points as initial spatial points.
4. The sparse point cloud-based plane expansion method of claim 3, wherein the step of performing spatial point extraction on the current frame image according to the local region of interest to obtain the spatial points to be confirmed further comprises the step of:
performing re-projection error verification on the initial spatial points to delete the spatial points whose errors exceed a threshold, and taking the remaining initial spatial points as the spatial points to be confirmed.
5. The sparse point cloud-based plane expansion method of any one of claims 1 to 4, wherein the step of performing point deletion on all the existing spatial points of the current plane and the spatial points to be confirmed to obtain the spatial points to be updated belonging to the current plane comprises the steps of:
deleting, according to the spatial distance between each spatial point to be confirmed and the current plane, the spatial points to be confirmed whose spatial distance is greater than a first distance threshold, so as to obtain the 3D-deleted spatial points;
projecting the points that are tracked on the current frame image but do not belong to the current plane on the previous frame image onto the current frame image through a homography matrix, so as to solve the planar distance between each projection point and its corresponding observation point, and deleting the projection points whose planar distance is greater than a second distance threshold, so as to obtain the 2D-deleted spatial points; and
clustering all the existing spatial points of the current plane, the 3D-deleted spatial points and the 2D-deleted spatial points according to their gray difference and distance from the central point of the current plane, so as to eliminate the points that are not in the same class as the central point and obtain the spatial points to be updated that belong to the current plane.
6. The sparse point cloud-based plane expansion method of claim 5, wherein the homography matrix is calculated from the points, tracked on the current frame image, that belong to the current plane on the previous frame image.
7. A sparse point cloud-based plane expansion system, characterized by comprising, communicably connected to each other:
a local ROI extraction module, configured to extract a local region of interest on the current frame image according to the geometric center of the projections, on the current frame image, of all existing spatial points belonging to the current plane in the previous frame image;
a spatial point extraction module, configured to perform spatial point extraction on the current frame image according to the local region of interest to obtain spatial points to be confirmed;
a spatial point deleting module, configured to perform point deletion on all the existing spatial points of the current plane and the spatial points to be confirmed, so as to obtain the spatial points to be updated belonging to the current plane; and
a plane expansion module, configured to recalculate the plane information of the current plane based on the spatial points to be updated, so as to obtain a new expanded plane.
8. The sparse point cloud-based plane expansion system according to claim 7, wherein the local region of interest is a rectangular region centered on the geometric center, the size of the rectangular region being smaller than that of the current frame image.
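One plausible reading of this claim as code, clamping an assumed fixed-size rectangle to the image bounds; the 200-pixel defaults are invented placeholders.

    import numpy as np

    def local_roi(proj_pts, img_w, img_h, roi_w=200, roi_h=200):
        # Rectangular region centered on the geometric center of the
        # projected in-plane points, kept inside the current frame image.
        cx, cy = proj_pts.mean(axis=0)
        x = int(np.clip(cx - roi_w / 2, 0, img_w - roi_w))
        y = int(np.clip(cy - roi_h / 2, 0, img_h - roi_h))
        return x, y, roi_w, roi_h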
9. The sparse point cloud-based plane expansion system according to claim 8, wherein the spatial point extraction module comprises a feature point extraction module, a feature point matching module, and a triangulation processing module communicably connected to each other, wherein the feature point extraction module is configured to perform corner detection on the local region of interest of the current frame image to extract sparse feature points from the local region of interest; the feature point matching module is configured to match the feature points on the left-eye image with feature points on the right-eye image by an optical flow tracking method between the left and right cameras to obtain matched left-right feature point pairs; and the triangulation processing module is configured to triangulate the matched left-right feature point pairs according to their normalized coordinates and the relative pose of the left and right cameras to solve the spatial coordinates of the landmark points as initial spatial points.
10. The sparse point cloud-based plane expansion system according to claim 9, wherein the spatial point extraction module further comprises a reprojection error verification module configured to perform reprojection error verification on the initial spatial points, delete the spatial points whose error exceeds a threshold, and take the remaining initial spatial points as the spatial points to be confirmed.
11. The sparse point cloud-based plane expansion system according to any one of claims 7 to 10, wherein the spatial point deletion module comprises a 3D deletion module, a 2D deletion module, and a cluster deletion module communicably connected to each other, wherein the 3D deletion module is configured to delete, according to the spatial distance between each spatial point to be confirmed and the current plane, the spatial points to be confirmed whose spatial distance is greater than a first distance threshold, so as to obtain the 3D-deleted spatial points; the 2D deletion module is configured to project, through a homography matrix, onto the current frame image the points that are tracked on the current frame image but did not belong to the current plane in the previous frame image, so as to compute the planar distance between each projection point and its corresponding observation point, and to delete the projection points whose planar distance is greater than a second distance threshold, so as to obtain the 2D-deleted spatial points; and the cluster deletion module is configured to perform clustering according to the grayscale difference and the distance between the center point of the current plane and each of the existing spatial points, the 3D-deleted spatial points, and the 2D-deleted spatial points, so as to eliminate the points not in the same cluster as the center point and obtain the spatial points to be updated belonging to the current plane.
12. An electronic device, comprising:
at least one processor configured to execute instructions; and
a memory communicably connected to the at least one processor, wherein the memory stores at least one instruction executable by the at least one processor to cause the at least one processor to perform some or all of the steps of a sparse point cloud-based plane expansion method, wherein the sparse point cloud-based plane expansion method comprises the steps of:
selecting a local region of interest on a current frame image according to the geometric center of the points projected onto the current frame image from all the existing spatial points belonging to a current plane in the previous frame image;
performing spatial point extraction on the current frame image according to the local region of interest to obtain spatial points to be confirmed;
performing point deletion on all the existing spatial points of the current plane and the spatial points to be confirmed to obtain the spatial points to be updated belonging to the current plane; and
recalculating plane information of the current plane based on the spatial points to be updated to obtain a new expanded plane.
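The claims leave the plane recalculation method open; a least-squares refit over the spatial points to be updated, for example via SVD as in this sketch, is one assumed realization.

    import numpy as np

    def refit_plane(pts):
        # Least-squares plane fit: the singular vector with the smallest
        # singular value of the centered points is the plane normal.
        center = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - center)
        normal = vt[-1]
        d = -normal @ center  # plane equation: normal . x + d = 0
        return normal, d, center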
CN202011015591.1A, filed 2020-09-24, priority 2020-09-24: Sparse point cloud-based plane expansion method and system and electronic equipment (Pending)

Priority Applications (1)

Application Number: CN202011015591.1A
Priority Date: 2020-09-24
Filing Date: 2020-09-24
Title: Sparse point cloud-based plane expansion method and system and electronic equipment

Publications (1)

Publication Number: CN114332448A
Publication Date: 2022-04-12

Family ID: 81011650
Family Applications (1): CN202011015591.1A (same as the priority application above)

Country Status (1): CN

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination