CN116704151A - Three-dimensional reconstruction method and device, and vehicle, equipment and medium based on three-dimensional reconstruction method and device - Google Patents


Info

Publication number
CN116704151A
Authority
CN
China
Prior art keywords
triangle
triangles
dimensional
insertion point
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210183566.7A
Other languages
Chinese (zh)
Inventor
朱祖文
杨冬生
刘柯
王欢
梁开洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BYD Co Ltd
Original Assignee
BYD Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BYD Co Ltd filed Critical BYD Co Ltd
Priority to CN202210183566.7A priority Critical patent/CN116704151A/en
Publication of CN116704151A publication Critical patent/CN116704151A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20: Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T 17/205: Re-meshing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/005: Tree description, e.g. octree, quadtree

Abstract

The application discloses a three-dimensional reconstruction method and apparatus, and a vehicle, device and medium based thereon. After three-dimensional point cloud data of an object to be reconstructed is acquired, insertion points are determined on the principle of minimizing the sum of the angle variances corresponding to the triangles, giving a target triangle mesh with richer shape information; three-dimensional rendering is then performed according to the target triangle mesh to obtain a three-dimensional reconstruction model of the object to be reconstructed. This effectively improves the similarity between the three-dimensional model and the object to be reconstructed, and improves the visual effect of the three-dimensional model.

Description

Three-dimensional reconstruction method and device, and vehicle, equipment and medium based on three-dimensional reconstruction method and device
Technical Field
The present disclosure relates to the field of data processing, in particular to the field of three-dimensional reconstruction, and more particularly to a three-dimensional reconstruction method and apparatus, and vehicles, devices and media based thereon.
Background
In the related art, a three-dimensional reconstruction model built from a three-dimensional point cloud fits the actual object poorly: the image of the reconstructed object differs markedly from the actual object, which seriously degrades the visual quality of the reconstruction result. The three-dimensional reconstruction approach therefore needs to be improved.
Disclosure of Invention
In view of the foregoing drawbacks or shortcomings in the prior art, it is desirable to provide a three-dimensional reconstruction method and apparatus, and a vehicle, device and medium based thereon, that effectively improve the visual effect after three-dimensional reconstruction.
In a first aspect, an embodiment of the present application provides a three-dimensional reconstruction method, including:
acquiring three-dimensional point cloud data of an object to be reconstructed;
triangulating the two-dimensional points corresponding to the three-dimensional point cloud data to obtain a first triangular grid;
determining an insertion point in at least one first triangle in the first triangle mesh, and constructing three second triangles with the three vertices of the first triangle respectively using the insertion point; wherein the position of the insertion point is determined based on the principle of minimizing the sum of the angular variances corresponding to the three second triangles;
respectively obtaining the convex quadrilaterals formed by each of the three second triangles and its adjacent first triangle; for each such convex quadrilateral, calculating the sum of the angle variances of the two triangles obtained by splitting the quadrilateral along each of its two diagonals, and taking the pair of triangles with the smaller sum of angle variances as new first triangles;
adding the new first triangles into the first triangle mesh, and deleting the first triangles in the first triangle mesh that overlap the new first triangles, to obtain a target triangle mesh;
and performing three-dimensional rendering based on the target triangle mesh to obtain a three-dimensional reconstruction model of the object to be reconstructed.
In some embodiments, the performing three-dimensional rendering based on the target triangle mesh to obtain a three-dimensional reconstruction model of the object to be reconstructed includes:
if the quantization error between the target triangle mesh and a target polygon is smaller than a preset error, performing three-dimensional rendering based on the target triangle mesh to obtain the three-dimensional reconstruction model of the object to be reconstructed, wherein the target polygon is a contour figure of the object to be reconstructed determined from an image of the object to be reconstructed.
In some embodiments, if the quantization error between the target triangle mesh and the target polygon is greater than or equal to the preset error, the target triangle mesh is taken as a new first triangle mesh, and execution returns to the step of determining an insertion point in at least one first triangle in the first triangle mesh and its subsequent steps, until the quantization error between the resulting target triangle mesh and the target polygon is smaller than the preset error.
In some embodiments, the means for determining the location of the insertion point comprises:
traversing each candidate insertion point in the first triangle, and respectively connecting the candidate insertion points with three vertexes of the first triangle to form three second triangles;
acquiring an inner angle of each second triangle, and determining an angle variance corresponding to each second triangle and a sum of angle variances corresponding to the three second triangles based on the inner angle of the second triangle;
and determining the position of the candidate insertion point with the minimum sum of the angular variances corresponding to the three second triangles as the position of the insertion point.
In some embodiments, the means for determining the location of the insertion point comprises:
sequentially calculating the sum of the angle variances corresponding to the three second triangles corresponding to each candidate insertion point in the first triangle;
and when the sum of the angle variances is identified to be smaller than a preset threshold value, taking the corresponding candidate insertion point with the sum of the angle variances smaller than the threshold value as the insertion point.
In some embodiments, the first triangle corresponding to a historical insertion point and the three first triangles adjacent to it are obtained and marked as unavailable triangles;
the first triangle into which the current insertion point is inserted is selected from the first triangles in the first triangle mesh other than the unavailable triangles.
In some embodiments, when no such other first triangles remain in the first triangle mesh, the marks of the unavailable triangles in the first triangle mesh are cleared.
In a second aspect, an embodiment of the present application provides a three-dimensional reconstruction apparatus, including:
the acquisition module is used for acquiring three-dimensional point cloud data of the object to be reconstructed;
the triangulation module is used for triangulating the two-dimensional points corresponding to the three-dimensional point cloud data to obtain a first triangular grid;
a first determining module, configured to determine an insertion point in at least one first triangle in the first triangle mesh, and construct three second triangles with three vertices of the first triangle using the insertion point, respectively; wherein the position of the insertion point is determined based on the principle of minimizing the sum of the angular variances corresponding to the three second triangles;
the second determining module is used for respectively obtaining the convex quadrilaterals formed by each of the three second triangles and its adjacent first triangle, and, for each such convex quadrilateral, calculating the sum of the angle variances of the two triangles obtained by splitting the quadrilateral along each of its two diagonals, and taking the pair of triangles with the smaller sum of angle variances as new first triangles;
a third determining module, configured to add the new first triangles to the first triangle mesh, and delete the first triangles in the first triangle mesh that overlap the new first triangles, to obtain a target triangle mesh;
and the rendering module is used for performing three-dimensional rendering based on the target triangle mesh to obtain a three-dimensional reconstruction model of the object to be reconstructed.
In a third aspect, an embodiment of the present application provides a vehicle. The vehicle is provided with a plurality of groups of cameras disposed on the front and rear sides, the left and right sides, and the roof of the vehicle, or on the front and rear sides, the left and right sides, and the bottom of the vehicle, where each group of cameras includes two fisheye cameras. The vehicle further includes a three-dimensional reconstruction apparatus implementing the method described in the embodiments of the present application.
In a fourth aspect, embodiments of the present application provide an electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements a method as described in the embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which when executed by a processor implements a method as described in embodiments of the present application.
According to the three-dimensional reconstruction method provided by the embodiments of the application, after the three-dimensional point cloud data of the object to be reconstructed is obtained, insertion points are determined on the principle of minimizing the sum of the angle variances corresponding to the triangles, so that a target triangle mesh with richer shape information is obtained; a three-dimensional reconstruction model of the object to be reconstructed is then obtained from the target triangle mesh. This effectively improves the similarity between the three-dimensional model and the object to be reconstructed, and improves the visual effect of the three-dimensional model.
Additional aspects and advantages of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the accompanying drawings in which:
FIG. 1 is a flow chart of a three-dimensional reconstruction method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a plurality of cameras according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating selection of an insertion point according to an embodiment of the present application;
FIGS. 4-5 are schematic diagrams of a minimum variance criterion according to an embodiment of the present application;
FIG. 6 is a flow chart of a three-dimensional reconstruction method according to another embodiment of the present application;
FIG. 7 is a diagram illustrating the effect of quantization error according to an embodiment of the present application;
FIG. 8 is a block diagram of a three-dimensional reconstruction apparatus according to an embodiment of the present application;
FIG. 9 is a block schematic diagram of a vehicle according to an embodiment of the present application;
fig. 10 is a schematic diagram of a computer system suitable for use in implementing an electronic device or server of an embodiment of the application.
Detailed Description
The application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be noted that, for convenience of description, only the portions related to the application are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
Intelligent driving is accompanied by technological change, and road traffic safety has become a social concern. A mature panoramic surround-view system collects the surrounding environment of a vehicle through cameras around the vehicle body, performs distortion correction and perspective transformation on the image collected by each camera, stitches the images into a panoramic video containing the vehicle and its surroundings, and displays it on the central control screen, assisting the driver during driving, reversing and parking and reducing the possibility of accidents. Panoramic schemes come in two-dimensional and three-dimensional forms. A two-dimensional panoramic system generates a panoramic video from a bird's-eye view, providing visual driving assistance; it can also provide reliable blind-spot display assistance under working conditions such as parking in a parking lot or driving on narrow or crowded road sections. Three-dimensional panoramic reconstruction allows the driver to observe the driving environment from multiple angles, which improves the driving experience and offers better safety assurance; it is the trend for vision-enhanced safety systems.
However, conventional surround-view systems based on fisheye cameras suffer from camera distortion, distorted object shapes and poor display quality. In addition, a single static fisheye camera has no ranging capability and cannot judge the distance between the vehicle and surrounding objects, which is unfavorable for reminding the driver to drive safely. To solve the problem of object distortion in panoramic images, some researchers obtain the relative pose transformation of the cameras using optical flow, then obtain single-view depth images by plane sweeping, and finally correct the panoramic image with the depth images. This method is computationally complex, real-time performance is difficult to guarantee, and the depth images generated in complex environments have large errors. To obtain the relative positions of objects in panoramic images, other researchers have employed multi-sensor fusion, such as fusing a lidar with a fisheye camera, or an ultrasonic sensor with a fisheye camera. However, lidar is expensive and difficult to popularize in the short term, while an ultrasonic sensor cannot provide point cloud information of an object, making it difficult to correct object deformation in the panoramic image.
Based on the above, the application provides a three-dimensional reconstruction method and device, and a vehicle, equipment and medium based on the three-dimensional reconstruction method and device, so as to solve the problems.
Fig. 1 is a flowchart of a three-dimensional reconstruction method according to an embodiment of the present application.
The main execution body of the three-dimensional reconstruction method in this embodiment is a three-dimensional reconstruction device, and the three-dimensional reconstruction device in this embodiment may be implemented in software and/or hardware, and may be configured in an electronic device or in a server for controlling the electronic device, where the server communicates with the electronic device to control the electronic device.
The electronic device in this embodiment may include, but is not limited to, a personal computer, a tablet computer, a smart phone, a vehicle-mounted terminal, and the like; the embodiment does not particularly limit the electronic device.
As shown in fig. 1, the three-dimensional reconstruction method provided by the embodiment of the application comprises the following steps:
step 101, acquiring three-dimensional point cloud data of an object to be reconstructed.
It should be noted that, in the embodiment of the present application, the object to be reconstructed may be an object in the calibration images acquired by a plurality of fisheye cameras disposed on a vehicle. Specifically, the vehicle is provided with five groups of cameras, arranged on the front, rear, left and right sides of the vehicle and on the roof or the bottom of the vehicle, where each group of cameras includes two fisheye cameras.
For example, as shown in fig. 2, a vehicle may be provided with four sets of looking-around cameras and a set of top cameras or a set of bottom cameras. The top view is five groups of cameras consisting of four groups of looking around cameras and one group of top cameras, and the bottom view is five groups of cameras consisting of four groups of looking around cameras and one group of bottom cameras.
It should be appreciated that by providing five sets of cameras on the vehicle, the overall effect in five directions of the vehicle, i.e., the combined effect of vehicle body looking around and roof conditions, or the combined effect of vehicle body looking around and floor conditions, can be obtained through three-dimensional reconstruction. In addition, when the vehicle is provided with the vehicle roof camera and the vehicle bottom camera at the same time, the switching display of the combined effect can be carried out according to the selection of a user, and the visibility of the three-dimensional scene is effectively improved.
Further, the field of view of each fisheye camera is 190 degrees, and the distance between the two cameras in each group is the baseline length, which can be determined according to the required range of the scene to be reconstructed and the modeling precision. In this way, any object in the space can be observed by two or more cameras at the same time, further improving the accuracy of the image data used for three-dimensional reconstruction.
In one or more embodiments, each camera provided on the vehicle needs to be calibrated to determine the internal and external parameters of each camera before determining the three-dimensional point cloud data from the calibration image.
Specifically, the calibration process of a camera is the process of determining its projection matrix. Let the principal point of the pixel coordinate system (u, v) be (u_0, v_0), and let the physical size of each pixel along the x-axis and y-axis of the image coordinate system be (dx, dy); then a fixed conversion relationship exists between the pixel coordinates (u, v) and the physical image coordinates (x, y). Since the camera may image a scene in the world coordinate system from any angle and position, the relationship between the camera coordinate system and the world coordinate system is described by a rotation matrix R and a translation vector t. The transformation between the coordinates P(x_w, y_w, z_w) of a point P in the world coordinate system and its coordinates P(x_c, y_c, z_c) in the camera coordinate system is:
[x_c, y_c, z_c, 1]^T = [R t; 0^T 1]·[x_w, y_w, z_w, 1]^T = M_2·[x_w, y_w, z_w, 1]^T
where R is a 3×3 orthogonal rotation matrix, t is a translation vector, and M_2 is a 4×4 matrix. After normalization, the imaging position P(x, y) of any point P(x_c, y_c, z_c) on the image plane is the intersection of the line through the optical center and P with the image plane, i.e. x = f·x_c/z_c and y = f·y_c/z_c, where f is the focal length of the camera and z_c is the depth coordinate of the point P in the camera coordinate system. The camera coordinates and image coordinates of an object therefore satisfy a fixed mapping relationship. Combining the above relations yields the pixel coordinates (u, v) of a point P(x_w, y_w, z_w) on the camera imaging plane:
z_c·[u, v, 1]^T = M_1·M_2·[x_w, y_w, z_w, 1]^T
where M_1 is the intrinsic matrix of the camera; f/dx, f/dy, u_0 and v_0 depend only on the internal structure of the camera and are its intrinsic parameters. M_2 depends on the choice of world coordinate system and the placement of the camera, and contains the extrinsic parameters of the camera. Writing M = M_1·M_2, M is called the imaging transformation matrix, and:
z_c·[u, v, 1]^T = M·[x_w, y_w, z_w, 1]^T
It should be understood that calibrating the camera means determining the 3×4 = 12 components of the matrix M, which can be done by human-machine interaction and the least-squares method. Specifically, a standard test object is selected, a world coordinate system is chosen, and a number of test points are marked on the object. The three-dimensional coordinates P(x_w, y_w, z_w) of each test point are measured, the pixel coordinates (u, v) of each test point in the image are determined interactively, and these values are substituted into the equation above. The 12 components of the matrix M are then solved by the least-squares method, completing the calibration of the camera.
Further, in the application, for any group of cameras, the detected object can be subjected to stereo matching through the calibration images acquired by the two cameras.
Specifically, suppose the images acquired by the two cameras in any group are a left image and a right image. Given a pixel point p_1(u_1, v_1) in the left image, its dual pixel point p_2(u_2, v_2) is found in the right image, forming a stereoscopic pixel point pair, which is ultimately mapped back to the original spatial point; this is the core of binocular vision theory. If pixel points p_1 and p_2 form a stereoscopic point pair corresponding to the same spatial point P(x_w, y_w, z_w), the brightness and color of the two pixels are approximately equal, and if p_1(u_1, v_1) is a feature point of the left image, p_2(u_2, v_2) corresponds to a feature point of the same class in the right image. Stereo matching typically uses the epipolar constraint between the stereoscopic point pair p_1(u_1, v_1) and p_2(u_2, v_2). The geometric interpretation of the epipolar constraint is: the pixel point p_1 of the spatial point P(x_w, y_w, z_w) in the left image and the pixel point p_2 in the right image necessarily lie on the intersection lines of the respective camera image planes with the plane PO_1O_2; these two intersection lines are called epipolar lines. Let the imaging transformation matrices of the left and right cameras be M_1 and M_2 respectively, written in block form as M_1 = [M_11 m_1] and M_2 = [M_21 m_2].
For a spatial point P(x_w, y_w, z_w), the image coordinates in the left and right images satisfy:
z_c1·U_1 = M_1·P_w = [M_11 m_1]·P_w and z_c2·U_2 = M_2·P_w = [M_21 m_2]·P_w
where U = (u, v, 1)^T. Eliminating P_w from the two formulas yields the epipolar equation.
According to the epipolar constraint, once p_1(u_1, v_1) is known, the search for the dual point p_2(u_2, v_2), which would otherwise cover the whole right image, can be restricted to the corresponding epipolar line; combined with the physical constraints above, stereo matching can be completed more accurately.
Further, as can be seen from the imaging transformation formula, the pixel coordinates of the point P(x_w, y_w, z_w) on a camera imaging plane relate x_w, y_w, z_w and z_c. For two cameras, two such relations in x_w, y_w, z_w and z_c are obtained; eliminating z_c gives linear equations whose intersection is the spatial point P(x_w, y_w, z_w). Writing out the imaging formulas of the left and right cameras:
z_c1·U_1 = M_1·P_w and z_c2·U_2 = M_2·P_w
the spatial three-dimensional coordinates P(x_w, y_w, z_w) can then be solved.
It should be understood that camera calibration obtains the internal and external parameters of the cameras, which is the basis for three-dimensional reconstruction, while stereo matching links the two images obtained by a binocular camera pair and analyzes the similarity of elements in the two images. The three-dimensional coordinates of an actual scene point are then determined as the intersection of the two rays from the two optical centers through the matching points. Using the internal and external parameters of each camera obtained by calibration, stereo matching is performed on the calibrated images captured by each pair of opposed fisheye cameras to obtain a dense pixel depth image, whose values represent the distance from each pixel in the image to the optical center of the camera; the regions obtained in this way are then stitched into a single three-dimensional space.
According to the above scheme, the constraints of multiple cameras can be fully utilized to constrain the spatial positions of objects and obtain a dense three-dimensional structure of the scene. Three-dimensional point cloud information at different angles of the scene is acquired from the dense three-dimensional point cloud data of those angles; the scene point cloud data is smoothed with a bilateral filter to remove noise points, distortion points and isolated outliers, and the three-dimensional point cloud information from the different angles of the scene is matched, registered and fused to obtain omnidirectional point cloud information of the scene.
In order to improve the efficiency of subsequent processing, a point cloud simplification algorithm based on an octree that preserves edge points can further be used to simplify the scene point cloud.
Specifically, the number of subdivision levels is determined from the original three-dimensional point cloud data, the point cloud is divided level by level according to that number, and the code value of each octree child node is calculated. The child nodes obtained by the octree division are stored in ascending order of code value, point cloud data with the same code value are stored in a single linked list, and only the data point closest to the center of the cube corresponding to each linked list is retained, yielding the simplified three-dimensional point cloud data.
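A minimal sketch of this octree-style simplification, assuming a fixed subdivision depth, axis-aligned cubes and points normalized to the unit cube (function and parameter names are illustrative, not from the patent):

```python
# Simplify a point cloud by bucketing points into octree cells at a fixed
# depth and keeping, per cell, only the point nearest the cell center
# (illustrative sketch; names and the depth parameter are assumptions).

def simplify(points, depth=2):
    """points: list of (x, y, z) in [0, 1)^3. Returns the reduced list."""
    n = 1 << depth  # cells per axis at this subdivision depth
    cells = {}
    for p in points:
        # Cell index identifies the octree node; a Morton interleaving of
        # (ix, iy, iz) would give the code value described in the text.
        ix, iy, iz = (min(int(c * n), n - 1) for c in p)
        code = (ix, iy, iz)
        center = tuple((i + 0.5) / n for i in code)
        d2 = sum((a - b) ** 2 for a, b in zip(p, center))
        best = cells.get(code)
        if best is None or d2 < best[0]:
            cells[code] = (d2, p)
    # Emit one representative point per occupied cell, in code order.
    return [p for _, p in (cells[c] for c in sorted(cells))]

pts = [(0.10, 0.10, 0.10), (0.20, 0.20, 0.20),  # same cell at depth 2
       (0.90, 0.90, 0.90)]                       # different cell
reduced = simplify(pts, depth=2)
```

The sketch omits the edge-point preservation mentioned above; a fuller version would exempt points on detected boundaries from the per-cell reduction.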
It should be noted that, the three-dimensional point cloud data used in the embodiment of the present application is three-dimensional point cloud data after being reduced.
And 102, triangulating two-dimensional points corresponding to the three-dimensional point cloud data to obtain a first triangular grid.
Wherein the first triangle mesh comprises a plurality of first triangles.
In the embodiment of the application, the three-dimensional point cloud data is projected onto a two-dimensional plane to obtain the two-dimensional points corresponding to the point cloud data, and the two-dimensional points are triangulated using the empty circumcircle rule (the Delaunay rule). A triangulation T of a point set V is a Delaunay triangulation if it contains only Delaunay edges. Delaunay edge: an edge e in the edge set E (with endpoints a and b) is called a Delaunay edge if there exists a circle passing through points a and b that contains no other point of the point set V in its interior; this is also called the empty circle property.
That is, the present application determines the Delaunay edges for the points in the three-dimensional point cloud data based on the empty circle property, and the mesh data composed of these edges is the first triangle mesh data. The first mesh data is recorded through a convex hull linked list, which records the position and connection relationship of each of the two-dimensional points according to a preset rule; the triangle information composed of the two-dimensional points can be determined based on these positions and connection relations.
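The empty circle property can be checked with the standard in-circumcircle determinant; the following is an illustrative sketch (helper names are not from the patent):

```python
# Empty-circle (Delaunay) test: for a counterclockwise triangle (a, b, c),
# point d lies strictly inside the circumcircle iff the determinant below
# is positive (standard incircle predicate, shown here as an illustration).

def in_circumcircle(a, b, c, d):
    m = []
    for px, py in (a, b, c):
        dx, dy = px - d[0], py - d[1]
        m.append((dx, dy, dx * dx + dy * dy))
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
           - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
           + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return det > 0

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))  # counterclockwise vertices
inside = in_circumcircle(*tri, (0.5, 0.5))  # the circumcenter itself
outside = in_circumcircle(*tri, (2.0, 2.0))
```

An edge is Delaunay exactly when no other point of V passes this test for the circle in question, which is how the triangulation above can be validated.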
Step 103, determining an insertion point in at least one first triangle in the first triangle mesh, and constructing three second triangles by using the insertion point and three vertexes of the first triangle respectively; wherein the position of the insertion point is determined based on a principle of minimizing the sum of the angular variances corresponding to the three second triangles.
The insertion point may be determined from candidate insertion points in the first triangle, where the candidate insertion points may be two-dimensional points inside the first triangle that are not yet used by the first triangle mesh, or points randomly selected inside the first triangle.
Specifically, the squared differences between the three interior angles of a second triangle and a reference angle can be computed and averaged, and the average taken as the angle variance of that second triangle. Optionally, the reference angle is 60°.
In one or more embodiments, the manner in which the location of the insertion point is determined includes: traversing each candidate insertion point in the first triangle, and connecting the candidate insertion points with three vertexes of the first triangle respectively to form three second triangles; acquiring the inner angle of each second triangle, and determining the angle variance corresponding to each second triangle and the sum of the angle variances corresponding to the three second triangles based on the inner angle of the second triangle; the position of the candidate insertion point that minimizes the sum of the angular variances corresponding to the three second triangles is determined as the position of the insertion point.
Specifically, all candidate insertion points in any first triangle are first obtained; then, for each candidate insertion point, the three corresponding second triangles are constructed and the angle variance of each second triangle is calculated, so as to obtain the sum of the angle variances of the three second triangles corresponding to each candidate insertion point; the sums are then compared, and the candidate insertion point corresponding to the minimum sum of angle variances is determined as the insertion point.
For example, as shown in fig. 3, triangle ABC is the selected first triangle. When determining the insertion point P in triangle ABC, the interior angles of the second triangle PAC, the second triangle PBC and the second triangle PAB are obtained respectively, that is, the three interior angles ∠PAC, ∠PCA and ∠APC of the second triangle PAC, the three interior angles ∠PBC, ∠PCB and ∠BPC of the second triangle PBC, and the three interior angles ∠PAB, ∠PBA and ∠APB of the second triangle PAB. The angle variance corresponding to each of the second triangle PAC, the second triangle PBC and the second triangle PAB is then calculated using the angle variance formula, and the sum of these three angle variances is taken as the sum of the angle variances corresponding to the point P. The candidate insertion points in the first triangle ABC are traversed in this way, and the point P with the minimum sum of angle variances is inserted into the first triangle ABC as the insertion point.
Taking the first triangle ABC as an example, the algorithm of the triangle angle variance is as follows:
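The patent presents the angle-variance algorithm as a formula image, which is not reproduced in this text. A sketch consistent with the surrounding description (the mean of the squared deviations of the three interior angles from the 60° reference angle) might look like the following Python; the function names and tuple representation are assumptions, not the patent's notation.

```python
import math

def interior_angles(a, b, c):
    """Return the three interior angles (degrees) of triangle abc,
    where a, b, c are (x, y) vertex tuples."""
    def angle_at(p, q, r):
        # angle at vertex p between edges p->q and p->r
        v1 = (q[0] - p[0], q[1] - p[1])
        v2 = (r[0] - p[0], r[1] - p[1])
        cosv = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
        return math.degrees(math.acos(max(-1.0, min(1.0, cosv))))
    return [angle_at(a, b, c), angle_at(b, a, c), angle_at(c, a, b)]

def angle_variance(a, b, c, reference=60.0):
    """Mean squared deviation of the interior angles from the reference
    angle; an equilateral triangle has variance 0."""
    return sum((t - reference) ** 2 for t in interior_angles(a, b, c)) / 3.0
```

Under this definition a right isosceles triangle (90°, 45°, 45°) has variance (30² + 15² + 15²)/3 = 450, while any equilateral triangle scores 0, which is why minimizing the variance pushes triangles toward the equilateral shape.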
It should be understood that, in the embodiment of the present application, determining the insertion point based on the principle of minimizing the sum of the angle variances corresponding to the three second triangles makes the three second triangles formed after the insertion approach equilateral triangles, so that the surface of the reconstructed object can be smoother in the subsequent three-dimensional rendering.
In one or more embodiments, in order to reduce the computation required to traverse the candidate insertion points, a threshold for the sum of angle variances may be preset. As the sum of angle variances corresponding to each candidate insertion point is calculated in turn, it is compared with the preset threshold; when a sum of angle variances is smaller than the preset threshold, the corresponding candidate insertion point is directly used as the insertion point, without calculating the sum of angle variances for the remaining candidate insertion points.
Specifically, the candidate insertion points in the first triangle are obtained and sorted in a preset order, for example by coordinate. The sum of the angle variances of the three second triangles corresponding to each candidate insertion point is then calculated in turn according to the sorted order, and each sum is compared with the preset threshold as soon as it is calculated. If the sum of angle variances is smaller than the preset threshold, the current candidate insertion point is determined as the insertion point and the subsequent operations are performed. If the sum is greater than or equal to the preset threshold, the next candidate insertion point is taken from the sequence and its sum of angle variances is calculated, and so on, until a candidate insertion point whose sum of angle variances is smaller than the preset threshold is found, or the sequence is exhausted without any sum falling below the preset threshold, in which case the candidate insertion point with the smallest sum of angle variances is determined as the insertion point.
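The early-exit traversal described above can be sketched as follows. This Python sketch is illustrative only: the helper names, the 60° reference angle taken from the earlier description, and the tuple-based geometry are assumptions, not the patent's implementation.

```python
import math

def _interior_angles(a, b, c):
    def at(p, q, r):  # interior angle at vertex p, in degrees
        v1 = (q[0] - p[0], q[1] - p[1])
        v2 = (r[0] - p[0], r[1] - p[1])
        cosv = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
        return math.degrees(math.acos(max(-1.0, min(1.0, cosv))))
    return at(a, b, c), at(b, a, c), at(c, a, b)

def _angle_variance(tri):
    return sum((t - 60.0) ** 2 for t in _interior_angles(*tri)) / 3.0

def variance_sum(p, tri):
    """Sum of the angle variances of the three second triangles formed
    by connecting candidate point p with the vertices of triangle tri."""
    a, b, c = tri
    return (_angle_variance((p, a, b)) + _angle_variance((p, b, c))
            + _angle_variance((p, a, c)))

def pick_insertion_point(candidates, tri, threshold=None):
    """Traverse candidates in order; return early as soon as a candidate's
    variance sum falls below the threshold, otherwise return the minimum."""
    best, best_sum = None, float("inf")
    for p in candidates:
        s = variance_sum(p, tri)
        if threshold is not None and s < threshold:
            return p, s  # early exit: remaining candidates are skipped
        if s < best_sum:
            best, best_sum = p, s
    return best, best_sum
```

For an equilateral first triangle, the centroid splits it into three 120°-30°-30° triangles, so its variance sum is 3 × (60² + 30² + 30²)/3 = 5400; any threshold above that value triggers the early exit at the centroid.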
Step 104, obtaining the convex quadrilaterals formed by the three second triangles and their respective adjacent first triangles; for any one of the convex quadrilaterals, calculating the sums of the angle variances of the triangles obtained by splitting the convex quadrilateral along its different diagonals, and taking the two triangles with the smaller sum of angle variances as new first triangles.
It should be noted that the process of determining the triangles in a convex quadrilateral may be as follows: for the two adjacent triangles forming a convex quadrilateral in the two-dimensional plane, the sum of their angle variances S1 is calculated; after the diagonal of the convex quadrilateral is swapped, the sum of the angle variances S2 of the two new triangles is calculated. If S1 > S2, the diagonal needs to be swapped; otherwise, it does not.
Based on this, after the insertion point is determined, the three convex quadrilaterals formed by the three second triangles corresponding to the insertion point and their three adjacent first triangles are obtained. At this time, each second triangle and its adjacent first triangle are the two triangles corresponding to a convex quadrilateral, and the angle variances of these two triangles are calculated and summed. The diagonal of the convex quadrilateral is then changed from the shared side of the second triangle to the line connecting the insertion point with the opposite vertex of the adjacent first triangle; the two triangles meeting at the insertion point are now the two triangles corresponding to the convex quadrilateral, and their angle variances are likewise calculated and summed. The two sums of angle variances are compared, and the two triangles corresponding to the diagonal with the smaller sum are taken as new first triangles.
Therefore, by fine adjustment of the convex quadrilaterals formed with the second triangles based on the sum of angle variances, the triangles are optimized after the insertion point is determined, the angle variances of the triangles formed around the insertion point are further minimized, and the surface of the reconstructed object can be smoother in the subsequent three-dimensional rendering.
For example, as shown in fig. 4, after determining the insertion point P in the first triangle ABC, the three first triangles adjacent to the sides AC, AB and BC in the plane, that is, the first triangle AEC, the first triangle AFB and the first triangle BCD, are determined, thereby forming the convex quadrilateral AECP, the convex quadrilateral AFBP and the convex quadrilateral PCDB.
Then, for the convex quadrilateral AECP, the sum of variances S11 of triangle ACE and triangle APC when AC is the diagonal, and the sum of variances S12 of triangle AEP and triangle EPC when EP is the diagonal, are calculated respectively; if S11 > S12, EP is the target diagonal for splitting the convex quadrilateral AECP, and if S11 < S12, AC is the target diagonal. Similarly, for the convex quadrilateral AFBP, the sum of variances S21 of triangle AFB and triangle APB when AB is the diagonal, and the sum of variances S22 of triangle AFP and triangle FBP when FP is the diagonal, are calculated respectively; if S21 > S22, FP is the target diagonal for splitting the convex quadrilateral AFBP, and if S21 < S22, AB is the target diagonal. Likewise, for the convex quadrilateral PCDB, the sum of variances S31 of triangle BCD and triangle PBC when BC is the diagonal, and the sum of variances S32 of triangle PCD and triangle PBD when PD is the diagonal, are calculated respectively; if S31 > S32, PD is the target diagonal for splitting the convex quadrilateral PCDB, and if S31 < S32, BC is the target diagonal.
As calculated and shown in fig. 5, AC, AB and PD are the target diagonals of the convex quadrilateral AECP, the convex quadrilateral AFBP and the convex quadrilateral PCDB, respectively. Then, among the two triangles obtained by splitting each convex quadrilateral along its target diagonal, the at least one triangle taking the insertion point P as a vertex is obtained: in the convex quadrilateral AECP, the triangle split off by the target diagonal AC that takes P as a vertex is triangle ACP; in the convex quadrilateral AFBP, the triangle split off by the target diagonal AB that takes P as a vertex is triangle ABP; and in the convex quadrilateral PCDB, the triangles split off by the target diagonal PD that take P as a vertex are triangle PCD and triangle PBD. Therefore, triangle ACP, triangle ABP, triangle PCD and triangle PBD are taken as updated first triangles.
Step 105, adding the new first triangles to the first triangle mesh, and deleting the first triangles in the first triangle mesh whose positions coincide with the new first triangles, to obtain the target triangle mesh.
That is, in the course of data processing by the computer, the triangle edge data gradually increases as the calculation proceeds; when the sums of triangle angle variances in the convex quadrilaterals are calculated, some triangle data generated during the calculation duplicates the triangle data used to update the triangle mesh. At this time, in order to reduce the amount of cached data, the replaced first triangle data needs to be deleted, so that the data amount is effectively reduced.
For example, taking fig. 5 as an example, after determining that triangle AEC, triangle AFB, triangle ACP, triangle ABP, triangle PCD and triangle PBD in the polygon AECDBF are the new first triangles, the area covered by the original polygon AECDBF is replaced with these triangles. In other words, the area covered by the original triangle ABC, triangle AEC, triangle AFB and triangle BCD will be covered by triangle AEC, triangle AFB, triangle ACP, triangle ABP, triangle PCD and triangle PBD.
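The bookkeeping of step 105 amounts to a set replacement, sketched here in illustrative Python with triangles named as in fig. 5 (the string labels are stand-ins for whatever triangle records the mesh actually stores):

```python
def update_mesh(mesh, replaced, new_first_triangles):
    """Step 105 as a set operation: delete the first triangles whose
    covered area is superseded, then add the new first triangles."""
    return (mesh - set(replaced)) | set(new_first_triangles)
```

In the fig. 5 example, triangle ABC and triangle BCD are superseded, while triangle AEC and triangle AFB survive because their quadrilaterals kept their original diagonals.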
Step 106, performing three-dimensional rendering based on the target triangle mesh to obtain a three-dimensional reconstruction model of the object to be reconstructed.
Therefore, according to the three-dimensional reconstruction method provided by the embodiment of the application, after the three-dimensional point cloud data of the object to be reconstructed is obtained, the insertion points are determined based on the principle of minimizing the sum of the angle variances corresponding to the triangles, so as to obtain a target triangle mesh with richer shape expression information; a three-dimensional reconstruction model corresponding to the object to be reconstructed is then obtained from the target triangle mesh, thereby effectively improving the similarity between the three-dimensional model and the object to be reconstructed and improving the visual effect of the three-dimensional model.
In one or more embodiments, performing three-dimensional rendering based on the target triangle mesh to obtain the three-dimensional reconstruction model of the object to be reconstructed includes:
if the quantization error between the target triangle mesh and the target polygon is smaller than a preset error, performing three-dimensional rendering based on the target triangle mesh to obtain the three-dimensional reconstruction model of the object to be reconstructed, wherein the target polygon is a contour figure of the object to be reconstructed determined according to an image of the object to be reconstructed; and if the quantization error between the target triangle mesh and the target polygon is greater than or equal to the preset error, taking the target triangle mesh as a new first triangle mesh, and returning to the step of determining an insertion point in at least one first triangle in the first triangle mesh and the subsequent steps, until the quantization error between the obtained target triangle mesh and the target polygon is smaller than the preset error.
The target polygon may be determined from the same set of images as those used to generate the three-dimensional point cloud data, for example images captured by a plurality of groups of fisheye cameras arranged on the vehicle; based on these images, both the three-dimensional point cloud data of the object to be reconstructed in the three-dimensional environment and its contour figure in the two-dimensional plane are obtained.
Wherein the quantization error may be an error value between an edge contour formed by the target triangle mesh and the target polygon.
Optionally, edge recognition can be performed on the calibration image corresponding to the object to be reconstructed to obtain the target polygon corresponding to the object to be reconstructed. For example, the calibration image may be input into a trained edge recognition model to obtain a target polygon corresponding to the object to be reconstructed using the edge recognition model, where the edge recognition model may be trained using road environment data such as traffic facilities, lane information, label images of buildings, etc., so that the trained edge recognition model is more sensitive to the object in the driving environment.
Specifically, as shown in fig. 6, after the target triangle mesh is obtained based on step 105, a step of judging whether the quantization error between the target triangle mesh and the target polygon is smaller than the preset error is performed. If yes, step 106 is performed, and three-dimensional rendering is carried out based on the target triangle mesh to obtain the three-dimensional reconstruction model of the object to be reconstructed. If no, the process returns to step 102: the target triangle mesh is taken as a new first triangle mesh, a new insertion point is determined, a new target triangle mesh is obtained based on the new insertion point, and the judgment step is performed again, until the quantization error between the obtained target triangle mesh and the target polygon is smaller than the preset error.
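The loop of fig. 6 can be sketched as follows. In this illustrative Python, `refine_step` stands for one pass of steps 103 to 105 and `quantization_error` for the error metric against the target polygon; both are placeholders, since the patent does not specify them in code form.

```python
def refine_until_converged(mesh, target_polygon, max_error,
                           refine_step, quantization_error):
    """Keep refining the mesh while its quantization error against the
    target polygon is >= max_error; the refined mesh of each pass
    becomes the new first triangle mesh of the next pass."""
    while quantization_error(mesh, target_polygon) >= max_error:
        mesh = refine_step(mesh)
    return mesh
```

With mock callbacks (a counter for the mesh and an error of 1/(rounds+1)), the loop stops after exactly as many passes as are needed to drop below the preset error.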
It should be understood that if the quantization error between the target triangle mesh and the target polygon is still greater than or equal to the preset error after all the candidate insertion points generated based on the three-dimensional point cloud data have been used as insertion points, points may be randomly selected from any first triangle in the first triangle mesh as new candidate insertion points.
Thus, as the quantization error between the target triangle mesh and the target polygon gradually decreases, the visual effect of the triangle mesh gradually improves, that is, the expression effect of the triangle mesh on the three-dimensional scene gradually increases.
For example, as shown in fig. 7, the first row shows the changes of the triangle mesh as points are gradually inserted into it, and the second row shows the quantization error effect of mapping the triangle mesh onto the target polygon, which in the embodiment of fig. 7 is a circle-like polygon. Thus, the quantization error is gradually reduced through the insertion of points into the initial triangle mesh, and a target triangle mesh with a quantization error of 0.4% is finally obtained; the similarity between the pattern formed by the target triangle mesh with a quantization error of 0.4% and the target polygon is far higher than that of the initial triangle mesh.
In one or more embodiments, the first triangle corresponding to a historical insertion point and the three first triangles adjacent to it are obtained and marked as unavailable triangles; the first triangle used for inserting the current insertion point is then selected from the first triangles in the first triangle mesh other than the unavailable triangles.
When no unmarked first triangle remains in the first triangle mesh, the marks of the unavailable triangles in the first triangle mesh are cleared.
That is, in the process of obtaining the target triangle mesh, a first triangle in which an insertion point has been determined, together with its adjacent first triangles, has already been optimized once based on the sum of angle variances through step 104. If insertion points determined in successive iterations were excessively concentrated, the triangle mesh would be unevenly distributed, that is, dense in some areas of the target triangle mesh and sparse in others, which would affect the overall effect of triangle rendering. The application therefore marks the triangles already optimized by the sum of angle variances as unavailable, so as to avoid repeatedly determining insertion points in the same round of insertion, ensure that the triangles in the target triangle mesh are distributed as evenly as possible, and improve the smoothness of triangle rendering.
Specifically, after the first triangle mesh is obtained, each time an insertion point is determined, the first triangle of the insertion point and its adjacent first triangles are all marked as unavailable triangles. The historically marked unavailable triangles are then obtained, and the first triangle used for determining the current insertion point is selected from the first triangles in the first triangle mesh other than the unavailable triangles. When all the first triangles in the first triangle mesh have been marked as unavailable, every first triangle in the current mesh has undergone one round of optimization based on the sum of angle variances; if insertion points still need to be determined at this time, the unavailable marks in the current first triangle mesh are cleared, and the first triangles in the first triangle mesh undergo a second round of optimization based on the sum of angle variances, and so on, until a target triangle mesh whose quantization error with respect to the target polygon is smaller than the preset threshold is obtained.
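The round-based marking can be sketched as below (illustrative Python; the set-based bookkeeping and function names are assumptions, not the patent's data structures). Each insertion marks up to four triangles; when no unmarked triangle remains, the marks are cleared and a new round begins.

```python
def mark_unavailable(unavailable, tri, neighbors):
    """Mark the triangle that just received an insertion point, plus its
    plane-adjacent first triangles, as unusable for this round."""
    unavailable.add(tri)
    unavailable.update(neighbors)

def next_first_triangle(mesh_triangles, unavailable):
    """Pick the next first triangle for insertion, skipping unavailable
    ones; when every triangle is marked, clear the marks (new round)."""
    available = [t for t in mesh_triangles if t not in unavailable]
    if not available:            # whole mesh optimized once this round
        unavailable.clear()      # start the next round from scratch
        available = list(mesh_triangles)
    return available[0]
```

Spreading the insertions across rounds in this way is what keeps the refined triangles from clustering in one region of the mesh.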
In summary, in the three-dimensional reconstruction method provided by the embodiments of the application, after the three-dimensional point cloud data of the object to be reconstructed is obtained, the insertion points are determined based on the principle of minimizing the sum of the angle variances corresponding to the triangles, so as to obtain a target triangle mesh with richer shape expression information; three-dimensional rendering is then performed according to the target triangle mesh to obtain the three-dimensional reconstruction model corresponding to the object to be reconstructed, thereby effectively improving the similarity between the three-dimensional model and the object to be reconstructed and improving the visual effect of the three-dimensional model.
Fig. 8 is a block diagram of a three-dimensional reconstruction apparatus according to an embodiment of the present application.
As shown in fig. 8, a three-dimensional reconstruction apparatus 10 according to an embodiment of the present application includes:
an acquisition module 11, configured to acquire three-dimensional point cloud data of an object to be reconstructed;
the triangulation module 12 is configured to triangulate a two-dimensional point corresponding to the three-dimensional point cloud data to obtain a first triangle mesh;
a first determining module 13, configured to determine an insertion point in at least one first triangle in the first triangle mesh, and construct three second triangles with the insertion point and three vertices of the first triangle respectively; wherein the position of the insertion point is determined based on the principle of minimizing the sum of the angular variances corresponding to the three second triangles;
a second determining module 14, configured to obtain the convex quadrilaterals formed by the three second triangles and their respective adjacent first triangles, and, for any one of the convex quadrilaterals, calculate the sums of the angle variances of the triangles obtained by splitting the convex quadrilateral along its different diagonals, taking the two triangles with the smaller sum of angle variances as new first triangles;
a third determining module 15, configured to add the new first triangles to the first triangle mesh, and delete the first triangles in the first triangle mesh whose positions coincide with the new first triangles, so as to obtain the target triangle mesh;
The rendering module 16 is configured to perform three-dimensional rendering based on the target triangle mesh, so as to obtain a three-dimensional reconstruction model of the object to be reconstructed.
In some embodiments, rendering module 16 is further to:
if the quantization error of the target triangle mesh and the target polygon is smaller than the preset error, three-dimensional rendering is performed based on the target triangle mesh to obtain a three-dimensional reconstruction model of the object to be reconstructed, and the target polygon is a contour graph of the object to be reconstructed determined according to the image of the object to be reconstructed.
In some embodiments, rendering module 16 is further to:
if the quantization error of the target triangle mesh and the target polygon is greater than or equal to the preset error, taking the target triangle mesh as a new first triangle mesh, and returning to the step of determining the insertion point in at least one first triangle in the first triangle mesh and the subsequent steps until the obtained quantization error of the target triangle mesh and the target polygon is less than the preset error.
In some embodiments, the first determining module 13 is further configured to:
traversing each candidate insertion point in the first triangle, and connecting the candidate insertion points with three vertexes of the first triangle respectively to form three second triangles;
Acquiring the inner angle of each second triangle, and determining the angle variance corresponding to each second triangle and the sum of the angle variances corresponding to the three second triangles based on the inner angle of the second triangle;
the position of the candidate insertion point that minimizes the sum of the angular variances corresponding to the three second triangles is determined as the position of the insertion point.
In some embodiments, the first determining module 13 is further configured to:
sequentially calculating the sum of angle variances corresponding to three second triangles corresponding to each candidate insertion point in the first triangle;
and when the sum of the angle variances is identified to be smaller than a preset threshold value, taking the corresponding candidate insertion point with the sum of the angle variances smaller than the threshold value as the insertion point.
In some embodiments, the first determining module 13 is further configured to:
acquiring a first triangle corresponding to the history insertion point and three first triangles adjacent to the first triangle of the history insertion point, and marking the first triangle as an unavailable triangle;
the first triangle used for determining the current insertion point is selected from the first triangles except the unavailable triangle in the first triangle mesh.
In some embodiments, the first determining module 13 is further configured to:
when no other first triangles exist in the first triangle mesh, the marks of the unavailable triangles in the first triangle mesh are cleared.
It should be understood that the units or modules described in the three-dimensional reconstruction apparatus 10 correspond to the individual steps of the method described with reference to fig. 1. Thus, the operations and features described above with respect to the method are equally applicable to the three-dimensional reconstruction apparatus 10 and the units contained therein, and are not described in detail here. The three-dimensional reconstruction apparatus 10 may be implemented in advance in a browser or other security application of an electronic device, or may be loaded into the browser of the electronic device or its security application by downloading or the like. The corresponding units in the three-dimensional reconstruction apparatus 10 may cooperate with units in the electronic device to implement the solutions of the embodiments of the present application.
The division of the modules or units mentioned in the above detailed description is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
In summary, after the three-dimensional point cloud data of the object to be reconstructed is obtained, the three-dimensional reconstruction device provided by the embodiment of the application determines the insertion points based on the principle that the sum of the angle variances corresponding to the triangles is minimized, so as to obtain the target triangle mesh with more abundant shape expression information, and then performs three-dimensional rendering according to the target triangle mesh, so as to obtain the three-dimensional reconstruction model corresponding to the object to be reconstructed, thereby effectively improving the similarity between the three-dimensional model and the object to be reconstructed, and improving the visual effect of the three-dimensional model.
Fig. 9 is a block schematic diagram of a vehicle according to an embodiment of the present application. As shown in fig. 9, the vehicle 100 is provided with a plurality of groups of cameras 110, disposed on the front and rear sides, the left and right sides, and the roof of the vehicle, or on the front and rear sides, the left and right sides, and the floor of the vehicle, wherein each group of cameras includes two fisheye cameras. The vehicle further includes: a three-dimensional reconstruction device 10 implementing the method described in the embodiments of the present application.
Referring now to fig. 10, fig. 10 shows a schematic diagram of a computer system suitable for use in implementing an electronic device or server of an embodiment of the application.
As shown in fig. 10, the computer system includes a Central Processing Unit (CPU) 1001 that can execute various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1002 or a program loaded from a storage section 1008 into a Random Access Memory (RAM) 1003. The RAM 1003 also stores various programs and data required for the operation of the system. The CPU 1001, the ROM 1002 and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
The following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output section 1007 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 1008 including a hard disk or the like; and a communication section 1009 including a network interface card such as a LAN card or a modem. The communication section 1009 performs communication processing via a network such as the internet. A drive 1010 is also connected to the I/O interface 1005 as needed. A removable medium 1011, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1010 as needed, so that a computer program read therefrom is installed into the storage section 1008 as needed.
In particular, the process described above with reference to the flowchart of fig. 2 may be implemented as a computer software program according to an embodiment of the application. For example, embodiments of the present application include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 1009, and/or installed from the removable medium 1011. When the computer program is executed by the Central Processing Unit (CPU) 1001, the above-described functions defined in the system of the present application are performed.
The computer readable medium shown in the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks therein, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The units or modules involved in the embodiments of the present application may be implemented in software or in hardware. The described units or modules may also be provided in a processor, for example, described as: a processor comprising an acquisition module, a subdivision module, an optimization module, and a rendering module. The names of these units or modules do not, in some cases, constitute a limitation on the units or modules themselves; for example, the acquisition module may also be described as "a module for acquiring three-dimensional point cloud data determined based on calibration images acquired by the camera".
As another aspect, the present application also provides a computer readable storage medium, which may be included in the electronic device described in the above embodiments, or may exist alone without being incorporated into the electronic device. The computer readable storage medium stores one or more programs that, when executed by one or more processors, implement the three-dimensional reconstruction method described in the present application.
The above description is only illustrative of the preferred embodiments of the present application and of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the disclosure referred to in the present application is not limited to the specific combinations of the technical features described above, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present application.

Claims (11)

1. A three-dimensional reconstruction method, comprising:
acquiring three-dimensional point cloud data of an object to be reconstructed;
triangulating the two-dimensional points corresponding to the three-dimensional point cloud data to obtain a first triangle mesh;
determining an insertion point in at least one first triangle in the first triangle mesh, and constructing three second triangles with the three vertices of the first triangle respectively using the insertion point; wherein the position of the insertion point is determined based on the principle of minimizing the sum of the angular variances corresponding to the three second triangles;
respectively obtaining convex quadrilaterals composed of each of the three second triangles and its adjacent first triangle; for any one of the convex quadrilaterals, respectively calculating the sum of the angle variances of the two triangles obtained by splitting the convex quadrilateral along each of its two diagonals, and taking the two triangles with the smaller sum of angle variances as new first triangles;
adding the new first triangle into the first triangle mesh, and deleting the first triangle which is overlapped with the new first triangle in the first triangle mesh to obtain a target triangle mesh;
and performing three-dimensional rendering based on the target triangle mesh to obtain a three-dimensional reconstruction model of the object to be reconstructed.
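The angle-variance criterion of claim 1 can be sketched as follows. This is an illustrative reconstruction, not the patented implementation; the function names (`interior_angles`, `angle_variance`, `best_diagonal_split`) are assumptions for the sketch:

```python
import math

def interior_angles(a, b, c):
    """Interior angles (radians) of the triangle with 2D vertices a, b, c."""
    def ang(p, q, r):
        # angle at vertex p between edges p->q and p->r
        v1 = (q[0] - p[0], q[1] - p[1])
        v2 = (r[0] - p[0], r[1] - p[1])
        d = v1[0] * v2[0] + v1[1] * v2[1]
        n = math.hypot(*v1) * math.hypot(*v2)
        return math.acos(max(-1.0, min(1.0, d / n)))
    return [ang(a, b, c), ang(b, c, a), ang(c, a, b)]

def angle_variance(tri):
    """Variance of the three interior angles; zero for an equilateral triangle."""
    angs = interior_angles(*tri)
    m = sum(angs) / 3.0  # always pi/3 for a non-degenerate triangle
    return sum((x - m) ** 2 for x in angs) / 3.0

def best_diagonal_split(quad):
    """quad = (p0, p1, p2, p3) listed in order around a convex quadrilateral.
    Compare the two triangulations induced by the two diagonals and return
    the pair of triangles with the smaller sum of angle variances."""
    p0, p1, p2, p3 = quad
    split_a = [(p0, p1, p2), (p0, p2, p3)]  # diagonal p0-p2
    split_b = [(p0, p1, p3), (p1, p2, p3)]  # diagonal p1-p3
    sum_a = sum(angle_variance(t) for t in split_a)
    sum_b = sum(angle_variance(t) for t in split_b)
    return split_a if sum_a <= sum_b else split_b
```

The criterion favors splits whose triangles are as close to equiangular as possible, similar in spirit to the max-min-angle edge flip of a Delaunay triangulation.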
2. The method according to claim 1, wherein the performing three-dimensional rendering based on the target triangle mesh to obtain a three-dimensional reconstruction model of the object to be reconstructed comprises:
if a quantization error between the target triangle mesh and a target polygon is smaller than a preset error, performing three-dimensional rendering based on the target triangle mesh to obtain the three-dimensional reconstruction model of the object to be reconstructed, wherein the target polygon is a contour graph of the object to be reconstructed determined according to an image of the object to be reconstructed.
3. The method as recited in claim 2, further comprising:
and if the quantization error between the target triangle mesh and the target polygon is greater than or equal to the preset error, taking the target triangle mesh as a new first triangle mesh, and returning to the step of determining an insertion point in at least one first triangle in the first triangle mesh and the subsequent steps, until the quantization error between the obtained target triangle mesh and the target polygon is smaller than the preset error.
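Claims 2 and 3 together describe a refinement loop that repeats until the quantization error drops below the preset error. A minimal generic sketch, with `refine_step` and `quant_error` as hypothetical callables standing in for the mesh-refinement step and the mesh-to-polygon error measure:

```python
def refine_until_converged(mesh, refine_step, quant_error, preset_error, max_iter=100):
    """Repeat the loop of claims 2-3: if the quantization error of the current
    target mesh is below the preset error, stop and render; otherwise treat it
    as the new first triangle mesh and refine again (capped at max_iter)."""
    for _ in range(max_iter):
        if quant_error(mesh) < preset_error:
            break
        mesh = refine_step(mesh)
    return mesh
```

The `max_iter` cap is an added safeguard, not part of the claim, to keep the loop bounded when the error never falls below the threshold.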
4. The method of claim 1, wherein determining the location of the insertion point comprises:
traversing each candidate insertion point in the first triangle, and respectively connecting each candidate insertion point with the three vertices of the first triangle to form three second triangles;
acquiring the interior angles of each second triangle, and determining, based on the interior angles, the angle variance corresponding to each second triangle and the sum of the angle variances corresponding to the three second triangles;
and determining the position of the candidate insertion point with the minimum sum of the angle variances corresponding to the three second triangles as the position of the insertion point.
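The exhaustive traversal of claim 4 can be sketched as below. The barycentric grid is an assumed candidate-sampling scheme (the claim does not specify how candidates are enumerated), and all function names are placeholders:

```python
import math

def _variance(tri):
    """Angle variance of one triangle given as three 2D vertices."""
    def ang(p, q, r):
        v1 = (q[0] - p[0], q[1] - p[1])
        v2 = (r[0] - p[0], r[1] - p[1])
        d = v1[0] * v2[0] + v1[1] * v2[1]
        n = math.hypot(*v1) * math.hypot(*v2)
        return math.acos(max(-1.0, min(1.0, d / n)))
    a, b, c = tri
    angs = (ang(a, b, c), ang(b, c, a), ang(c, a, b))
    m = sum(angs) / 3.0
    return sum((x - m) ** 2 for x in angs) / 3.0

def variance_sum(p, tri):
    """Sum of angle variances of the three second triangles formed by
    connecting candidate point p to the vertices of the first triangle."""
    a, b, c = tri
    return sum(_variance(t) for t in ((p, a, b), (p, b, c), (p, c, a)))

def barycentric_candidates(tri, n=10):
    """Sample strictly interior candidate points on a barycentric grid."""
    a, b, c = tri
    pts = []
    for i in range(1, n):
        for j in range(1, n - i):
            u, v = i / n, j / n
            w = 1.0 - u - v
            pts.append((u * a[0] + v * b[0] + w * c[0],
                        u * a[1] + v * b[1] + w * c[1]))
    return pts

def best_insertion_point(tri, candidates=None):
    """Claim 4: traverse every candidate, keep the one minimizing the sum."""
    cands = candidates if candidates is not None else barycentric_candidates(tri)
    return min(cands, key=lambda p: variance_sum(p, tri))
```

For an equilateral first triangle the minimizer is expected near the centroid, where the three second triangles are congruent.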
5. The method of claim 1, wherein determining the location of the insertion point comprises:
sequentially calculating, for each candidate insertion point in the first triangle, the sum of the angle variances corresponding to the three second triangles;
and when a sum of the angle variances smaller than a preset threshold is identified, taking the corresponding candidate insertion point as the insertion point.
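Claim 5 trades the exhaustive search of claim 4 for early stopping: accept the first candidate whose variance sum falls below the threshold. A generic sketch, with `first_below_threshold` a hypothetical helper name:

```python
def first_below_threshold(candidates, score, threshold):
    """Return the first candidate whose score is below the threshold,
    or None if no candidate qualifies (claim 5's early-stopping search)."""
    for cand in candidates:
        if score(cand) < threshold:
            return cand
    return None
```

Here `score` would be the variance-sum function over the three second triangles; early stopping avoids scoring the remaining candidates once an acceptable one is found.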
6. A method according to claim 3, further comprising:
acquiring the first triangle corresponding to a history insertion point and the three first triangles adjacent to it, and marking these first triangles as unavailable triangles;
and selecting, from the first triangles in the first triangle mesh other than the unavailable triangles, a first triangle used for determining the current insertion point.
7. The method as recited in claim 6, further comprising:
when no other first triangles are present in the first triangle mesh, clearing the marks of the unavailable triangles in the first triangle mesh.
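The marking scheme of claims 6 and 7 can be sketched with triangles represented by ids and the marks held in a set; this representation and the helper names are assumptions for the sketch:

```python
def mark_after_insertion(tri_id, neighbor_ids, unavailable):
    """Claim 6: mark the first triangle that held the history insertion
    point, plus its three adjacent first triangles, as unavailable."""
    unavailable.add(tri_id)
    unavailable.update(neighbor_ids)

def select_next_triangle(mesh_ids, unavailable):
    """Pick the next first triangle for insertion from the triangles that
    are not marked; claim 7: when none remain, clear all marks first."""
    available = [t for t in mesh_ids if t not in unavailable]
    if not available:
        unavailable.clear()
        available = list(mesh_ids)
    return available[0]
```

Spreading insertions away from recently refined triangles in this way keeps successive insertion points from clustering in one region of the mesh.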
8. A three-dimensional reconstruction apparatus, comprising:
the acquisition module is used for acquiring three-dimensional point cloud data of the object to be reconstructed;
the triangulation module is used for triangulating the two-dimensional points corresponding to the three-dimensional point cloud data to obtain a first triangle mesh;
a first determining module, configured to determine an insertion point in at least one first triangle in the first triangle mesh, and construct three second triangles with three vertices of the first triangle using the insertion point, respectively; wherein the position of the insertion point is determined based on the principle of minimizing the sum of the angular variances corresponding to the three second triangles;
the second determining module is used for respectively obtaining convex quadrilaterals composed of each of the three second triangles and its adjacent first triangle, and, for any one of the convex quadrilaterals, respectively calculating the sum of the angle variances of the two triangles obtained by splitting the convex quadrilateral along each of its two diagonals, and taking the two triangles with the smaller sum of angle variances as new first triangles;
the third determining module is configured to add the new first triangle to the first triangle mesh, and delete the first triangle in the first triangle mesh that coincides in position with the new first triangle, to obtain a target triangle mesh;
and the rendering module is used for performing three-dimensional rendering based on the target triangle mesh to obtain a three-dimensional reconstruction model of the object to be reconstructed.
9. A vehicle, characterized in that the vehicle is provided with multiple groups of cameras, the multiple groups of cameras being arranged on the front and rear sides, the left and right sides, and the roof of the vehicle, or on the front and rear sides, the left and right sides, and the bottom of the vehicle, wherein each group of cameras comprises two fisheye cameras, and the vehicle further comprises:
a three-dimensional reconstruction apparatus performing the method of any one of claims 1-7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the three-dimensional reconstruction method according to any one of claims 1-7 when executing the program.
11. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the three-dimensional reconstruction method as claimed in any one of claims 1-7.
CN202210183566.7A 2022-02-25 2022-02-25 Three-dimensional reconstruction method and device, and vehicle, equipment and medium based on three-dimensional reconstruction method and device Pending CN116704151A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210183566.7A CN116704151A (en) 2022-02-25 2022-02-25 Three-dimensional reconstruction method and device, and vehicle, equipment and medium based on three-dimensional reconstruction method and device

Publications (1)

Publication Number Publication Date
CN116704151A true CN116704151A (en) 2023-09-05

Family

ID=87839800

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210183566.7A Pending CN116704151A (en) 2022-02-25 2022-02-25 Three-dimensional reconstruction method and device, and vehicle, equipment and medium based on three-dimensional reconstruction method and device

Country Status (1)

Country Link
CN (1) CN116704151A (en)

Similar Documents

Publication Publication Date Title
CN110163930B (en) Lane line generation method, device, equipment, system and readable storage medium
CN108520536B (en) Disparity map generation method and device and terminal
CN111640180B (en) Three-dimensional reconstruction method and device and terminal equipment
JP2003519421A (en) Method for processing passive volume image of arbitrary aspect
CN112465970B (en) Navigation map construction method, device, system, electronic device and storage medium
US9147279B1 (en) Systems and methods for merging textures
CN112097732A (en) Binocular camera-based three-dimensional distance measurement method, system, equipment and readable storage medium
CN108399631B (en) Scale invariance oblique image multi-view dense matching method
CN110619674B (en) Three-dimensional augmented reality equipment and method for accident and alarm scene restoration
Kuschk Large scale urban reconstruction from remote sensing imagery
CN112102489B (en) Navigation interface display method and device, computing equipment and storage medium
CN111932627B (en) Marker drawing method and system
CN112967344A (en) Method, apparatus, storage medium, and program product for camera external reference calibration
CN110148173B (en) Method and device for positioning target in vehicle-road cooperation, electronic equipment and medium
CN111881985A (en) Stereo matching method, device, terminal and storage medium
CN113421217A (en) Method and device for detecting travelable area
CN116468870B (en) Three-dimensional visual modeling method and system for urban road
CN113566793A (en) True orthoimage generation method and device based on unmanned aerial vehicle oblique image
CN112639822B (en) Data processing method and device
CN116012805B (en) Target perception method, device, computer equipment and storage medium
CN113658262A (en) Camera external parameter calibration method, device, system and storage medium
CN116704151A (en) Three-dimensional reconstruction method and device, and vehicle, equipment and medium based on three-dimensional reconstruction method and device
CN114494582A (en) Three-dimensional model dynamic updating method based on visual perception
CN113850293A (en) Positioning method based on multi-source data and direction prior joint optimization
Habib et al. Integration of lidar and airborne imagery for realistic visualization of 3d urban environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination