CN115984456A - Texture mapping method and device, electronic equipment and storage medium - Google Patents

Texture mapping method and device, electronic equipment and storage medium

Info

Publication number
CN115984456A
Authority
CN
China
Prior art keywords
image
pose
target
ground surface
target ground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211520566.8A
Other languages
Chinese (zh)
Inventor
郭帅威
丁文东
高航
万国伟
白宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202211520566.8A priority Critical patent/CN115984456A/en
Publication of CN115984456A publication Critical patent/CN115984456A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The disclosure provides a texture mapping method and device, an electronic device and a storage medium, and relates to the field of image processing, in particular to the technical field of three-dimensional reconstruction. The specific implementation scheme is as follows: a target pose for the line where the projection point of a target ground surface patch falls in an image is obtained from the image first-line pose; the projection point of the target ground surface patch in the image is then determined based on the spatial coordinates of the target ground surface patch and its corresponding target pose, and texture mapping is performed on the target ground surface patch based on the projection point coordinates. By applying the embodiments of the disclosure, the target ground surface patch is projected using the image pose of the line containing its projection point, which improves the accuracy of the pose associated with the target ground surface patch, corrects the pixel drift caused by the rolling shutter's line-by-line exposure and exposure delay, and improves the accuracy of the computed projection point, thereby improving texture mapping precision and, in turn, the accuracy of map element labeling.

Description

Texture mapping method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to the field of three-dimensional reconstruction technologies.
Background
Three-dimensional reconstruction has wide application in various fields, such as high-precision map construction, cultural relic reconstruction, scene reconstruction and the like. Texture mapping is an important step in three-dimensional reconstruction, and means that texture information of a two-dimensional image is projected to a three-dimensional scene or object corresponding to fused point cloud data to obtain a texture map of the scene or object.
Disclosure of Invention
The disclosure provides a texture mapping method, a texture mapping device, an electronic device and a storage medium, which are used for improving the accuracy of texture mapping.
According to an aspect of the present disclosure, there is provided a method of texture mapping, including:
acquiring a target ground surface patch space coordinate based on the fused point cloud data, wherein the ground surface patch is obtained by dividing the ground corresponding to the fused point cloud data based on a preset resolution;
acquiring an image comprising the target ground surface patch based on the spatial coordinates of the target ground surface patch;
acquiring an image first line pose of the image, wherein the image first line pose is the pose when a camera acquires the image;
estimating the target pose of the projection point of the target ground surface patch in the image based on the first line pose of the image;
determining target projection points of the target ground surface patches in the image based on the space coordinates of the target ground surface patches and the target poses;
texture mapping is performed on the target ground patch based on a target projection point of the target ground patch in the image.
According to another aspect of the present disclosure, there is provided an apparatus for texture mapping, including:
the space coordinate acquisition module is used for acquiring space coordinates of a target ground surface patch based on the fused point cloud data, wherein the ground surface patch is obtained by dividing the ground corresponding to the fused point cloud data based on a preset resolution;
the image acquisition module is used for acquiring an image comprising the target ground surface patch based on the spatial coordinates of the target ground surface patch;
the first-line pose acquisition module is used for acquiring an image first-line pose of the image, wherein the image first-line pose is the pose when the camera acquires the image;
the target pose estimation module is used for estimating the target pose of the projection point of the target ground surface patch in the image based on the pose of the first line of the image;
the target projection point determining module is used for determining target projection points of the target ground surface patch in the image based on the space coordinates of the target ground surface patch and the target pose;
and the texture mapping module is used for performing texture mapping on the target ground surface patch based on the target projection point of the target ground surface patch in the image.
According to an aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any of the methods of texture mapping described above.
According to an aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of texture mapping as described in any one of the above.
According to an aspect of the disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of texture mapping as described in any one of the above.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic flow diagram of a first embodiment of a method of texture mapping provided in accordance with the present disclosure;
FIG. 2 is a schematic diagram of the calculation of target ground patch coordinates in the method of texture mapping provided by the present disclosure;
FIG. 3a is a schematic flow chart of constructing a height mesh of a target point cloud in the texture mapping method provided by the present disclosure;
FIG. 3b is a schematic diagram of a triangular mesh;
FIG. 4 is a schematic diagram of a rolling shutter camera exposure mechanism;
FIG. 5 is a schematic flow chart of obtaining a pose of a target corresponding to a ground patch of the target in the method of texture mapping provided by the present disclosure;
FIG. 6 is a block diagram of a method of texture mapping provided in accordance with the present disclosure;
FIG. 7 is a schematic view of a first embodiment of an apparatus for texture mapping provided in accordance with the present disclosure;
FIG. 8 is a block diagram of an electronic device for implementing a method of texture mapping in accordance with an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of embodiments of the present disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The three-dimensional reconstruction process generally includes fusing collected point cloud data of a scene and an object, extracting texture information from the point cloud data, and mapping the scene or the object corresponding to the point cloud data to obtain a final reconstruction result. Taking the construction of a high-precision map as an example, the process of constructing the high-precision map generally includes fusing point cloud data acquired for a road, extracting road surface elements such as lane lines, traffic lights, road edges and the like from the point cloud data, and forming a road texture map to map the fused road, thereby obtaining the high-precision map.
When extracting the road surface elements from the point cloud data, the road surface elements are generally labeled automatically or manually using the point cloud reflection value information. However, in some situations or scenes, the point cloud reflection value is not stable, so it is difficult to extract clear road surface elements. Illustratively, when a lane line is worn, the difference between the lane line reflection value and the ground reflection value is small, and it is difficult to extract a clear lane line element. Images, by contrast, capture abundant texture information, and even worn lane lines can still be seen clearly in an image. Therefore, in the related art, the image texture information is projected onto the road texture map corresponding to the fused point cloud data by a texture mapping method, so as to improve the labeling accuracy of high-precision map elements.
A Global shutter (Global shutter) camera or a Rolling shutter (Rolling shutter) camera is generally used to capture an image. Because the rolling shutter camera acquires images in a line-by-line exposure mode through the sensor when acquiring the images, the exposure time of pixels in different lines in one acquired image is different, and the camera poses (referred to as image poses herein) corresponding to the pixels in different lines in the same image are also different. In the related art, when texture mapping is performed on an image acquired based on a rolling shutter camera, an image pose error caused by a rolling shutter exposure mode is generally ignored, so that the error is transmitted to a texture map, a map element of the texture mapping drifts, the texture mapping precision is poor, and if a lane line on the texture map cannot be aligned with point cloud data, the map element labeling accuracy is poor.
In order to improve the texture mapping precision, the disclosure provides a texture mapping method, a texture mapping device, an electronic device and a storage medium. The following first exemplifies the method of texture mapping provided by the present disclosure:
the texture mapping method provided by the disclosure can be applied to any electronic equipment with texture mapping. The electronic device may be a server, a computer, a mobile terminal, and the like.
As shown in fig. 1, fig. 1 is a schematic flow diagram of a first embodiment of a method for texture mapping according to the present disclosure, which may specifically include the following steps:
s101, acquiring a target ground surface patch space coordinate based on the fused point cloud data, wherein the ground surface patch is obtained by dividing the ground corresponding to the fused point cloud data based on a preset resolution;
step S102, acquiring an image comprising the target ground surface patch based on the space coordinate of the target ground surface patch;
s103, acquiring an image first-line pose of the image, wherein the image first-line pose is the pose when the camera acquires the image;
s104, estimating the target pose of the projection point of the target ground surface patch in the image based on the image head line pose;
step S105, determining a target projection point of the target ground surface patch in the image based on the space coordinate of the target ground surface patch and the target pose;
and S106, performing texture mapping on the target ground surface patch based on the target projection point of the target ground surface patch in the image.
By applying the embodiment of the disclosure, the image pose of the line where the projection point of the target ground surface patch falls is estimated based on the image first-line pose of the image containing the target ground surface patch, and the target ground surface patch is projected based on that line pose. In this way, the image pose error caused by the line-by-line exposure of the rolling shutter is taken into account, the accuracy of the image pose associated with the target ground surface patch is improved, the pixel drift caused by the rolling shutter's line-by-line exposure and exposure delay is corrected, and the calculation accuracy of the projection point is improved, thereby improving the texture mapping accuracy and, in turn, the map element labeling accuracy.
The following is an exemplary description of the above steps S101-S106:
In step S101, the ground corresponding to the fused point cloud data may be divided according to a preset resolution to obtain each ground patch. As described above, texture mapping refers to projecting texture information of a two-dimensional image onto the three-dimensional scene or object corresponding to the fused point cloud data to obtain a texture map of the scene or object. The preset resolution represents the ground area covered by one pixel grid in the texture map.
The preset resolution can be set according to actual needs. For example, the resolution may be 0.03125m, that is, each pixel grid in the texture map obtained by texture mapping the ground represents a ground patch having a size of 0.03125m by 0.03125m. Since the area of each ground patch is small, in the embodiment of the present disclosure, each ground patch may be regarded as a point.
The fused point cloud data may be ground point cloud data after fusion. And the ground corresponding to the fused point cloud data is the actual road corresponding to the fused ground point cloud data. The fused ground point cloud data is obtained by fusing the ground point cloud data. The ground point cloud data is point cloud data representing the ground, and for example, a horizontal plane may be identified from the point cloud data and marked as the ground, so as to obtain the ground point cloud data. The ground point cloud data may also be segmented using Principal Component Analysis (PCA), which is not specifically limited by the present disclosure.
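As an illustration of the plane-based segmentation idea mentioned above, the following is a minimal sketch, assuming the fused cloud is an N×3 numpy array in the coordinate system of the fused point cloud data; the thresholds and the use of a single dominant plane are simplifying assumptions, not the disclosure's exact procedure.

```python
import numpy as np

def segment_ground_pca(points, normal_z_min=0.95, dist_thresh=0.2):
    """Rough ground segmentation: fit the dominant plane with PCA and keep
    points close to it when the plane normal is near-vertical.
    `points` is an (N, 3) array in the coordinate system of the fused cloud."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The right singular vector with the smallest singular value approximates
    # the normal of the dominant (hopefully ground) plane.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    if normal[2] < 0:
        normal = -normal
    if normal[2] < normal_z_min:          # dominant plane is not horizontal
        return np.zeros(len(points), dtype=bool)
    dist = np.abs(centered @ normal)      # point-to-plane distance
    return dist < dist_thresh
```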
As an embodiment, when the ground corresponding to the fused point cloud data is divided according to the preset resolution, the ground corresponding to the fused ground point cloud data within a preset range may be divided according to the preset resolution. For example, the ground corresponding to the fused point cloud data within a range of 16m × 16m may be divided at the above resolution of 0.03125m to obtain 512 × 512 pixel grids, where each pixel grid represents a ground patch with a size of 0.03125m × 0.03125m.
Then, each ground patch may be used as a target ground patch, and the above steps S101 to S106 are performed to perform texture mapping on each ground patch, so as to obtain a texture map of each ground patch.
Of course, the whole ground may also be divided in advance according to the preset resolution to obtain a plurality of ground patches, where the whole ground is larger than the ground corresponding to the fused ground point cloud data. The ground patches corresponding to the points marked as ground in the fused point cloud data are then respectively taken as target ground surface patches.
The target ground surface patch is obtained by dividing the ground corresponding to the fused point cloud data based on the preset resolution, namely by dividing the ground corresponding to the fused point cloud data according to a regular grid. Therefore, the ground coordinates of the target ground patch can be calculated based on the ground coordinates and the preset resolution, and the ground coordinates are two-dimensional plane coordinates.
In a high-precision map construction scene, the coordinates of each point in the fused point cloud data correspond one-to-one to road coordinates in the actual space. Therefore, as a specific implementation manner, the ground coordinates corresponding to any point cloud data in the preset range may be obtained, and the coordinates of each ground patch in the preset range may be calculated based on those ground coordinates and the preset resolution. For example, for the ground corresponding to the point cloud data in the range of 16m × 16m, the ground coordinates corresponding to the point cloud data at the four vertices may be obtained. The coordinates of the target ground patch are then calculated from the four vertex ground coordinates according to the position of the target ground patch. As shown in fig. 2, if, in the predefined coordinate system, the coordinates of the four ground vertices are (0m, 0m), (0m, 16m), (16m, 0m) and (16m, 16m), and the pixel position of the target ground patch in the ground texture map of the preset range is (100pt, 100pt), then, since each pixel grid represents 0.03125m × 0.03125m of ground, the coordinates of the target ground patch are (3.125m, 3.125m). The predefined coordinate system, that is, the coordinate system used by the fused point cloud data, may be an ENU coordinate system (east as the X axis, north as the Y axis, and height as the Z axis), and may be set according to actual needs.
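The following is a minimal sketch of the pixel-to-ground-coordinate mapping in the example above, assuming the tile origin vertex is at (0m, 0m); the names and the tile-origin parameter are illustrative.

```python
RESOLUTION = 0.03125   # metres of ground represented by one pixel grid

def patch_plane_coords(pixel_u, pixel_v, tile_origin=(0.0, 0.0)):
    """Map a pixel position in the ground texture map to the two-dimensional
    plane coordinates of the corresponding ground patch (treated as a point)."""
    x = tile_origin[0] + pixel_u * RESOLUTION
    y = tile_origin[1] + pixel_v * RESOLUTION
    return x, y

# Pixel (100, 100) in a tile whose origin vertex is (0 m, 0 m) -> (3.125, 3.125)
print(patch_plane_coords(100, 100))
```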
As described above, in texture mapping, points in a three-dimensional object or scene are projected into a two-dimensional image to obtain information of the three-dimensional object or scene from the image. Therefore, the spatial coordinates of the target ground patch obtained in step S101 should be three-dimensional coordinates. And the two-dimensional plane coordinates of the target ground surface patch can be obtained through the process. Therefore, the elevation of the road position conforming to the two-dimensional coordinates can be acquired from the elevation information of the point cloud data after fusion based on the two-dimensional plane coordinates of the target ground surface patch, and the two-dimensional plane coordinates of the target ground surface patch and the elevation information form the space coordinates of the target ground surface patch.
As a specific embodiment, as shown in fig. 3a, the elevation information of the point cloud data after fusion may be obtained by the following steps:
s301, aiming at each point cloud data acquisition point, acquiring a plane formed by point cloud data in a preset range of the point cloud data acquisition point;
step S302, obtaining an initial point cloud elevation grid based on the elevation of each point cloud data acquisition point and each plane;
and S303, optimizing the initial point cloud elevation grid according to a plane constraint function and a smooth constraint function to obtain a target point cloud elevation grid, wherein the plane constraint function is used for aligning each plane with the elevation difference of corresponding point cloud data acquisition points smaller than a preset threshold value, and the smooth constraint is used for enabling each plane to be connected smoothly.
The following exemplifies the above steps S301 to S303:
In practical applications, the acquired point cloud data generally include the three-dimensional coordinates of each point (the X-axis, Y-axis and Z-axis coordinates), the acquisition time, the acquisition pose and other information. The acquisition pose refers to the pose of the point cloud data acquisition device, such as a radar or a binocular camera, when the point cloud data are acquired, and includes the position and orientation of the point cloud data acquisition device; the position of the point cloud data acquisition device is the acquisition point of the point cloud data. Therefore, in step S301, point cloud data acquisition point information may be obtained from the point cloud data, and a plurality of planes may be obtained by performing plane construction on the point cloud data within a second preset range of each acquisition point. The second preset range can be set according to actual needs, for example, a range of 1m or 2m around the point cloud data acquisition point.
After the planes are obtained, the planes can be merged and layered according to the elevation of the corresponding acquisition point when the planes are constructed. For example, planes with an elevation difference smaller than a preset threshold may be merged into the same layer, planes with an elevation difference larger than the preset threshold may be divided into planes of different layers, and the point cloud data may be gridded to obtain an initial elevation grid. The initial elevation mesh may be a triangular mesh or a quadrilateral mesh, which is not specifically limited by the present disclosure. The preset threshold value can be set according to actual needs.
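A sketch of how this merge-and-layer step might look, assuming each locally constructed plane carries the elevation of its acquisition point; the data layout and the 1 m threshold are assumptions for illustration only.

```python
def layer_planes(planes, elev_threshold=1.0):
    """Group locally fitted planes into layers: planes whose acquisition-point
    elevations differ by less than `elev_threshold` fall into the same layer.
    `planes` is a list of dicts, each with at least an 'elevation' key."""
    layers = []                                   # each layer is a list of planes
    for plane in sorted(planes, key=lambda p: p["elevation"]):
        if layers and plane["elevation"] - layers[-1][-1]["elevation"] < elev_threshold:
            layers[-1].append(plane)
        else:
            layers.append([plane])
    return layers
```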
And then, optimizing the initial point cloud elevation grid based on a preset plane constraint function and a smooth constraint function to obtain a more accurate height grid, so that the accuracy of subsequent elevation acquisition is improved. The plane constraint function is used for aligning each plane with the elevation difference of the corresponding point cloud data acquisition points smaller than the preset threshold, and the smooth constraint is used for enabling each plane to be connected smoothly.
The above optimization process is exemplified below using a triangular mesh. In a triangular mesh, each cell of the grid is divided by its lattice points and a center point into 4 triangular patches, as shown in fig. 3b. Each triangular patch is a mesh optimization unit.
The plane constraint function and the smooth constraint function can be set according to actual needs. For example, the plane constraint function cost _ plane and the smooth constraint function cost _ smooth may be:
cost_plane = Σ_{i=1..k} (n · (x_i − v1))^2
cost_smooth = Σ_{i=1..m} (z_i − z_mean)^2
In the plane constraint function, k is the number of point cloud points in a triangular patch, n is the normal vector of the triangular patch, computed from the coordinates of the three vertices v1, v2 and v3 of the current height-grid triangular patch, and x_i is the coordinate of the i-th point cloud point in the triangular patch. In the smooth constraint function, m is the number of elevations to be optimized, z_i is the elevation of the grid point or center point currently to be optimized, and z_mean is the average elevation of the 4 points within a third preset range of the point currently to be optimized. The third preset range may be set according to actual needs, which is not specifically limited in the present disclosure.
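Since the two constraint functions appear above only as image references, the following sketch uses reconstructed forms consistent with the stated definitions — squared point-to-plane distance for the plane constraint and squared deviation from the local mean elevation for the smooth constraint; the exact formulas in the original may differ.

```python
import numpy as np

def cost_plane(patch_vertices, patch_points):
    """Squared point-to-plane distance of the k point cloud points inside one
    triangular patch; the normal n is computed from the patch vertices v1, v2, v3."""
    v1, v2, v3 = (np.asarray(v, dtype=float) for v in patch_vertices)
    n = np.cross(v2 - v1, v3 - v1)
    n = n / np.linalg.norm(n)
    return float(np.sum(((np.asarray(patch_points, dtype=float) - v1) @ n) ** 2))

def cost_smooth(z_to_optimize, z_neighbour_means):
    """Squared deviation of each grid-point/centre-point elevation z_i from the
    mean elevation z_mean of the points in its neighbourhood."""
    z = np.asarray(z_to_optimize, dtype=float)
    z_mean = np.asarray(z_neighbour_means, dtype=float)
    return float(np.sum((z - z_mean) ** 2))
```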
And optimizing the initial point cloud height grid according to the plane constraint function and the smooth constraint function, namely minimizing the plane constraint function and the smooth constraint function, and thus obtaining the target point cloud height grid. The target point cloud height grid can store the elevations of grid points and center points, and the elevations of corresponding positions can be accessed in the height grid through any plane coordinates (x, y).
Therefore, the elevation data of the target ground surface patch can be obtained from the target point cloud height grid based on the two-dimensional coordinates of the target ground surface patch, and the two-dimensional coordinates and the elevation data of the target ground surface patch form the space coordinates of the target ground surface patch.
In a high-precision map scene, because of the existence of multi-layer structures such as overpasses, multiple elevation data may be acquired for the same two-dimensional coordinate, that is, the two-dimensional coordinate of the target ground surface patch may mark multiple road positions. Therefore, the two-dimensional coordinates of the target ground patch and the corresponding elevations of the target ground patch can be respectively used as the space coordinates of the target ground patch for subsequent calculation.
Of course, the road surface corresponding to each elevation in the point cloud data after fusion can be divided according to the preset resolution to obtain the ground surface patches of each layer of road surface, and the three-dimensional coordinates of each ground surface patch can be obtained.
In practical applications, point cloud data acquisition and image acquisition are performed simultaneously by the same carrier, such as an unmanned aerial vehicle or an acquisition vehicle, which carries both the point cloud data acquisition device and the image acquisition device. The point cloud data acquisition device may be a laser radar (Lidar), a binocular camera, or the like. The image acquisition device is a rolling shutter camera. The pose transformation relationship between the point cloud data acquisition device and the image acquisition device is usually determined in advance. Specifically, the pose of the image acquisition device at a given moment can be obtained by transforming the pose of the point cloud data acquisition device at the same moment using the camera extrinsic parameters. The camera extrinsic parameters are typically determined by calibration prior to use.
The texture mapping method provided by the present disclosure is exemplarily explained below by taking a point cloud data acquisition device as a laser radar and an image acquisition device as an airborne rolling shutter camera.
After determining the spatial coordinates of the target ground patch to be mapped, an image containing the target ground patch may be determined. There may be one or more images containing the target ground patch, which is not limited in this disclosure.
And then, acquiring the corresponding image poses of the images, namely the poses of the roller shutter camera when acquiring the images. Specifically, the pose of the point cloud data acquisition equipment corresponding to the acquisition time can be acquired based on the acquisition time of the image; and carrying out pose transformation on the pose of the point cloud data acquisition equipment according to the camera external parameters for acquiring the image to obtain the image pose of the image.
Illustratively, if the acquisition time of the image is t1, the pose of the radar acquiring the point cloud data at the time t1 can be acquired, and the pose is subjected to pose transformation according to the camera external parameter matrix to obtain the image pose of the image.
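A minimal sketch of this pose transformation, assuming poses are 4×4 homogeneous matrices; lookup_lidar_pose is a hypothetical helper returning the (corrected) radar pose at a given timestamp, and T_lidar_cam is the pre-calibrated camera extrinsic expressed as the camera pose in the lidar frame.

```python
def image_first_row_pose(t_image, lookup_lidar_pose, T_lidar_cam):
    """Image first-row pose: take the (corrected) radar pose at the image
    timestamp and apply the lidar-to-camera extrinsic calibration.
    `lookup_lidar_pose(t)` is a hypothetical helper returning a 4x4 world<-lidar
    pose at time t; `T_lidar_cam` is the 4x4 camera pose in the lidar frame."""
    T_world_lidar = lookup_lidar_pose(t_image)    # world <- lidar at time t_image
    T_world_cam = T_world_lidar @ T_lidar_cam     # world <- camera
    return T_world_cam
```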
As a specific implementation mode, the radar pose can be corrected, and the corrected radar pose corresponding to the acquisition time is obtained, so that the projection precision is further improved.
Because the rolling shutter camera collects images in a line-by-line exposure mode, when an unmanned aerial vehicle or a collection vehicle carries the rolling shutter camera to collect image data in a moving mode, the pose of each line of the camera in the obtained image is different, as shown in fig. 4, fig. 4 shows an exposure mechanism of the rolling shutter:
the time for actually acquiring one frame of image by the rolling shutter camera includes an exposure time and an image output time. Specifically, when a frame of image is acquired by the rolling shutter camera, the frame of image is acquired in a line-by-line exposure mode, and the exposure time of each line of pixels in the image is different from the image output time.
However, the time recorded by the rolling shutter camera for acquiring one frame of image refers to the time between the end of exposure of the first row of pixels of the previous frame of image and the end of exposure of the first row of pixels of the current frame of image. The difference between the image output time of the last line of pixels of the previous frame image and the exposure end time of the first line of pixels of the current frame image is the acquisition delay time between the two frame images. That is, the image acquisition time recorded in the rolling shutter camera is actually the acquisition time of the first row of pixels in the image.
Therefore, after the radar pose is transformed using the camera extrinsic parameters, the obtained image pose is the image pose corresponding to the first row of pixels in the image, which is why the image pose of the image is referred to in this disclosure as the image first-line pose. Because the pose of the point cloud data acquisition device and the acquisition time of the image can be obtained directly from the point cloud data and the image, an accurate image first-line pose can be obtained through the above steps. However, the projection point corresponding to the target ground surface patch does not necessarily fall on the first line of the image, so the pose of the line where the projection point of the target ground surface patch falls in the image can be calculated based on the image first-line pose.
As a specific implementation manner, for each image, the pose of the projection line of the target ground surface patch in the image may be estimated in an iterative manner. Specifically, as shown in fig. 5, the method may include the following steps:
s501, determining an initial projection point of the target ground surface patch in the image based on the space coordinate of the target ground surface patch and the first line pose of the image;
step S502, determining an image pose corresponding to the initial projection point as a candidate image pose;
step S503, determining the current projection point of the target ground surface patch in the image based on the space coordinate of the target ground surface patch and the candidate image pose;
and step S504, judging whether a preset convergence condition is reached. If the preset convergence condition is not reached, performing step S505, and if the preset convergence condition is reached, performing step S506;
step S505, determining an image pose corresponding to the current projection point as a new candidate image pose, and returning to step S503;
and S506, determining the image pose corresponding to the current projection point as a target pose.
The following exemplifies the above steps S501 to S506:
As a specific implementation manner, in step S501, the initial projection point coordinates may be obtained according to the following formula:
x = π(K [R1 t1] X)    formula 1
In formula 1, X is the spatial coordinate of the target ground patch and x is the coordinate of the corresponding projection point, K is the camera intrinsic matrix, π denotes the projection accounting for lens distortion, and [R1 t1] is the image first-line pose, which can be expressed using quaternions. The above projection point coordinates are uv coordinates: any pixel in the image can be located by its two-dimensional uv coordinates, where u is the column coordinate in the horizontal direction and v is the row coordinate in the vertical direction.
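A sketch of formula 1 using a plain pinhole model; the lens-distortion part of π(·) is omitted since its exact form is not given here, and the (R, t) convention (world-to-camera) is an assumption.

```python
import numpy as np

def project_patch(X_world, K, R, t):
    """Project the patch's 3-D coordinate X into the image: x = pi(K [R t] X).
    Lens distortion is not modelled here; (R, t) is the row pose being used
    (the image first-line pose on the first pass), taken as world-to-camera."""
    X_cam = R @ np.asarray(X_world, dtype=float) + t   # world -> camera frame
    uvw = K @ X_cam                                    # homogeneous pixel coords
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    return u, v   # u: column coordinate, v: row coordinate
```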
And then, calculating the image pose corresponding to the initial projection point coordinates.
As an implementation manner, feature point coordinates may be extracted from an image, a transformation matrix may be constructed based on the image feature point coordinates and corresponding spatial coordinates of a ground surface patch of a target, and an image pose corresponding to the projection point coordinates may be obtained based on the transformation matrix. However, this method is affected by the feature point selection method, and is low in accuracy and flexibility.
As another specific embodiment, the radar and the rolling shutter camera may be assumed to move at a constant speed in the process of acquiring point cloud data and images in the present disclosure. Therefore, if the image pose of each image when the acquisition is started and the image pose of each image when the acquisition is finished are known, the image pose corresponding to any pixel in the image can be estimated by interpolating the pose of the first line of the image and the pose of the last line of the image, so that the pose of any pixel in the acquired image of the rolling shutter camera can be estimated, and the accuracy and the flexibility of acquiring the corresponding coordinate of the projection point are improved.
Illustratively, the candidate image poses corresponding to the projection point coordinates can be obtained by the following steps:
s1, acquiring the line coordinate of the current projection point.
S2, carrying out interpolation calculation on the first line pose and the last line pose of the image to obtain an image pose corresponding to the current projection point, and using the image pose as a new candidate image pose.
The position and posture of the tail line of the image can be calculated based on the position and posture of the head line of the image. As a specific embodiment, the step S2 may include:
s21, acquiring the tail line acquisition time of the image based on the acquisition time of the image and the exposure time of the camera;
s22, acquiring the tail row acquisition time of the image based on the acquisition time of the image and the exposure time of the camera;
and S23, acquiring the pose of the tail line of the image based on the acquisition time of the tail line of the image.
The above-mentioned camera exposure time can be derived from the camera parameters. The image tail line acquisition time is then t + Δt, that is, the sum of the image acquisition time t and the camera exposure time Δt.
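A trivial sketch of this timing relationship; the names are illustrative.

```python
def tail_row_acquisition_time(t_image, exposure_time):
    """Tail-row acquisition time: the recorded image acquisition time t
    (the first-row time) plus the camera exposure time delta_t."""
    return t_image + exposure_time
```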
Then, the radar pose at the tail line acquisition time is acquired, and an extrinsic transformation is applied to it using the camera extrinsic parameters to obtain the image tail line pose. In this way, the image tail line pose is derived from quantities that can be obtained accurately, namely the image acquisition time (the first-line time) and the camera exposure time, so the tail line pose is accurate and an accurate interpolation result is obtained.
The movement of radar and rolling shutter cameras can typically be broken down into rotation and translation. Therefore, the interpolation of the pose of the first line of the image and the pose of the last line of the image can comprise the interpolation in the rotating direction and the interpolation in the translation direction, so that the pose of the line where the projection point is located is obtained more accurately.
As a specific implementation, the first line pose and the last line pose of the image can be interpolated by the following formulas.
q_v = slerp(q_last_, q_1, r),  t_v = (1 − r) · t_last_ + r · t_1    formula 2
In formula 2, (q_last_, t_last_) is the pose of the image first line expressed in the tail-line coordinate system, which can be obtained from the image first-line pose and the image tail-line pose; (q_v, t_v) is the pose of the v-th row in the tail-line coordinate system; and r is the row coefficient corresponding to the projection point coordinate, specifically r = v/h, where h is the image height. Of course, the row coefficient may also be the product of the ratio of the projection point row coordinate v to the image height h and a preset coefficient; the present disclosure is not particularly limited thereto. q_1 and t_1 are the preset initial pose, q_1 = (1, 0, 0, 0), t_1 = (0, 0, 0). The pose of the v-th row in the world coordinate system can then be obtained through a coordinate system transformation.
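A sketch of the slerp/lerp interpolation in formula 2; since the original formula is only an image reference, the frame conventions here simply follow the definitions above and should be read as a reconstruction, not the patent's verbatim formula.

```python
import numpy as np

def slerp(q0, q1, r):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    q0, q1 = np.asarray(q0, dtype=float), np.asarray(q1, dtype=float)
    dot = float(np.dot(q0, q1))
    if dot < 0.0:                       # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                    # nearly identical: plain lerp is fine
        q = q0 + r * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1.0 - r) * theta) * q0 + np.sin(r * theta) * q1) / np.sin(theta)

def interpolate_row_pose(q_a, t_a, q_b, t_b, v, h):
    """Row pose for row v: rotation is slerp-ed and translation lerp-ed with the
    row coefficient r = v / h. Following the definitions around formula 2,
    (q_a, t_a) would be the first-line pose expressed in the tail-line frame and
    (q_b, t_b) the preset initial pose q1 = (1, 0, 0, 0), t1 = (0, 0, 0)."""
    r = v / h
    q_v = slerp(q_a, q_b, r)
    t_v = (1.0 - r) * np.asarray(t_a, dtype=float) + r * np.asarray(t_b, dtype=float)
    return q_v, t_v
```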
After the candidate image pose is obtained, the target ground surface patch can be re-projected according to the formula 1 based on the candidate image pose and the space coordinates of the target ground surface patch to obtain a new current projection point coordinate, and the line pose of the current projection point is re-calculated according to the formula 2 based on the line coordinate v in the projection point coordinate.
The above process is repeated until a preset convergence condition is reached. The preset convergence condition may be set according to actual needs; for example, it may require that the difference between the projection point row coordinates v obtained in two successive iterations is smaller than a preset threshold, which may itself be set according to actual needs, for example 0.5 pixels. The convergence condition may also be that the difference between the image poses obtained in two successive iterations is smaller than a preset pose threshold, and so on. The present disclosure is not particularly limited thereto.
And when the preset convergence condition is reached, taking the image pose corresponding to the current projection point as the target pose. By using an interpolation method to acquire the image pose of the projection point coordinates of the target ground surface patch in the image based on the first line pose and the last line pose of the image as the target pose, the accuracy of the acquired image pose corresponding to the target ground surface patch is improved, and the subsequent projection precision is improved.
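A sketch of the iterative estimate in fig. 5, reusing project_patch from the earlier sketch; pose_for_row is a hypothetical helper that interpolates the row pose (e.g. via interpolate_row_pose) and converts it to world-to-camera (R, t), and the 0.5-pixel threshold follows the example given above.

```python
def estimate_target_projection(X_world, K, pose_for_row, h, max_iter=10, tol=0.5):
    """Iteratively refine the row whose pose is used to project the patch:
    project with the first-row pose (v = 0), re-interpolate the pose of the row
    obtained, and repeat until the row coordinate moves by less than `tol` pixels.
    `pose_for_row(v, h)` is a hypothetical helper returning world-to-camera (R, t)."""
    v = 0.0                                    # start from the image first row
    u, v_new = None, v
    for _ in range(max_iter):
        R, t = pose_for_row(v, h)
        u, v_new = project_patch(X_world, K, R, t)
        if abs(v_new - v) < tol:               # preset convergence condition
            break
        v = v_new
    return u, v_new, pose_for_row(v_new, h)
```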
After the target pose of the row where the projection point corresponding to the target ground surface patch is located is obtained, the target projection point coordinate corresponding to the target ground surface patch in the image can be calculated according to the formula 1 based on the space coordinate of the target ground surface patch and the target pose.
After the coordinates of the target projection points are obtained, the texture information of the pixels of the corresponding pose can be copied to the target ground surface patch according to the coordinates of the target projection points.
Since ground elements may be occluded by vehicles, pedestrians and the like in the images collected by the rolling shutter camera, as a specific embodiment, the target ground surface patch may be mapped by the following steps:
step S601, determining a target image of which the target projection point is a ground element from each of the images.
For example, target detection may be performed on each of the images to determine whether a target projection point of the target ground patch in each image has an occlusion of a ground element by a vehicle, a pedestrian, or the like. Any image without ground element occlusion can then be selected as the target image. Of course, an image with the definition higher than a preset definition threshold value can be selected as the target image on the basis, so that the texture mapping precision is further improved.
Step S602, projecting the texture information at the target projection point in the target image to the target ground patch.
In step S602, the RGB values of the projection point of the target ground patch in the target image may be assigned to the pixel where the target ground patch is located in the texture map. And (4) after texture mapping is carried out on all the ground surface patches, a texture map of the ground corresponding to the fused ground point cloud data can be obtained.
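A sketch of this selection-and-copy step, assuming images are H×W×3 arrays and that occlusion by vehicles or pedestrians is flagged by a separate detector; is_occluded and the candidate list format are hypothetical.

```python
def map_patch_texture(texture_map, patch_pixel, candidates, is_occluded):
    """Copy the RGB value at the target projection point of the best candidate
    image into the patch's pixel of the ground texture map.
    `candidates` is a list of (image, (u, v)) pairs with images as HxWx3 arrays;
    `is_occluded(image, u, v)` is a hypothetical detector flagging vehicles,
    pedestrians, etc. at that projection point."""
    px, py = patch_pixel
    for image, (u, v) in candidates:
        col, row = int(round(u)), int(round(v))
        inside = 0 <= row < image.shape[0] and 0 <= col < image.shape[1]
        if inside and not is_occluded(image, col, row):
            texture_map[py, px] = image[row, col]   # assign the pixel RGB value
            return True
    return False                                    # no un-occluded view found
```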
As shown in fig. 6, fig. 6 is an execution framework diagram of a texture mapping method provided according to the present disclosure. The method mainly comprises three parts, namely elevation grid construction, rolling shutter model interpolation and ground texture projection.
The elevation grid construction specifically includes: 1. Constructing trajectory planes based on the corrected radar pose and the fused ground point cloud data, that is, constructing a plane according to the elevation of each ground point cloud data acquisition point. 2. Merging and layering the planes to obtain an initial point cloud elevation grid. 3. Optimizing the initial point cloud elevation grid to obtain the target elevation grid.
The rolling shutter model interpolation specifically includes: and acquiring the pose of the first line of the image and the pose of the last line of the image based on the corrected radar pose, the images comprising the target ground surface patch, the timestamps of the images and the camera parameters. The camera parameters include exposure time, a camera external parameter matrix, a camera internal parameter matrix and the like. And interpolating the first line pose and the last line pose of the image to obtain the image pose of the line where the target ground surface patch projection point in each image is located.
The ground texture projection part includes: 1. Dividing the ground into patches, that is, dividing the ground according to the preset resolution. 2. Acquiring the elevation of the target ground surface patch from the target elevation grid to obtain the spatial coordinates of the target ground surface patch. 3. Projecting the target ground surface patch into the images, that is, projecting the target ground surface patch based on its spatial coordinates and the image pose of the row where its projection point falls, so as to obtain the projection point of the target ground surface patch in each candidate image, where the candidate images are the images containing the target ground patch. 4. Filtering the images, that is, selecting the best image from the candidates, such as an image with high definition in which the ground is not blocked by vehicles, pedestrians and the like, and assigning the pixel RGB value at the projection point coordinate in the optimal image to the pixel position of the target ground surface patch in the texture map.
By applying the embodiment of the disclosure, a rolling shutter imaging model is constructed within the ground texture mapping framework based on the rolling shutter camera model, and ground texture mapping is carried out using this model, so that the texture mapping drift caused by rolling shutter exposure is effectively corrected.
In the technical scheme of the disclosure, the collection, storage, use, processing, transmission, provision, disclosure and other processing of the personal information of the related user are all in accordance with the regulations of related laws and regulations and do not violate the good customs of the public order.
According to an embodiment of the present disclosure, there is also provided an apparatus for texture mapping, as shown in fig. 7, the apparatus may include:
a spatial coordinate obtaining module 701, configured to obtain spatial coordinates of a target ground surface patch based on the fused point cloud data, where the ground surface patch is obtained by dividing a ground corresponding to the fused point cloud data based on a preset resolution;
an image obtaining module 702, configured to obtain an image including the target ground patch based on the spatial coordinates of the target ground patch;
a first line pose acquiring module 703, configured to acquire an image first line pose of the image, where the image first line pose is a pose when the camera acquires the image;
a target pose estimation module 704, configured to estimate a target pose of a line where a projection point of the target ground patch in the image is located based on the pose of the first line of the image;
a target projection point determining module 705, configured to determine a target projection point of the target ground surface patch in the image based on the spatial coordinates of the target ground surface patch and the target pose;
a texture mapping module 706 configured to texture map the target ground patch based on a target projection point of the target ground patch in the image.
In a possible embodiment, the target pose estimation module is configured to determine an initial projection point of the target ground patch in the image based on the spatial coordinates of the target ground patch and the pose of the first line of the image;
determining an image pose corresponding to the initial projection point as a candidate image pose;
determining a current projection point of the target ground surface patch in the image based on the spatial coordinates of the target ground surface patch and the candidate image pose;
determining an image pose corresponding to the current projection point as a new candidate image pose, returning to the step of determining the current projection point of the target ground surface patch in the image based on the space coordinate of the target ground surface patch and the candidate image pose until a preset convergence condition is reached;
and determining the image pose corresponding to the current projection point as a target pose.
In a possible embodiment, the determining the image pose corresponding to the current projection point as a new candidate image pose includes:
acquiring the line coordinate of the current projection point;
carrying out interpolation calculation on the first line pose and the last line pose of the image to obtain the image pose corresponding to the current projection point, and using the image pose as a new candidate image pose.
In a possible embodiment, the performing interpolation calculation on the first line pose and the last line pose of the image to obtain the image pose corresponding to the current projection point includes:
and performing rotary interpolation and linear interpolation on the first line pose and the last line pose of the image to obtain the image pose corresponding to the current projection point.
In a possible embodiment, the target pose estimation module is configured to obtain an image tail line acquisition time based on the acquisition time of the image and the exposure time of the camera;
and acquiring the tail line pose of the image based on the tail line acquisition time of the image.
In a possible embodiment, the acquiring the image first line pose of the image includes:
acquiring the pose of the point cloud data acquisition equipment corresponding to the acquisition time based on the acquisition time of the image;
and carrying out pose transformation on the pose of the point cloud data acquisition equipment according to the camera external parameters for acquiring the image to obtain the image first line pose of the image.
In a possible embodiment, the apparatus may further include:
the point cloud data fusion module is used for acquiring a plane formed by point cloud data in a preset range of point cloud data acquisition points aiming at each point cloud data acquisition point;
obtaining an initial point cloud elevation grid based on the elevation of each point cloud data acquisition point and each plane;
optimizing the initial point cloud elevation grid according to a plane constraint function and a smooth constraint function to obtain a target point cloud elevation grid, wherein the plane constraint function is used for aligning each plane with the elevation difference of corresponding point cloud data acquisition points smaller than a preset threshold value, and the smooth constraint is used for enabling each plane to be connected smoothly;
the method for acquiring the space coordinates of the target ground surface patch based on the fused point cloud data comprises the following steps:
acquiring a plane coordinate of a target ground surface patch;
and acquiring the elevation data of the target ground surface patch from the target point cloud elevation grid based on the planar coordinate of the target ground surface patch to obtain the spatial coordinate of the target ground surface patch.
In a possible embodiment, there are a plurality of said images comprising said target ground patch;
texture mapping the target ground patch based on a target projection point of the target ground patch in the image, comprising:
determining, from the images, a target image in which the target projection point is an unoccluded ground element;
and projecting the texture information at the target projection point in the target image to the target ground patch.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 8 shows a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the device 800 can also be stored. The calculation unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
Computing unit 801 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The calculation unit 801 performs the respective methods and processes described above, such as the method of texture mapping. For example, in some embodiments, the method of texture mapping may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 808. In some embodiments, part or all of the computer program can be loaded and/or installed onto device 800 via ROM 802 and/or communications unit 809. When loaded into RAM 803 and executed by computing unit 801, a computer program may perform one or more steps of the method of texture mapping described above. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the method of texture mapping by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order; no limitation is imposed herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (19)

1. A method of texture mapping, comprising:
acquiring spatial coordinates of a target ground surface patch based on fused point cloud data, wherein the ground surface patch is obtained by dividing the ground corresponding to the fused point cloud data based on a preset resolution;
acquiring an image comprising the target ground surface patch based on the spatial coordinates of the target ground surface patch;
acquiring an image first line pose of the image, wherein the image first line pose is the pose when a camera acquires the image;
estimating the target pose of the projection point of the target ground surface patch in the image based on the first line pose of the image;
determining a target projection point of the target ground surface patch in the image based on the space coordinate of the target ground surface patch and the target pose;
and performing texture mapping on the target ground surface patch based on the target projection point of the target ground surface patch in the image.
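By way of illustration only, the projection underlying the above method can be sketched in Python as a pinhole projection of the patch's spatial coordinates through a per-line camera pose; the symbols K (intrinsic matrix) and T_world_cam (camera-to-world pose) are assumptions of this sketch and are not taken from the claim:

import numpy as np

def project_patch(point_world, T_world_cam, K):
    """Project the spatial coordinates of a ground surface patch into the image.

    point_world : (3,) world coordinates of the target ground surface patch
    T_world_cam : (4, 4) camera pose (camera-to-world) for the relevant image line
    K           : (3, 3) pinhole intrinsic matrix
    Returns (u, v) pixel coordinates, or None if the point lies behind the camera.
    """
    T_cam_world = np.linalg.inv(T_world_cam)                      # world -> camera
    p_cam = T_cam_world[:3, :3] @ point_world + T_cam_world[:3, 3]
    if p_cam[2] <= 0:
        return None
    uv = K @ (p_cam / p_cam[2])                                   # pinhole projection
    return float(uv[0]), float(uv[1])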
2. The method of claim 1, wherein the estimating the target pose of the projection point of the target ground surface patch in the image based on the first line pose of the image comprises:
determining an initial projection point of the target ground surface patch in the image based on the space coordinate of the target ground surface patch and the first line pose of the image;
determining an image pose corresponding to the initial projection point as a candidate image pose;
determining a current projection point of the target ground surface patch in the image based on the spatial coordinates of the target ground surface patch and the candidate image pose;
determining an image pose corresponding to the current projection point as a new candidate image pose, returning to the step of determining the current projection point of the target ground surface patch in the image based on the space coordinate of the target ground surface patch and the candidate image pose until a preset convergence condition is reached;
and determining the image pose corresponding to the current projection point as a target pose.
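A minimal sketch of the iteration described in claim 2, reusing project_patch from the sketch after claim 1; pose_for_row stands in for the row-wise pose interpolation of claims 3 and 4 and is an assumed helper, not language from the claims:

def rolling_shutter_project(point_world, pose_for_row, K, max_iters=10, tol=0.5):
    """Fixed-point iteration on the projection row under a rolling shutter.

    pose_for_row(row) returns a (4, 4) camera pose for that image row
    (row 0 corresponds to the first line). Iteration stops once the projected
    row moves by less than tol pixels, i.e. the preset convergence condition.
    """
    uv = project_patch(point_world, pose_for_row(0.0), K)   # initial projection with the first-line pose
    for _ in range(max_iters):
        if uv is None:
            return None
        row = uv[1]                                          # line coordinate of the current projection
        uv_new = project_patch(point_world, pose_for_row(row), K)
        if uv_new is not None and abs(uv_new[1] - row) < tol:
            return uv_new                                    # converged: target projection point
        uv = uv_new
    return uv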
3. The method of claim 2, wherein the determining the image pose corresponding to the current projection point as a new candidate image pose comprises:
acquiring the line coordinate of the current projection point;
and performing interpolation calculation on the first line pose and the last line pose of the image to obtain the image pose corresponding to the current projection point, and taking the image pose as a new candidate image pose.
4. The method according to claim 3, wherein the performing interpolation calculation on the first line pose and the last line pose of the image to obtain the image pose corresponding to the current projection point comprises:
and performing rotational interpolation and linear interpolation on the first line pose and the last line pose of the image to obtain the image pose corresponding to the current projection point.
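One possible realisation of the interpolation in claims 3 and 4, with spherical linear interpolation (slerp) for the rotation part and linear interpolation for the translation part, weighted by the row's position between the first and last image line; the use of SciPy and the row/(height-1) parameterisation are assumptions of this sketch:

import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_row_pose(T_first, T_last, row, image_height):
    """Interpolate a camera pose for a given image row between the first-line
    pose T_first and the last-line pose T_last (both 4x4 matrices)."""
    t = float(row) / (image_height - 1)                       # 0 at the first line, 1 at the last line
    rots = Rotation.from_matrix(np.stack([T_first[:3, :3], T_last[:3, :3]]))
    R_row = Slerp([0.0, 1.0], rots)([t]).as_matrix()[0]       # rotational interpolation
    T_row = np.eye(4)
    T_row[:3, :3] = R_row
    T_row[:3, 3] = (1.0 - t) * T_first[:3, 3] + t * T_last[:3, 3]   # linear interpolation
    return T_row

Wrapped as lambda row: interpolate_row_pose(T_first, T_last, row, H), this can serve as the pose_for_row helper assumed in the sketch after claim 2.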
5. The method of claim 3, further comprising:
acquiring a last line acquisition time of the image based on the acquisition time of the image and the exposure time of the camera;
and acquiring the last line pose of the image based on the last line acquisition time of the image.
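A sketch of claim 5 under assumptions stated here and in the comments: the image timestamp is taken to mark the first-line exposure, the full rolling-shutter readout is taken to last exposure_time seconds, and query_pose_at is a hypothetical lookup into the device trajectory:

def last_line_pose(t_image, exposure_time, query_pose_at):
    """Derive the last-line acquisition time from the image acquisition time and
    the camera exposure (readout) time, then query the pose at that instant.
    Assumption: t_image is the first-line timestamp and exposure_time covers
    the readout of the whole frame."""
    t_last_line = t_image + exposure_time
    return query_pose_at(t_last_line)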
6. The method of claim 1, wherein the acquiring the image first line pose of the image comprises:
acquiring the pose of point cloud data acquisition equipment corresponding to the acquisition time based on the acquisition time of the image;
and carrying out pose transformation on the pose of the point cloud data acquisition equipment according to the camera external parameters for acquiring the image to obtain the image first line pose of the image.
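The pose transformation in claim 6 amounts to chaining the device pose with the camera extrinsics; a sketch, with the frame conventions (device-to-world pose, camera-to-device extrinsics) being assumptions of this illustration:

import numpy as np

def camera_first_line_pose(T_world_device, T_device_cam):
    """Compose the pose of the point cloud data acquisition device (device-to-world)
    with the camera external parameters (camera-to-device) to obtain the camera
    pose in the world frame at the image acquisition time."""
    return T_world_device @ T_device_cam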
7. The method of claim 1, further comprising:
for each point cloud data acquisition point, acquiring a plane formed by point cloud data within a preset range of the point cloud data acquisition point;
obtaining an initial point cloud elevation grid based on the elevation of each point cloud data acquisition point and each plane;
optimizing the initial point cloud elevation grid according to a plane constraint function and a smooth constraint function to obtain a target point cloud elevation grid, wherein the plane constraint function is used for aligning planes whose corresponding point cloud data acquisition points have an elevation difference smaller than a preset threshold value, and the smooth constraint function is used for making the planes connect smoothly;
wherein the acquiring the spatial coordinates of the target ground surface patch based on the fused point cloud data comprises:
acquiring planar coordinates of the target ground surface patch;
and acquiring elevation data of the target ground surface patch from the target point cloud elevation grid based on the planar coordinates of the target ground surface patch to obtain the spatial coordinates of the target ground surface patch.
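A toy illustration of the grid optimisation in claim 7: a quadratic plane term pulls grid cells toward locally fitted plane elevations, and a quadratic smoothness term penalises elevation jumps between neighbouring cells. Plain gradient descent, wrap-around borders and the weights are simplifications of this sketch, not the claimed method:

import numpy as np

def optimize_elevation_grid(z0, plane_z, plane_mask,
                            w_plane=1.0, w_smooth=0.1, iters=200, lr=0.2):
    """Refine an initial elevation grid z0 (H, W).

    plane_z    : (H, W) elevations of the locally fitted planes
    plane_mask : (H, W) boolean, True where a plane constraint applies
    """
    z = z0.astype(float).copy()
    mask = plane_mask.astype(float)
    for _ in range(iters):
        # gradient of the plane constraint term  w_plane * sum(mask * (z - plane_z)^2)
        grad = 2.0 * w_plane * mask * (z - plane_z)
        # gradient of the smoothness term over 4-connected neighbour differences
        lap = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
               np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4.0 * z)
        grad -= 2.0 * w_smooth * lap
        z -= lr * grad
    return z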
8. The method of claim 1, wherein there are a plurality of said images comprising the target ground surface patch;
wherein the performing texture mapping on the target ground surface patch based on the target projection point of the target ground surface patch in the image comprises:
determining, from the images, a target image in which the target projection point is a ground element;
and projecting the texture information at the target projection point in the target image to the target ground patch.
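A sketch of claim 8 assuming a per-image semantic segmentation mask is available for deciding whether the target projection point falls on a ground element; the mask, its label value and the nearest-neighbour sampling are assumptions of this sketch:

def sample_patch_texture(projections, images, ground_masks, ground_label=1):
    """Among the images containing the patch, pick one whose target projection
    point is classified as a ground element and copy its texture to the patch.

    projections  : per-image (u, v) target projection points (None if not visible)
    images       : per-image H x W x 3 colour arrays
    ground_masks : per-image H x W semantic masks aligned with the images
    """
    for uv, img, mask in zip(projections, images, ground_masks):
        if uv is None:
            continue
        u, v = int(round(uv[0])), int(round(uv[1]))
        if 0 <= v < mask.shape[0] and 0 <= u < mask.shape[1] and mask[v, u] == ground_label:
            return img[v, u]           # texture projected onto the target ground surface patch
    return None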
9. An apparatus for texture mapping, comprising:
the space coordinate acquisition module is used for acquiring space coordinates of a target ground surface patch based on the fused point cloud data, wherein the ground surface patch is obtained by dividing the ground corresponding to the fused point cloud data based on a preset resolution;
the image acquisition module is used for acquiring an image comprising the target ground surface patch based on the spatial coordinates of the target ground surface patch;
the first line pose acquisition module is used for acquiring an image first line pose of the image, wherein the image first line pose is a pose when the camera acquires the image;
the target pose estimation module is used for estimating the target pose of the projection point of the target ground surface patch in the image based on the pose of the first line of the image;
the target projection point determining module is used for determining a target projection point of the target ground surface patch in the image based on the space coordinate of the target ground surface patch and the target pose;
and the texture mapping module is used for performing texture mapping on the target ground surface patch based on the target projection point of the target ground surface patch in the image.
10. The apparatus of claim 9, wherein the target pose estimation module is configured to determine an initial projection point of the target ground surface patch in the image based on the spatial coordinates of the target ground surface patch and the image first line pose;
determining an image pose corresponding to the initial projection point as a candidate image pose;
determining a current projection point of the target ground surface patch in the image based on the spatial coordinates of the target ground surface patch and the candidate image pose;
determining an image pose corresponding to the current projection point as a new candidate image pose, returning to the step of determining the current projection point of the target ground surface patch in the image based on the space coordinate of the target ground surface patch and the candidate image pose until a preset convergence condition is reached;
and determining the image pose corresponding to the current projection point as a target pose.
11. The apparatus of claim 10, wherein the determining the image pose corresponding to the current projection point as a new candidate image pose comprises:
acquiring the line coordinate of the current projection point;
and performing interpolation calculation on the first line pose and the last line pose of the image to obtain the image pose corresponding to the current projection point as a new candidate image pose.
12. The apparatus of claim 11, wherein the performing interpolation calculation on the first line pose and the last line pose of the image to obtain the image pose corresponding to the current projection point comprises:
and performing rotational interpolation and linear interpolation on the first line pose and the last line pose of the image to obtain the image pose corresponding to the current projection point.
13. The apparatus according to claim 11, wherein the target pose estimation module is configured to obtain a last line acquisition time of the image based on the acquisition time of the image and the exposure time of the camera;
and to acquire the last line pose of the image based on the last line acquisition time of the image.
14. The apparatus of claim 9, wherein the acquiring the image first line pose of the image comprises:
acquiring the pose of the point cloud data acquisition equipment corresponding to the acquisition time based on the acquisition time of the image;
and carrying out pose transformation on the pose of the point cloud data acquisition equipment according to the camera external parameters for acquiring the image to obtain the image first-line pose of the image.
15. The apparatus of claim 9, further comprising:
the point cloud data fusion module is used for acquiring, for each point cloud data acquisition point, a plane formed by point cloud data within a preset range of the point cloud data acquisition point;
obtaining an initial point cloud elevation grid based on the elevation of each point cloud data acquisition point and each plane;
optimizing the initial point cloud elevation grid according to a plane constraint function and a smooth constraint function to obtain a target point cloud elevation grid, wherein the plane constraint function is used for aligning planes whose corresponding point cloud data acquisition points have an elevation difference smaller than a preset threshold value, and the smooth constraint function is used for making the planes connect smoothly;
wherein the acquiring the spatial coordinates of the target ground surface patch based on the fused point cloud data comprises:
acquiring planar coordinates of the target ground surface patch;
and acquiring elevation data of the target ground surface patch from the target point cloud elevation grid based on the planar coordinates of the target ground surface patch to obtain the spatial coordinates of the target ground surface patch.
16. The apparatus of claim 9, wherein there are a plurality of said images comprising the target ground surface patch;
wherein the performing texture mapping on the target ground surface patch based on the target projection point of the target ground surface patch in the image comprises:
determining, from the images, a target image in which the target projection point is a ground element;
and projecting the texture information at the target projection point in the target image to the target ground patch.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-8.
19. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-8.
CN202211520566.8A 2022-11-30 2022-11-30 Texture mapping method and device, electronic equipment and storage medium Pending CN115984456A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211520566.8A CN115984456A (en) 2022-11-30 2022-11-30 Texture mapping method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211520566.8A CN115984456A (en) 2022-11-30 2022-11-30 Texture mapping method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115984456A true CN115984456A (en) 2023-04-18

Family

ID=85963697

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211520566.8A Pending CN115984456A (en) 2022-11-30 2022-11-30 Texture mapping method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115984456A (en)

Similar Documents

Publication Publication Date Title
CN110160502B (en) Map element extraction method, device and server
CN108319655B (en) Method and device for generating grid map
US8437501B1 (en) Using image and laser constraints to obtain consistent and improved pose estimates in vehicle pose databases
US8861893B2 (en) Enhancing video using super-resolution
US10477178B2 (en) High-speed and tunable scene reconstruction systems and methods using stereo imagery
KR20190042187A (en) Method and apparatus of estimating depth value
CN113409459B (en) Method, device and equipment for producing high-precision map and computer storage medium
CN111968229A (en) High-precision map making method and device
CN112686877B (en) Binocular camera-based three-dimensional house damage model construction and measurement method and system
US20220148219A1 (en) Method and system for visual localization
CN109255808B (en) Building texture extraction method and device based on oblique images
JP7422105B2 (en) Obtaining method, device, electronic device, computer-readable storage medium, and computer program for obtaining three-dimensional position of an obstacle for use in roadside computing device
KR20200075727A (en) Method and apparatus for calculating depth map
CN113989450A (en) Image processing method, image processing apparatus, electronic device, and medium
CN112967345B (en) External parameter calibration method, device and system of fish-eye camera
CN112085849A (en) Real-time iterative three-dimensional modeling method and system based on aerial video stream and readable medium
CN112967344A (en) Method, apparatus, storage medium, and program product for camera external reference calibration
CN112700486A (en) Method and device for estimating depth of road lane line in image
JP7351892B2 (en) Obstacle detection method, electronic equipment, roadside equipment, and cloud control platform
CN113610702B (en) Picture construction method and device, electronic equipment and storage medium
CN114299242A (en) Method, device and equipment for processing images in high-precision map and storage medium
CN114662587A (en) Three-dimensional target sensing method, device and system based on laser radar
CN117232499A (en) Multi-sensor fusion point cloud map construction method, device, equipment and medium
CN115790621A (en) High-precision map updating method and device and electronic equipment
US9852542B1 (en) Methods and apparatus related to georeferenced pose of 3D models

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination