CN115205461A - Scene reconstruction method and device, readable storage medium and vehicle - Google Patents

Scene reconstruction method and device, readable storage medium and vehicle

Info

Publication number
CN115205461A
Authority
CN
China
Prior art keywords
image
scene
point cloud
cloud data
target
Prior art date
Legal status
Granted
Application number
CN202210837777.8A
Other languages
Chinese (zh)
Other versions
CN115205461B (en)
Inventor
叶郁冲
俞昆
Current Assignee
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd
Priority to CN202210837777.8A
Publication of CN115205461A
Application granted
Publication of CN115205461B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/00 - Image analysis
    • G06T 7/50 - Depth or shape recovery
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10028 - Range image; Depth image; 3D point clouds

Abstract

The invention relates to the technical field of automatic driving, and in particular to a scene reconstruction method and apparatus, a readable storage medium and a vehicle. The method includes: acquiring a plurality of scene images of a target scene, where different scene images are acquired by an image acquisition device at different acquisition viewing angles; acquiring point cloud data corresponding to each scene image to obtain a plurality of point cloud data sets; acquiring a two-dimensional image corresponding to each point cloud data set; acquiring a pose error between each pair of two-dimensional images, and adjusting the pair of two-dimensional images according to the pose error to obtain an adjusted target image; and reconstructing the target scene according to the adjusted target image.

Description

Scene reconstruction method and device, readable storage medium and vehicle
Technical Field
The present disclosure relates to the field of automatic driving technologies, and in particular, to a scene reconstruction method, an apparatus, a readable storage medium, and a vehicle.
Background
A high-precision map is an important component of automatic driving technology and the basis of path planning and decision control for an autonomous vehicle, and high-precision scene reconstruction based on a vehicle-mounted lidar is a prerequisite for producing a high-precision map.
When reconstructing an outdoor scene, a vehicle-mounted lidar is generally used to collect point cloud data of the scene, and the target scene is then reconstructed by building constraint relationships through frame-to-frame matching of the point cloud data. However, this approach requires stitching together radar point clouds captured at different times and from different viewing angles, so it depends strongly on the lidar data; moreover, because the amount of point cloud data is very large, reconstructing the scene by point cloud stitching takes a long time and makes the three-dimensional reconstruction process highly complex.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a scene reconstruction method, apparatus, readable storage medium, and vehicle.
According to a first aspect of the embodiments of the present disclosure, there is provided a scene reconstruction method, including:
acquiring a plurality of scene images of a target scene, wherein different scene images are acquired by an image acquisition device under different acquisition visual angles;
acquiring point cloud data corresponding to each scene image to obtain a plurality of point cloud data sets;
acquiring a two-dimensional image corresponding to each point cloud data set;
acquiring a pose error between each pair of two-dimensional images, and adjusting the pair of two-dimensional images according to the pose error to obtain an adjusted target image;
and reconstructing the target scene according to the adjusted target image.
Optionally, the acquiring a pose error between each pair of two-dimensional images and adjusting the pair of two-dimensional images according to the pose error to obtain an adjusted target image includes:
determining, according to each pair of two-dimensional images and a pre-trained error determination model, a pose error corresponding to the pair of two-dimensional images;
and adjusting the pair of two-dimensional images according to the pose error to obtain an adjusted target image.
Optionally, the determining, according to each pair of two-dimensional images and a pre-trained error determination model, a pose error corresponding to the pair of two-dimensional images includes:
for each pair of two-dimensional images, taking the two two-dimensional images as inputs of the error determination model, and obtaining the pose error corresponding to the pair of two-dimensional images output by the error determination model.
Optionally, the adjusting the pair of two-dimensional images according to the pose error to obtain an adjusted target image includes:
taking a first image of the pair of two-dimensional images as a reference image, and adjusting a second image of the pair according to the pose error to obtain a third image;
and determining a target image according to the third image and the first image.
Optionally, the pose error includes a relative displacement of the pair of two-dimensional images along a preset direction and a relative deflection angle along the preset direction,
and the adjusting a second image of the pair according to the pose error to obtain a third image includes:
determining, according to the relative displacement, a displacement value and a displacement direction corresponding to the relative displacement;
moving the second image according to the displacement direction to obtain a moved second image;
deflecting the moved second image according to the relative deflection angle to obtain a deflected second image;
and in a case where pixels in the deflected second image and pixels in the first image overlap at the same positions, removing the pixels in the deflected second image that overlap the first image, to obtain the third image.
Optionally, the reconstructing the target scene according to the adjusted target image includes:
determining the target point cloud data according to the target image;
and reconstructing the target scene according to the target point cloud data.
Optionally, the obtaining point cloud data corresponding to each scene image to obtain a plurality of point cloud data sets includes:
obtaining, according to the scene images and through a preset depth estimation model, a depth map corresponding to each scene image output by the depth estimation model;
and determining a plurality of point cloud data sets corresponding to the target scene according to the plurality of depth maps.
Optionally, the acquiring a two-dimensional image corresponding to each point cloud data set includes:
dividing each point cloud data set into a plurality of point cloud data blocks according to the number of preset point clouds;
for each point cloud data block, obtaining a target point cloud data block through downsampling processing, wherein the target point cloud data block comprises a preset number of point cloud data;
and projecting the target point cloud data blocks in the height direction according to the height direction of the target scene to obtain the two-dimensional image corresponding to each scene image.
Optionally, the error determination model is trained by:
acquiring a training sample, wherein the training sample comprises a plurality of sample two-dimensional images of a training scene and sample errors corresponding to every two sample two-dimensional images;
and training a preset error determination model according to the training sample to obtain the error determination model.
According to a second aspect of the embodiments of the present disclosure, there is provided a scene reconstruction apparatus, including:
a first acquisition module configured to acquire a plurality of scene images of a target scene, wherein different scene images are acquired by an image acquisition device at different acquisition viewing angles;
a second acquisition module configured to acquire point cloud data corresponding to each scene image to obtain a plurality of point cloud data sets;
a third acquisition module configured to acquire a two-dimensional image corresponding to each point cloud data set;
an adjusting module configured to acquire a pose error between each pair of two-dimensional images and adjust the pair of two-dimensional images according to the pose error to obtain an adjusted target image;
a reconstruction module configured to reconstruct the target scene from the adjusted target image.
Optionally, the adjusting module includes:
a first determining submodule configured to determine a pose error corresponding to each pair of two-dimensional images according to the pair of two-dimensional images and a pre-trained error determination model;
and a second determining submodule configured to adjust the pair of two-dimensional images according to the pose error to obtain an adjusted target image.
Optionally, the first determining submodule is configured to, for each pair of two-dimensional images, take the two two-dimensional images as inputs of the error determination model and obtain the pose error corresponding to the pair of two-dimensional images output by the error determination model.
Optionally, the second determining submodule is configured to adjust a second image of the pair of two-dimensional images according to the pose error, with a first image of the pair as a reference image, to obtain a third image; and determine a target image according to the third image and the first image.
Optionally, the pose error includes a relative displacement of the pair of two-dimensional images along a preset direction and a relative deflection angle along the preset direction, and the second determining submodule is configured to adjust the second image of the pair according to the pose error to obtain the third image by: determining, according to the relative displacement, a displacement value and a displacement direction corresponding to the relative displacement; moving the second image according to the displacement direction to obtain a moved second image; deflecting the moved second image according to the relative deflection angle to obtain a deflected second image; and in a case where pixels in the deflected second image and pixels in the first image overlap at the same positions, removing the pixels in the deflected second image that overlap the first image, to obtain the third image.
Optionally, the reconstruction module is configured to determine the target point cloud data from the target image; and reconstructing the target scene according to the target point cloud data.
Optionally, the second obtaining module includes:
a first obtaining sub-module configured to obtain, according to the scene images and through a preset depth estimation model, a depth map corresponding to each scene image output by the depth estimation model;
a second obtaining sub-module configured to determine a plurality of point cloud data sets corresponding to the target scene according to the plurality of depth maps.
Optionally, the third obtaining module is configured to divide each point cloud data set into a plurality of point cloud data blocks according to a preset point cloud number; for each point cloud data block, obtaining a target point cloud data block through downsampling processing, wherein the target point cloud data block comprises a preset number of point cloud data; and projecting the target point cloud data blocks in the height direction according to the height direction of the target scene to obtain the two-dimensional image corresponding to each scene image.
Optionally, the error determination model is trained by: acquiring a training sample, wherein the training sample comprises a plurality of sample two-dimensional images of a training scene and sample errors corresponding to every two sample two-dimensional images; and training a preset error determination model according to the training sample to obtain the error determination model.
According to a third aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of any one of the first aspects of the embodiments of the present disclosure.
According to a fourth aspect of the embodiments of the present disclosure, the present disclosure provides a vehicle including: a memory having a computer program stored thereon; a processor for executing the computer program in the memory to implement the steps of the method of any one of the first aspect of the embodiments of the present disclosure.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
acquiring a plurality of scene images of a target scene, where different scene images are acquired by an image acquisition device at different acquisition viewing angles; acquiring point cloud data corresponding to each scene image to obtain a plurality of point cloud data sets; acquiring a two-dimensional image corresponding to each point cloud data set; acquiring a pose error between each pair of two-dimensional images, and adjusting the pair of two-dimensional images according to the pose error to obtain an adjusted target image; and reconstructing the target scene according to the adjusted target image. In this way, the data acquired by the lidar can be converted into a plurality of different two-dimensional images of the target scene, the two-dimensional images can be adjusted according to the pose error between each pair of them, and the target scene can finally be reconstructed from the adjusted two-dimensional images. This avoids the long reconstruction time caused by the excessively large amount of point cloud data when the scene is reconstructed by point cloud stitching, simplifies the three-dimensional scene reconstruction process, and improves the efficiency of three-dimensional scene reconstruction.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flow chart illustrating a method of scene reconstruction in accordance with an exemplary embodiment.
Fig. 2 is a flow chart illustrating another scene reconstruction method according to an exemplary embodiment.
Fig. 3 is a block diagram illustrating a scene reconstruction apparatus according to an exemplary embodiment.
Fig. 4 is a block diagram of an adjustment module according to the embodiment shown in fig. 3.
FIG. 5 is a block diagram of a second acquisition module according to the embodiment shown in FIG. 3.
FIG. 6 is a functional block diagram schematic of a vehicle, shown in accordance with an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It should be noted that all actions of acquiring signals, information or data in the present application are performed under the premise of complying with the corresponding data protection regulation policy of the country of the location and obtaining the authorization given by the owner of the corresponding device.
Before the embodiments of the present disclosure are described in detail, an application scenario of the present disclosure is first described. Automatic driving technology is becoming increasingly mature. A high-precision map is an important component of automatic driving technology and the basis of path planning and decision control for an autonomous vehicle, and high-precision scene reconstruction based on a vehicle-mounted lidar is a prerequisite for producing high-precision maps; therefore, high-precision reconstruction of the three-dimensional scene is needed.
In the case of reconstruction of outdoor scenes, commonly used methods include building a three-dimensional model of the scene based on satellite-borne images or airborne images, and distance-based methods.
However, the method based on satellite-borne or airborne images requires satellite or aerial photography, which is costly to implement; it can only obtain data about the top of a scene and not the ground, its observation accuracy is limited, and it is difficult to model a specific building in the scene. The method that builds constraint relationships based on inter-frame matching (e.g., ICP) uses a vehicle-mounted lidar to obtain the three-dimensional geometry and texture information of the objects in the scene and can obtain a dense point cloud of the three-dimensional scene fairly accurately. However, for measuring and modeling a large-scale scene, this approach requires stitching together radar point clouds captured at different times and from different viewing angles, so it depends strongly on the lidar data; moreover, because the amount of point cloud data is very large, reconstructing the scene by point cloud stitching takes a long time and makes the three-dimensional reconstruction process highly complex.
In order to overcome the above technical problems in the related art, the present disclosure provides a scene reconstruction method and apparatus, a readable storage medium, and a vehicle. The method converts the data acquired by the lidar into a plurality of different two-dimensional images of the target scene, adjusts the two-dimensional images according to the pose error between each pair of them, and finally reconstructs the target scene from the adjusted two-dimensional images. This avoids the long reconstruction time caused by the excessively large amount of point cloud data when the scene is reconstructed by point cloud stitching, simplifies the three-dimensional scene reconstruction process, and improves the efficiency of three-dimensional scene reconstruction.
The present disclosure is described below with reference to specific examples.
Fig. 1 is a flowchart illustrating a scene reconstruction method according to an exemplary embodiment, and as shown in fig. 1, the method may include:
in step S101, a plurality of scene images of a target scene are acquired.
Here, different scene images are acquired by an image acquisition device at different acquisition viewing angles. The acquisition entity equipped with the image acquisition device can move through the target scene and acquire scene images of the target scene at different viewing angles. Specifically, the acquisition entity may be a vehicle, and the image acquisition device may be an acquisition device such as a lidar or a binocular camera.
For example, the image acquisition device may be mounted on a vehicle; as the vehicle travels through the target scene, the image acquisition device acquires a plurality of scene images of the target scene at different acquisition viewing angles. The scene objects may include trees, buildings and the like.
For example, where the target scene is an intersection, a scene image of the target scene may be acquired looking from west to east at a position a preset distance from one side of the intersection, and another scene image of the target scene may then be acquired looking from east to west at a position the same preset distance from the opposite side of the intersection.
In step S102, point cloud data corresponding to each scene image is obtained to obtain a plurality of point cloud data sets.
The point cloud data in the plurality of point cloud data sets can be used for representing the target scene, one of the point cloud data sets can correspond to a subset of the target scene, and an intersection exists among the plurality of point cloud data sets and is used for representing an overlapping part among the point cloud data sets.
In this step, after the plurality of scene images of the target scene have been obtained, for each scene image, the depth map corresponding to the scene image and output by a preset depth estimation model may be determined, where the pixel values of the depth map reflect the distance from scene objects in the target scene to the camera. Illustratively, the plurality of depth maps corresponding to the scene images acquired at different viewing angles may be obtained through the preset depth estimation model, and the plurality of point cloud data sets corresponding to the target scene may then be determined according to the plurality of depth maps.
For example, for each depth map, the depth information of the depth map may first be determined, where the depth information includes the position information of each pixel on the depth map and the depth value corresponding to that position. The three-dimensional coordinates of a plurality of spatial points are then determined according to the correspondence between the depth information and the spatial points, and finally the point cloud data set corresponding to the depth map is determined according to the three-dimensional coordinates of these spatial points. After the point cloud data sets corresponding to all the depth maps have been determined, the plurality of point cloud data sets corresponding to the target scene are obtained.
In step S103, a two-dimensional image corresponding to each point cloud data set is obtained.
In this step, each point cloud data set may be divided into a plurality of point cloud data blocks according to a preset number of point clouds; then, for each point cloud data block, obtaining a target point cloud data block through downsampling processing, wherein the target point cloud data block comprises a preset number of point cloud data; and then projecting the target point cloud data blocks according to the height direction of the target scene to obtain the two-dimensional image corresponding to each scene image.
In some embodiments, when the point clouds in a point cloud data set are projected along the height direction of the target scene, the point clouds are dense and projecting them directly would require a large amount of computation. Therefore, each point cloud data set may first be divided into a plurality of point cloud data blocks, and each target point cloud data block may be obtained through downsampling, so that the target point cloud data block contains a preset number of point cloud data.
For example, after downsampling, each target point cloud data block may contain one averaged point cloud datum, which may be expressed as (a, b, c), where a, b and c denote the average values of the point cloud data in the block along the respective coordinate axes. If there are n target point cloud data blocks along the height direction at a given position, the information of that position is obtained after projection; similarly, after all the target point cloud data blocks in a point cloud data set have been projected along the height direction, the two-dimensional image corresponding to the point cloud data set is obtained, and the pixel value of each pixel of the two-dimensional image may be the value of the target point cloud data at the position of that pixel.
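The disclosure does not prescribe a particular data layout for this projection. The following Python sketch illustrates one possible way to carry out the block division, downsampling and height-direction projection described above; the grid resolution, the use of the mean as the downsampling operation, and the function and parameter names are assumptions made for illustration only.

```python
import numpy as np

def point_cloud_to_2d_image(points, cell_size=0.2, grid_shape=(512, 512)):
    """Project a point cloud data set onto the ground plane (height = z axis).

    points: (N, 3) array of (x, y, z) point cloud data.
    cell_size and grid_shape are illustrative placeholders.
    Returns a 2D image whose pixel values summarize the target point cloud
    data located above each ground-plane cell.
    """
    image = np.zeros(grid_shape, dtype=np.float32)
    counts = np.zeros(grid_shape, dtype=np.int32)

    # Form "point cloud data blocks": for simplicity, points are binned into
    # ground-plane cells rather than divided by a fixed point count.
    xy = points[:, :2]
    origin = xy.min(axis=0)
    cols = np.clip(((xy[:, 0] - origin[0]) / cell_size).astype(int), 0, grid_shape[1] - 1)
    rows = np.clip(((xy[:, 1] - origin[1]) / cell_size).astype(int), 0, grid_shape[0] - 1)

    # Down-sample each block to an averaged value and project along the height
    # direction: each pixel stores the mean height of the points in its cell.
    np.add.at(image, (rows, cols), points[:, 2])
    np.add.at(counts, (rows, cols), 1)
    nonzero = counts > 0
    image[nonzero] /= counts[nonzero]
    return image
```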
In step S104, a pose error between each pair of two-dimensional images is acquired, and the pair of two-dimensional images is adjusted according to the pose error to obtain an adjusted target image.
In this step, the pose error corresponding to each pair of two-dimensional images may be determined according to the pair of two-dimensional images and a pre-trained error determination model; the pair of two-dimensional images is then adjusted according to the pose error to obtain an adjusted target image.
For example, for each pair of two-dimensional images, the two two-dimensional images may be used as inputs of the error determination model, and the pose error corresponding to the pair of two-dimensional images output by the error determination model is obtained.
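The disclosure does not fix the architecture of the error determination model. Purely as an illustrative sketch (PyTorch and the network layout below are assumptions, not part of the patent), a model that takes a pair of two-dimensional images and regresses a pose error (dx, dy, dtheta) might look like this:

```python
import torch
import torch.nn as nn

class PoseErrorNet(nn.Module):
    """Hypothetical error determination model: takes a pair of single-channel
    two-dimensional images and outputs a pose error (dx, dy, dtheta)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 3)  # relative displacement (dx, dy) and deflection angle

    def forward(self, img_a, img_b):
        x = torch.cat([img_a, img_b], dim=1)  # stack the pair along the channel axis
        return self.head(self.features(x).flatten(1))

# Inference on one pair of two-dimensional images (batch of 1, 1 channel, H x W).
model = PoseErrorNet().eval()
img_a = torch.rand(1, 1, 512, 512)
img_b = torch.rand(1, 1, 512, 512)
with torch.no_grad():
    dx, dy, dtheta = model(img_a, img_b)[0]
```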
In step S105, the target scene is reconstructed from the adjusted target image.
In this step, the target point cloud data may be first determined according to the target image, and then the target scene may be reconstructed according to the target point cloud data.
By adopting the above method, a plurality of scene images of the target scene are acquired, where different scene images are acquired by the image acquisition device at different acquisition viewing angles; point cloud data corresponding to each scene image are acquired to obtain a plurality of point cloud data sets; a two-dimensional image corresponding to each point cloud data set is acquired; a pose error between each pair of two-dimensional images is acquired, and the pair of two-dimensional images is adjusted according to the pose error to obtain an adjusted target image; and the target scene is reconstructed according to the adjusted target image. In this way, the data acquired by the lidar can be converted into a plurality of different two-dimensional images of the target scene, the two-dimensional images can be adjusted according to the pose error between each pair of them, and the target scene can be reconstructed from the adjusted two-dimensional images. This avoids the long reconstruction time caused by the excessively large amount of point cloud data when the scene is reconstructed by point cloud stitching, simplifies the three-dimensional scene reconstruction process, and improves the efficiency of three-dimensional scene reconstruction.
Fig. 2 is a flowchart illustrating another scene reconstruction method according to an exemplary embodiment, and as shown in fig. 2, the method may include:
in step S201, a plurality of scene images of a target scene are acquired.
Here, different scene images are acquired by an image acquisition device at different acquisition viewing angles. The acquisition entity equipped with the image acquisition device can acquire scene images of the target scene at different viewing angles within the target scene. Specifically, the acquisition entity may be a vehicle, and the image acquisition device may be an acquisition device such as a lidar or a binocular camera.
For example, the image acquisition device may be mounted on a vehicle; as the vehicle travels through the target scene, the image acquisition device acquires a plurality of scene images of the target scene at different acquisition viewing angles. The scene objects may include trees, buildings and the like.
For example, where the target scene is an intersection, a scene image of the target scene may be acquired looking from west to east at a position a preset distance from one side of the intersection, and another scene image of the target scene may then be acquired looking from east to west at a position the same preset distance from the opposite side of the intersection.
In step S202, according to the scene image, a depth map corresponding to each scene image output by the depth estimation model is obtained through a preset depth estimation model.
Wherein the pixel values of the depth map may be used to reflect the distance of a scene object in the target scene to the image acquisition device.
In this step, the plurality of scene images acquired by the image acquisition device at different acquisition viewing angles may be used. Each scene image may consist of two images acquired simultaneously by two cameras spaced a certain distance apart; the corresponding pixels in the two images are determined by the preset depth estimation model, and the disparity between the two images is computed according to the triangulation principle. The disparity can be converted into the depth information of the scene objects in the scene image. Once the plurality of scene images have been obtained, the depth map corresponding to each scene image can be determined through the preset depth estimation model according to the plurality of scene images.
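As an illustration of the triangulation idea described above, the following sketch recovers depth from a rectified binocular pair using classical block matching; OpenCV's StereoBM is used here only as a stand-in for whatever depth estimation model is actually preset, and the focal length and baseline values are placeholders.

```python
import cv2
import numpy as np

def depth_from_stereo(left_gray, right_gray, fx=700.0, baseline_m=0.12):
    """Estimate a depth map from a rectified binocular image pair.

    left_gray / right_gray: uint8 grayscale images from the two cameras.
    fx: focal length in pixels; baseline_m: camera spacing in meters
    (both are illustrative placeholder values).
    """
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparity scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    # Triangulation: depth = focal length * baseline / disparity.
    depth[valid] = fx * baseline_m / disparity[valid]
    return depth
```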
In step S203, a plurality of point cloud data sets corresponding to the target scene are determined according to a plurality of depth maps.
The point cloud data in the plurality of point cloud data sets can be used for representing the target scene, one of the point cloud data sets can correspond to a subset of the target scene, and an intersection exists among the plurality of point cloud data sets and is used for representing an overlapping part among the point cloud data sets.
In this step, for each depth map, the depth information of the depth map may be determined, where the depth information includes the position information of each pixel on the depth map and the depth value corresponding to that position. The three-dimensional coordinates of a plurality of spatial points are then determined according to the correspondence between the depth information and the spatial points, and finally the point cloud data set corresponding to the depth map is determined according to the three-dimensional coordinates of these spatial points. After the point cloud data sets corresponding to all the depth maps have been determined, the plurality of point cloud data sets corresponding to the target scene are obtained.
Specifically, the depth information of the depth map may be expressed as (x', y', D), where x' and y' are the position coordinates of each pixel and D is the depth value of that pixel; the correspondence between the depth information and the spatial points can be expressed by the following formulas:
x = (x' - c_x) × z / f_x;
y = (y' - c_y) × z / f_y;
D = z × s;
where f_x and f_y are the focal lengths of the image acquisition device along the x and y axes, c_x and c_y are the coordinates of the aperture center of the camera, s is the scale factor of the depth map, and x, y and z are the three-dimensional coordinates of the spatial point.
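A minimal numpy sketch of this back-projection, with the intrinsic parameters f_x, f_y, c_x, c_y and the scale factor s supplied as placeholders, could read:

```python
import numpy as np

def depth_map_to_point_cloud(depth_map, fx, fy, cx, cy, s=1000.0):
    """Convert a depth map into a point cloud data set using the formulas above.

    depth_map: (H, W) array of depth values D; s is the depth-map scale factor
    (e.g. 1000 if D is stored in millimeters and z is wanted in meters).
    Returns an (N, 3) array of spatial points (x, y, z).
    """
    h, w = depth_map.shape
    xp, yp = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates x', y'
    z = depth_map / s                                  # D = z * s  =>  z = D / s
    x = (xp - cx) * z / fx                             # x = (x' - cx) * z / fx
    y = (yp - cy) * z / fy                             # y = (y' - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                    # keep points with valid depth
```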
In step S204, a two-dimensional image corresponding to each point cloud data set is obtained;
in this step, each point cloud data set may be divided into a plurality of point cloud data blocks according to a preset number of point clouds; then, for each point cloud data block, obtaining a target point cloud data block through downsampling processing, wherein the target point cloud data block comprises a preset number of point cloud data; and then projecting the target point cloud data blocks in the height direction according to the height direction of the target scene to obtain the two-dimensional image corresponding to each scene image.
In some embodiments, when the point clouds in a point cloud data set are projected along the height direction of the target scene, the point clouds are dense and projecting them directly would require a large amount of computation. Therefore, each point cloud data set may first be divided into a plurality of point cloud data blocks, and each target point cloud data block may be obtained through downsampling, so that the target point cloud data block contains a preset number of point cloud data.
For example, after downsampling, each target point cloud data block may contain one averaged point cloud datum, which may be expressed as (a, b, c), where a, b and c denote the average values of the point cloud data in the block along the respective coordinate axes. If there are n target point cloud data blocks along the height direction at a given position, the information of that position is obtained after projection; similarly, after all the target point cloud data blocks in a point cloud data set have been projected along the height direction, the two-dimensional image corresponding to the point cloud data set is obtained, and the pixel value of each pixel of the two-dimensional image may be the value of the target point cloud data at the position of that pixel.
In step S205, a pose error corresponding to each pair of two-dimensional images is determined according to the pair of two-dimensional images and a pre-trained error determination model.
Here, the pose error refers to the overall deviation, introduced by the image acquisition device, between the scene images acquired at different acquisition viewing angles. For example, the three-dimensional spatial data of a certain object acquired at viewing angle a can be expressed as (x_i, y_i, z_i), and the three-dimensional spatial data of the same object acquired at viewing angle b can be expressed as (x_j, y_j, z_j). In the same three-dimensional coordinate system, the coordinates of the same point on the object may be shifted along a preset direction and/or deflected about the preset direction; in this case, the degree of the shift and/or deflection can be used as the pose error.
Wherein, the error determination model can be obtained by training in the following way:
acquiring a training sample, wherein the training sample comprises a plurality of sample two-dimensional images of a training scene and sample errors corresponding to every two sample two-dimensional images;
and training a preset error determination model according to the training sample to obtain the error determination model.
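A hedged sketch of this supervised training procedure, reusing the hypothetical PoseErrorNet from the earlier example and assuming a simple L2 regression loss (neither of which is mandated by the disclosure):

```python
import torch
import torch.nn as nn

# `PoseErrorNet` is the hypothetical model sketched earlier; `pairs` is an assumed
# iterable of (sample_img_a, sample_img_b, sample_error) training tuples, where
# sample_error holds the labelled (dx, dy, dtheta) between the two sample images.
def train_error_model(pairs, epochs=10, lr=1e-3):
    model = PoseErrorNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()
    for _ in range(epochs):
        for img_a, img_b, sample_error in pairs:
            optimizer.zero_grad()
            predicted = model(img_a, img_b)
            loss = criterion(predicted, sample_error)  # fit the labelled pose error
            loss.backward()
            optimizer.step()
    return model
```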
In step S206, a first image of the two-dimensional images is used as a reference image, and a second image of the two-dimensional images is adjusted according to the pose error to obtain a third image.
The pose error comprises relative displacement of the two-dimensional images along a preset direction and a relative deflection angle along the preset direction.
In some embodiments, a first image of the two-dimensional images may be used as a reference image, and a second image of the two-dimensional images may be adjusted according to the pose error to obtain a third image; then determining a target image according to the third image and the first image; and determining the target point cloud data according to the target image.
For example, a displacement value and a displacement direction corresponding to the relative displacement may be determined according to the relative displacement; the second image is then moved according to the displacement direction to obtain a moved second image; the moved second image is deflected according to the relative deflection angle to obtain a deflected second image; and in a case where pixels in the deflected second image and pixels in the first image overlap at the same positions, the pixels in the deflected second image that overlap the first image are removed to obtain the third image.
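One possible realization of the translation, deflection and overlap-removal steps described here is sketched below; the use of scipy's shift and rotate, and the convention that empty pixels have value zero, are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import shift, rotate

def adjust_second_image(first, second, dx, dy, dtheta_deg):
    """Align `second` to the reference image `first` given a pose error.

    dx, dy: relative displacement in pixels (column and row directions).
    dtheta_deg: relative deflection angle in degrees.
    Returns the third image and a simple target-image overlay.
    """
    moved = shift(second, (dy, dx), order=1, mode="constant", cval=0.0)
    deflected = rotate(moved, dtheta_deg, reshape=False, order=1,
                       mode="constant", cval=0.0)

    # Remove pixels of the deflected second image that overlap non-empty
    # pixels of the first image at the same positions.
    overlap = (first != 0) & (deflected != 0)
    third = np.where(overlap, 0.0, deflected)

    # Superimpose the third image on the reference image to obtain the target image.
    target = first + third
    return third, target
```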
In step S207, a target image is determined according to the third image and the first image.
For example, the third image may be superimposed with the first image to obtain the target image.
In step S208, the target scene is reconstructed from the adjusted target image.
After the target image is determined, the target point cloud data may first be determined from the target image, and the target scene may then be reconstructed according to the target point cloud data.
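The disclosure leaves the final reconstruction step open. Purely as one possible sketch (Open3D and Poisson surface reconstruction are assumptions, not part of the patent), the target point cloud data could be recovered from the target image and turned into a surface as follows:

```python
import numpy as np
import open3d as o3d

def reconstruct_from_target_image(target_image, cell_size=0.2):
    """Turn the adjusted target image (pixel value = averaged height per cell)
    back into target point cloud data and reconstruct a surface from it.

    cell_size matches the assumed ground-plane resolution used when the
    two-dimensional images were generated.
    """
    rows, cols = np.nonzero(target_image)
    xyz = np.stack([cols * cell_size,           # x from the column index
                    rows * cell_size,           # y from the row index
                    target_image[rows, cols]],  # z from the stored height value
                   axis=-1)

    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz.astype(np.float64))
    pcd.estimate_normals()
    # Poisson surface reconstruction is just one option for this final step.
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
    return mesh
```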
By adopting the above method, the two-dimensional images are determined for different subsets of the target scene, and the target image corresponding to the target scene is determined according to the information at the positions where the two-dimensional images overlap. In this way, the local regions of the target scene can be fused and aligned, and the full-range target scene can be reconstructed from locally acquired results; the reconstruction range is therefore large and the implementation cost is low.
Fig. 3 is a block diagram illustrating a scene reconstruction apparatus 300 according to an exemplary embodiment. Referring to fig. 3, the apparatus includes a first obtaining module 301, a second obtaining module 302, a third obtaining module 303, an adjusting module 304 and a reconstructing module 305.
A first acquiring module 301 configured to acquire a plurality of scene images of a target scene, wherein different scene images are acquired by an image acquiring apparatus at different acquiring viewing angles;
a second obtaining module 302, configured to obtain point cloud data corresponding to each scene image, resulting in a plurality of point cloud data sets;
a third obtaining module 303 configured to obtain a two-dimensional image corresponding to each point cloud data set;
an adjusting module 304, configured to acquire a pose error between each pair of two-dimensional images, and adjust the pair of two-dimensional images according to the pose error to obtain an adjusted target image;
a reconstruction module 305 configured to reconstruct the target scene from the adjusted target image.
Fig. 4 is a block diagram of an adjustment module shown in accordance with the embodiment shown in fig. 3. Referring to fig. 4, the adjusting module 304 includes:
a first determining submodule 3041, configured to determine a pose error corresponding to each pair of two-dimensional images according to the pair of two-dimensional images and a pre-trained error determination model;
a second determining submodule 3042, configured to adjust the pair of two-dimensional images according to the pose error to obtain an adjusted target image.
Optionally, the first determining submodule 3041 is configured to, for each pair of two-dimensional images, take the two two-dimensional images as inputs of the error determination model and obtain the pose error corresponding to the pair of two-dimensional images output by the error determination model.
Optionally, the second determining submodule 3042 is configured to adjust a second image of the pair of two-dimensional images according to the pose error, with a first image of the pair as a reference image, to obtain a third image; and determine a target image according to the third image and the first image.
Optionally, the pose error includes a relative displacement of the pair of two-dimensional images along a preset direction and a relative deflection angle along the preset direction, and the second determining submodule 3042 is configured to adjust the second image of the pair according to the pose error to obtain the third image by: determining, according to the relative displacement, a displacement value and a displacement direction corresponding to the relative displacement; moving the second image according to the displacement direction to obtain a moved second image; deflecting the moved second image according to the relative deflection angle to obtain a deflected second image; and in a case where pixels in the deflected second image and pixels in the first image overlap at the same positions, removing the pixels in the deflected second image that overlap the first image, to obtain the third image.
Optionally, the reconstruction module 305 is configured to determine the target point cloud data from the target image; and reconstructing the target scene according to the target point cloud data.
FIG. 5 is a block diagram of a second acquisition module according to the embodiment shown in FIG. 3. Referring to fig. 5, the second obtaining module 302 includes:
a first obtaining submodule 3021 configured to obtain, according to the scene image, a depth map corresponding to each scene image output by the depth estimation model through a preset depth estimation model;
a second obtaining submodule 3022 configured to determine, according to a plurality of depth maps, a plurality of point cloud data sets corresponding to the target scene.
Optionally, the third obtaining module 303 is configured to divide each point cloud data set into a plurality of point cloud data blocks according to a preset number of point clouds; for each point cloud data block, obtaining a target point cloud data block through downsampling processing, wherein the target point cloud data block comprises a preset number of point cloud data; and projecting the target point cloud data blocks in the height direction according to the height direction of the target scene to obtain the two-dimensional image corresponding to each scene image.
Optionally, the error determination model is trained by: acquiring a training sample, wherein the training sample comprises a plurality of sample two-dimensional images of a training scene and sample errors corresponding to every two sample two-dimensional images; and training a preset error determination model according to the training sample to obtain the error determination model.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the scene reconstruction method provided by the present disclosure.
Referring to fig. 6, fig. 6 is a functional block diagram of a vehicle 600 according to an exemplary embodiment. The vehicle 600 may be configured in a fully or partially autonomous driving mode. For example, the vehicle 600 may acquire environmental information of its surroundings through the sensing system 620 and derive an automatic driving strategy based on an analysis of the surrounding environmental information to implement full automatic driving, or present the analysis result to the user to implement partial automatic driving.
Vehicle 600 may include various subsystems such as infotainment system 610, perception system 620, decision control system 630, drive system 640, and computing platform 650. Alternatively, vehicle 600 may include more or fewer subsystems, and each subsystem may include multiple components. In addition, each of the sub-systems and components of the vehicle 600 may be interconnected by wire or wirelessly.
In some embodiments, the infotainment system 610 may include a communication system 611, an entertainment system 612, and a navigation system 613.
The communication system 611 may comprise a wireless communication system that may communicate wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system may use 3G cellular communication such as CDMA, EVDO or GSM/GPRS, 4G cellular communication such as LTE, or 5G cellular communication. The wireless communication system may communicate with a wireless local area network (WLAN) using WiFi. In some embodiments, the wireless communication system may communicate directly with a device using an infrared link, Bluetooth or ZigBee. Other wireless protocols may also be used, such as various vehicular communication systems; for example, the wireless communication system may include one or more dedicated short range communications (DSRC) devices, which may carry public and/or private data communications between vehicles and/or roadside stations.
The entertainment system 612 may include a display device, a microphone and a speaker. Based on the entertainment system, a user may listen to the radio or play music in the vehicle; alternatively, a mobile phone may communicate with the vehicle so that its screen is projected onto the display device. The display device may be a touch screen, and the user may operate it by touching the screen.
In some cases, the voice signal of the user may be acquired through a microphone, and certain control of the vehicle 600 by the user, such as adjusting the temperature in the vehicle, etc., may be implemented according to the analysis of the voice signal of the user. In other cases, music may be played to the user through a stereo.
The navigation system 613 may include a map service provided by a map provider to provide navigation of a route of travel for the vehicle 600, and the navigation system 613 may be used in conjunction with a global positioning system 621 and an inertial measurement unit 622 of the vehicle. The map service provided by the map provider can be a two-dimensional map or a high-precision map.
The sensing system 620 may include several sensors that sense information about the environment surrounding the vehicle 600. For example, the sensing system 620 may include a global positioning system 621 (which may be a GPS system, a BeiDou system or another positioning system), an inertial measurement unit (IMU) 622, a lidar 623, a millimeter-wave radar 624, an ultrasonic radar 625 and a camera 626. The sensing system 620 may also include sensors that monitor internal systems of the vehicle 600 (e.g., an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors may be used to detect objects and their corresponding characteristics (position, shape, orientation, velocity, etc.). Such detection and identification is a critical function for the safe operation of the vehicle 600.
Global positioning system 621 is used to estimate the geographic location of vehicle 600.
The inertial measurement unit 622 is used to sense a pose change of the vehicle 600 based on the inertial acceleration. In some embodiments, the inertial measurement unit 622 may be a combination of an accelerometer and a gyroscope.
Lidar 623 utilizes laser light to sense objects in the environment in which vehicle 600 is located. In some embodiments, lidar 623 may include one or more laser sources, laser scanners, and one or more detectors, among other system components.
The millimeter-wave radar 624 utilizes radio signals to sense objects within the surrounding environment of the vehicle 600. In some embodiments, in addition to sensing objects, the millimeter-wave radar 624 may also be used to sense the speed and/or heading of objects.
The ultrasonic radar 625 may sense objects around the vehicle 600 using ultrasonic signals.
The camera 626 is used to capture image information of the surroundings of the vehicle 600. The image capturing device 626 may include a monocular camera, a binocular camera, a structured light camera, a panoramic camera, and the like, and the image information acquired by the image capturing device 626 may include still images or video stream information.
The decision control system 630 includes a computing system 631 that makes analytical decisions based on information acquired by the sensing system 620; the decision control system 630 further includes a vehicle control unit 632 that controls the powertrain of the vehicle 600, as well as a steering system 633, a throttle 634 and a brake system 635 for controlling the vehicle 600.
The computing system 631 may operate to process and analyze the various information acquired by the perception system 620 to identify objects, and/or features in the environment surrounding the vehicle 600. The target may comprise a pedestrian or an animal and the objects and/or features may comprise traffic signals, road boundaries and obstacles. The computing system 631 may use object recognition algorithms, structure From Motion (SFM) algorithms, video tracking, and the like. In some embodiments, the computing system 631 may be used to map an environment, track objects, estimate the speed of objects, and so forth. The computing system 631 may analyze the various information obtained and derive a control strategy for the vehicle.
The vehicle controller 632 may be used to perform coordinated control on the power battery and the engine 641 of the vehicle to improve the power performance of the vehicle 600.
The steering system 633 is operable to adjust the heading of the vehicle 600. For example, in one embodiment it may be a steering wheel system.
The throttle 634 is used to control the operating speed of the engine 641 and thus the speed of the vehicle 600.
The brake system 635 is used to control the deceleration of the vehicle 600. The braking system 635 may use friction to slow the wheel 644. In some embodiments, the braking system 635 may convert kinetic energy of the wheels 644 to electrical current. The braking system 635 may also take other forms to slow the rotational speed of the wheels 644 to control the speed of the vehicle 600.
The drive system 640 may include components that provide powered motion to the vehicle 600. In one embodiment, the drive system 640 may include an engine 641, an energy source 642, a transmission 643, and wheels 644. The engine 641 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a hybrid engine consisting of a gasoline engine and an electric motor, a hybrid engine consisting of an internal combustion engine and an air compression engine. The engine 641 converts the energy source 642 into mechanical energy.
Examples of energy sources 642 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electrical power. The energy source 642 may also provide energy to other systems of the vehicle 600.
The transmission 643 may transmit mechanical power from the engine 641 to the wheels 644. The transmission 643 may include a gearbox, a differential, and a drive shaft. In one embodiment, the transmission 643 may also include other devices, such as clutches. Wherein the drive shaft may include one or more axles that may be coupled to one or more wheels 644.
Some or all of the functionality of the vehicle 600 is controlled by the computing platform 650. Computing platform 650 can include at least one processor 651, which processor 651 can execute instructions 653 stored in a non-transitory computer-readable medium, such as memory 652. In some embodiments, the computing platform 650 may also be a plurality of computing devices that control individual components or subsystems of the vehicle 600 in a distributed manner.
The processor 651 may be any conventional processor, such as a commercially available CPU. Alternatively, the processor 651 may also include a processor such as a Graphics Processor Unit (GPU), a Field Programmable Gate Array (FPGA), a System On Chip (SOC), an Application Specific Integrated Circuit (ASIC), or a combination thereof. Although fig. 6 functionally illustrates a processor, memory, and other elements of a computer in the same block, those skilled in the art will appreciate that the processor, computer, or memory may actually comprise multiple processors, computers, or memories that may or may not be stored within the same physical housing. For example, the memory may be a hard drive or other storage medium located in a different enclosure than the computer. Thus, references to a processor or computer are to be understood as including references to a collection of processors or computers or memories which may or may not operate in parallel. Rather than using a single processor to perform the steps described herein, some of the components, such as the steering and deceleration components, may each have their own processor that performs only computations related to the component-specific functions.
In the embodiments of the present disclosure, the processor 651 may execute the scene reconstruction method described above.
In various aspects described herein, the processor 651 can be located remotely from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle and others are executed by a remote processor, including taking the steps necessary to perform a single maneuver.
In some embodiments, the memory 652 may contain instructions 653 (e.g., program logic), which instructions 653 may be executed by the processor 651 to perform various functions of the vehicle 600. Memory 652 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of infotainment system 610, perception system 620, decision control system 630, drive system 640.
In addition to instructions 653, memory 652 may also store data such as road maps, route information, the location, direction, speed, and other such vehicle data of the vehicle, as well as other information. Such information may be used by the vehicle 600 and the computing platform 650 during operation of the vehicle 600 in autonomous, semi-autonomous, and/or manual modes.
Computing platform 650 may control functions of vehicle 600 based on inputs received from various subsystems (e.g., drive system 640, perception system 620, and decision control system 630). For example, computing platform 650 may utilize input from decision control system 630 in order to control steering system 633 to avoid obstacles detected by sensing system 620. In some embodiments, the computing platform 650 is operable to provide control over many aspects of the vehicle 600 and its subsystems.
Optionally, one or more of these components described above may be mounted separately from or associated with the vehicle 600. For example, the memory 652 may exist partially or completely separate from the vehicle 600. The above components may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the above components are only an example; in practical applications, components in the above modules may be added or removed according to actual needs, and Fig. 6 should not be construed as limiting the embodiments of the present disclosure.
An autonomous automobile traveling on a roadway, such as the vehicle 600 above, may identify objects within its surrounding environment to determine an adjustment to its current speed. The object may be another vehicle, a traffic control device, or another type of object. In some examples, each identified object may be considered independently, and the object's respective characteristics, such as its current speed, acceleration, and distance from the vehicle, may be used to determine the speed to which the autonomous vehicle is to be adjusted.
Optionally, the vehicle 600 or a sensory and computing device associated with the vehicle 600 (e.g., computing system 631, computing platform 650) may predict the behavior of an identified object based on the characteristics of that object and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.). Optionally, since the behaviors of the identified objects may depend on one another, all identified objects may also be considered together to predict the behavior of a single identified object. The vehicle 600 is able to adjust its speed based on the predicted behavior of the identified object. In other words, the autonomous vehicle is able to determine, based on the predicted behavior of the object, which stable state the vehicle needs to adjust to (e.g., accelerate, decelerate, or stop). In this process, other factors may also be considered to determine the speed of the vehicle 600, such as the lateral position of the vehicle 600 in the road being traveled, the curvature of the road, and the proximity of static and dynamic objects.
In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the vehicle 600 to cause the autonomous vehicle to follow a given trajectory and/or maintain a safe lateral and longitudinal distance from objects in the vicinity of the autonomous vehicle (e.g., vehicles in adjacent lanes on the road).
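As a purely illustrative aid (not part of this disclosure), the following Python sketch shows one way such speed-adjustment logic could be organized; the class, function, thresholds, and the time-to-collision heuristic are all hypothetical:

from dataclasses import dataclass

@dataclass
class PredictedObject:
    distance_m: float         # longitudinal gap between the object and the ego vehicle
    closing_speed_mps: float  # positive when the gap is shrinking

def choose_speed_command(objects, ego_speed_mps, road_curvature):
    # Return 'accelerate', 'decelerate', or 'stop' from predicted object behavior
    # and road context, using a simple time-to-collision heuristic.
    ttc = min((o.distance_m / o.closing_speed_mps
               for o in objects if o.closing_speed_mps > 0.0),
              default=float("inf"))
    if ttc < 2.0:
        return "stop"
    if ttc < 5.0 or road_curvature > 0.05:
        return "decelerate"
    return "accelerate" if ego_speed_mps < 16.0 else "decelerate"

print(choose_speed_command([PredictedObject(30.0, 5.0)], ego_speed_mps=12.0, road_curvature=0.01))
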
The vehicle 600 may be any type of vehicle, such as a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a recreational vehicle, a train, etc., and the disclosed embodiment is not particularly limited.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the scene reconstruction method described above when executed by the programmable apparatus.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A method for scene reconstruction, the method comprising:
acquiring a plurality of scene images of a target scene, wherein different scene images are acquired by an image acquisition device under different acquisition viewing angles;
acquiring point cloud data corresponding to each scene image to obtain a plurality of point cloud data sets;
acquiring a two-dimensional image corresponding to each point cloud data set;
acquiring a pose error between each pair of two-dimensional images, and adjusting each pair of two-dimensional images according to the pose error to obtain an adjusted target image;
and reconstructing the target scene according to the adjusted target image.
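For illustration only (this sketch is not part of the claims, and every function here is a simplified, hypothetical stand-in), the sequence of steps in claim 1 could be wired together in Python roughly as follows, using NumPy and synthetic data:

import numpy as np

def images_to_point_clouds(images):
    # Stand-in for depth estimation plus back-projection: one 3D point per pixel,
    # with the pixel value used as a stand-in height.
    return [np.dstack([*np.indices(img.shape[:2]), img]).reshape(-1, 3)
            for img in images]

def point_cloud_to_2d(points):
    # Stand-in for the height-direction projection: keep the two ground-plane axes.
    return points[:, :2]

def pose_error(img_a, img_b):
    # Stand-in for the pre-trained error determination model: mean 2D offset.
    return img_b.mean(axis=0) - img_a.mean(axis=0)

def reconstruct(target_images):
    # Stand-in reconstruction: merge all aligned 2D point sets into one.
    return np.vstack(target_images)

scene_images = [np.random.rand(4, 4) for _ in range(3)]      # images from different view angles
clouds = images_to_point_clouds(scene_images)                 # point cloud data sets
planar = [point_cloud_to_2d(c) for c in clouds]               # two-dimensional images
reference, aligned = planar[0], [planar[0]]
for image in planar[1:]:
    aligned.append(image - pose_error(reference, image))      # adjust by the pose error
print(reconstruct(aligned).shape)

Each step above is deliberately trivial; more concrete (still hypothetical) sketches of individual stages follow the corresponding claims below.
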
2. The method of claim 1, wherein the acquiring a pose error between each pair of two-dimensional images and adjusting each pair of two-dimensional images according to the pose error to obtain an adjusted target image comprises:
determining a pose error corresponding to each pair of two-dimensional images according to the pair of two-dimensional images and a pre-trained error determination model;
and adjusting each pair of two-dimensional images according to the pose error to obtain an adjusted target image.
3. The method according to claim 2, wherein the determining a pose error corresponding to each pair of two-dimensional images according to the pair of two-dimensional images and a pre-trained error determination model comprises:
for each pair of two-dimensional images, taking the pair of two-dimensional images as an input of the error determination model to obtain the pose error, output by the error determination model, corresponding to the pair of two-dimensional images.
4. The method according to claim 2, wherein the adjusting each pair of two-dimensional images according to the pose error to obtain an adjusted target image comprises:
taking a first image of the pair of two-dimensional images as a reference image, and adjusting a second image of the pair of two-dimensional images according to the pose error to obtain a third image;
and determining a target image according to the third image and the first image.
5. The method according to claim 4, wherein the pose error includes a relative displacement of the pair of two-dimensional images along a preset direction and a relative deflection angle along the preset direction,
the adjusting the second image of the pair of two-dimensional images according to the pose error to obtain a third image comprises:
determining a displacement value and a displacement direction corresponding to the relative displacement according to the relative displacement;
moving the second image according to the displacement value and the displacement direction to obtain a moved second image;
deflecting the moved second image according to the relative deflection angle to obtain a deflected second image;
and under the condition that the pixels in the deflected second image and the pixels in the first image are overlapped at the same position, removing the pixels overlapped with the first image in the deflected second image to obtain a third image.
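As a hedged illustration of the adjustment described in claim 5 (not the claimed implementation), the following NumPy sketch treats each two-dimensional image as a set of projected 2D points, applies the relative displacement and deflection angle to the second image, and removes points that overlap the reference image; all values are invented for the example:

import numpy as np

def adjust_second_image(first, second, displacement, deflection_rad):
    # Move the second image by the relative displacement.
    moved = second + displacement
    # Deflect the moved second image by the relative deflection angle.
    c, s = np.cos(deflection_rad), np.sin(deflection_rad)
    deflected = moved @ np.array([[c, -s], [s, c]]).T
    # Remove pixels of the deflected second image that overlap pixels of the first image.
    overlaps = np.any(np.all(np.isclose(deflected[:, None, :], first[None, :, :]), axis=-1), axis=1)
    third = deflected[~overlaps]
    # The target image is determined from the first image and the third image.
    return np.vstack([first, third])

first = np.array([[0.0, 0.0], [1.0, 0.0]])
second = np.array([[-1.0, 0.0], [2.0, 1.0]])
print(adjust_second_image(first, second, displacement=np.array([1.0, 0.0]), deflection_rad=0.0))
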
6. The method of claim 1, wherein reconstructing the target scene from the adjusted target image comprises:
determining target point cloud data according to the target image;
and reconstructing the target scene according to the target point cloud data.
7. The method of claim 1, wherein obtaining point cloud data corresponding to each scene image to obtain a plurality of point cloud data sets comprises:
obtaining, according to the plurality of scene images and through a preset depth estimation model, a depth map corresponding to each scene image output by the depth estimation model;
and determining a plurality of point cloud data sets corresponding to the target scene according to the plurality of depth maps.
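A minimal sketch of how a depth map might be turned into a point cloud data set is given below; it assumes a simple pinhole camera model with illustrative intrinsic parameters, since the actual depth estimation model and camera parameters are not specified here:

import numpy as np

def depth_map_to_point_cloud(depth, fx=500.0, fy=500.0, cx=None, cy=None):
    # Back-project every pixel (u, v) with depth z into a 3D point (x, y, z).
    h, w = depth.shape
    cx = (w - 1) / 2.0 if cx is None else cx
    cy = (h - 1) / 2.0 if cy is None else cy
    v, u = np.indices((h, w))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

depth_maps = [np.full((4, 6), 2.0), np.full((4, 6), 3.0)]   # stand-ins for depth model output
point_cloud_sets = [depth_map_to_point_cloud(d) for d in depth_maps]
print(point_cloud_sets[0].shape)   # (24, 3): one 3D point per pixel
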
8. The method of claim 1, wherein the acquiring a corresponding two-dimensional image of each point cloud dataset comprises:
dividing each point cloud data set into a plurality of point cloud data blocks according to a preset number of point clouds;
for each point cloud data block, obtaining a target point cloud data block through downsampling processing, wherein the target point cloud data block comprises a preset number of point cloud data;
and projecting the target point cloud data blocks along the height direction of the target scene to obtain the two-dimensional image corresponding to each scene image.
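The block division, downsampling, and height-direction projection of claim 8 could look roughly like the following NumPy sketch; the block size, target size, and random downsampling strategy are assumptions for illustration, not taken from this disclosure:

import numpy as np

def blocks_downsample_project(points, block_size=1024, target_size=256, seed=0):
    # Split the point cloud set into blocks of a preset number of points,
    # downsample each block to a fixed size, then project along the height (z) axis.
    rng = np.random.default_rng(seed)
    projected = []
    for start in range(0, len(points), block_size):
        block = points[start:start + block_size]
        if len(block) > target_size:
            block = block[rng.choice(len(block), size=target_size, replace=False)]
        projected.append(block[:, :2])   # drop z: projection along the height direction
    return np.vstack(projected)          # the "two-dimensional image" as a planar point set

cloud = np.random.rand(5000, 3)          # one point cloud data set
image_2d = blocks_downsample_project(cloud)
print(image_2d.shape)
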
9. The method according to any of claims 1-8, wherein the error determination model is trained by:
acquiring a training sample, wherein the training sample comprises a plurality of sample two-dimensional images of a training scene and a sample error corresponding to each pair of sample two-dimensional images;
and training a preset error determination model according to the training sample to obtain the error determination model.
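One hypothetical way to train an error determination model on such samples is sketched below in PyTorch; the network architecture, feature representation, and training hyperparameters are illustrative assumptions only:

import torch
from torch import nn

class ErrorModel(nn.Module):
    # Toy model: predict (dx, dy, dtheta) for a pair of flattened 2D-image features.
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * feat_dim, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, img_a, img_b):
        return self.net(torch.cat([img_a, img_b], dim=-1))

model = ErrorModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
# Random stand-ins for the training samples: pairs of sample two-dimensional images
# and the sample pose error for each pair.
img_a, img_b = torch.rand(32, 64), torch.rand(32, 64)
sample_error = torch.rand(32, 3)
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(img_a, img_b), sample_error)
    loss.backward()
    optimizer.step()
print(float(loss))

In practice the two-dimensional images would presumably be encoded by a learned feature extractor rather than flattened directly; the flattening here only keeps the sketch short.
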
10. A scene reconstruction apparatus, comprising:
a first acquisition module configured to acquire a plurality of scene images of a target scene, wherein different scene images are acquired by an image acquisition device under different acquisition viewing angles;
a second acquisition module configured to acquire point cloud data corresponding to each scene image to obtain a plurality of point cloud data sets;
a third acquisition module configured to acquire a two-dimensional image corresponding to each point cloud data set;
an adjusting module configured to acquire a pose error between each pair of two-dimensional images and adjust each pair of two-dimensional images according to the pose error to obtain an adjusted target image;
a reconstruction module configured to reconstruct the target scene from the adjusted target image.
11. A computer-readable storage medium, on which computer program instructions are stored, which program instructions, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 9.
12. A vehicle, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
perform the steps of the method of any one of claims 1-9.
CN202210837777.8A 2022-07-15 2022-07-15 Scene reconstruction method and device, readable storage medium and vehicle Active CN115205461B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210837777.8A CN115205461B (en) 2022-07-15 2022-07-15 Scene reconstruction method and device, readable storage medium and vehicle

Publications (2)

Publication Number Publication Date
CN115205461A true CN115205461A (en) 2022-10-18
CN115205461B CN115205461B (en) 2023-11-14

Family

ID=83581958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210837777.8A Active CN115205461B (en) 2022-07-15 2022-07-15 Scene reconstruction method and device, readable storage medium and vehicle

Country Status (1)

Country Link
CN (1) CN115205461B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105913489A (en) * 2016-04-19 2016-08-31 东北大学 Indoor three-dimensional scene reconstruction method employing plane characteristics
CN107833253A (en) * 2017-09-22 2018-03-23 北京航空航天大学青岛研究院 A kind of camera pose refinement method towards the generation of RGBD three-dimensional reconstructions texture
CN109003325A (en) * 2018-06-01 2018-12-14 网易(杭州)网络有限公司 A kind of method of three-dimensional reconstruction, medium, device and calculate equipment
CN109658449A (en) * 2018-12-03 2019-04-19 华中科技大学 A kind of indoor scene three-dimensional rebuilding method based on RGB-D image
CN111127524A (en) * 2018-10-31 2020-05-08 华为技术有限公司 Method, system and device for tracking trajectory and reconstructing three-dimensional image
CN113284240A (en) * 2021-06-18 2021-08-20 深圳市商汤科技有限公司 Map construction method and device, electronic equipment and storage medium
CN113610702A (en) * 2021-08-09 2021-11-05 北京百度网讯科技有限公司 Picture construction method and device, electronic equipment and storage medium
CN113936085A (en) * 2021-12-17 2022-01-14 荣耀终端有限公司 Three-dimensional reconstruction method and device

Also Published As

Publication number Publication date
CN115205461B (en) 2023-11-14

Similar Documents

Publication Publication Date Title
CN115042821B (en) Vehicle control method, vehicle control device, vehicle and storage medium
CN115265561A (en) Vehicle positioning method, device, vehicle and medium
CN114842075B (en) Data labeling method and device, storage medium and vehicle
CN115330923B (en) Point cloud data rendering method and device, vehicle, readable storage medium and chip
CN115220449B (en) Path planning method, device, storage medium, chip and vehicle
CN115205365A (en) Vehicle distance detection method and device, vehicle, readable storage medium and chip
CN115100377A (en) Map construction method and device, vehicle, readable storage medium and chip
CN115222941A (en) Target detection method and device, vehicle, storage medium, chip and electronic equipment
CN115035494A (en) Image processing method, image processing device, vehicle, storage medium and chip
CN114782638B (en) Method and device for generating lane line, vehicle, storage medium and chip
CN115205311B (en) Image processing method, device, vehicle, medium and chip
CN114255275A (en) Map construction method and computing device
CN115164910B (en) Travel route generation method, travel route generation device, vehicle, storage medium, and chip
CN114842455B (en) Obstacle detection method, device, equipment, medium, chip and vehicle
CN115202234B (en) Simulation test method and device, storage medium and vehicle
CN115170630B (en) Map generation method, map generation device, electronic equipment, vehicle and storage medium
CN115205848A (en) Target detection method, target detection device, vehicle, storage medium and chip
CN115100630A (en) Obstacle detection method, obstacle detection device, vehicle, medium, and chip
CN115205179A (en) Image fusion method and device, vehicle and storage medium
CN115205461B (en) Scene reconstruction method and device, readable storage medium and vehicle
CN115056784A (en) Vehicle control method, device, vehicle, storage medium and chip
CN114822216B (en) Method and device for generating parking space map, vehicle, storage medium and chip
CN115082573B (en) Parameter calibration method and device, vehicle and storage medium
CN115082886B (en) Target detection method, device, storage medium, chip and vehicle
CN115082772B (en) Location identification method, location identification device, vehicle, storage medium and chip

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant