CN111275750A - Indoor space panoramic image generation method based on multi-sensor fusion - Google Patents
- Publication number
- CN111275750A (application CN202010059963.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- images
- panoramic image
- station
- indoor space
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/521 — Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
- G01C11/00 — Photogrammetry or videogrammetry, e.g. stereogrammetry; photographic surveying
- G01C11/04 — Interpretation of pictures
- G06T3/4038 — Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
- G06T7/33 — Determination of transform parameters for the alignment of images (image registration) using feature-based methods
- G06T7/55 — Depth or shape recovery from multiple images
- G06T2200/32 — Indexing scheme for image data processing or generation involving image mosaicing
- G06T2207/10028 — Range image; depth image; 3D point clouds
Abstract
The invention provides an indoor space panoramic image generation method based on multi-sensor fusion that generates a panoramic image with depth information and high image resolution. The method comprises the following steps: moving the measuring vehicle according to a predetermined movement rule to acquire image and point cloud data at each station; fusing the image acquired by each image collector with the point cloud data to generate a corresponding high-quality depth map; constructing a virtual image center and performing spherical projection with the depth value of each depth map as the projection distance to obtain the corresponding registered images; filling bottom holes in the image of the current station using the images of the previous and next stations; and stitching and fusing the overlapping areas of the registered images. The method finally outputs a panoramic image with depth information and high resolution that supports functions such as measurement, three-dimensional reconstruction, and visual positioning.
Description
Technical Field
The invention belongs to the technical field of image mapping and processing, and particularly relates to an indoor space panoramic image generation method based on multi-sensor fusion that can generate a panoramic image with depth information and high image resolution.
Background
A digital twin is a simulation process that integrates multiple disciplines, physical quantities, scales, and probabilities by fully exploiting data such as physical models, sensor updates, and operation history, completing the mapping in virtual space so as to reflect the full life cycle of the corresponding physical equipment. The panoramic image is key digital data for an indoor digital-twin system: it carries rich color information, gives the viewer a strong sense of immersion, and plays a key role in the monitoring of large public places such as airports and railway stations, in digital cities, in medical image analysis, and so on.
However, most panoramic images currently on the market can only be browsed; they carry no depth (i.e. scale) information, so operations such as measurement and three-dimensional reconstruction cannot be performed on them. Moreover, because indoor parallax is large, a multi-camera platform performing panoramic stitching without accurate depth information easily produces misalignment and ghosting at the seams, yielding low-quality panoramic images.
Disclosure of Invention
The present invention has been made to solve the above problems, and its object is to provide a multi-sensor-fusion method for generating an indoor space panoramic image that has depth information and high image resolution.
The invention provides a method for generating an indoor space panoramic image based on multi-sensor fusion, characterized by comprising the following steps. An image acquisition step: moving a measuring vehicle carrying a plurality of sensors and a plurality of image collectors according to a predetermined movement rule to acquire image and point cloud data at each station. A depth map generation step: unifying the calibration relations between the sensors and the image collectors into one coordinate system, then fusing the image collected by each image collector with the point cloud data to generate a corresponding depth map. An image registration step: constructing a virtual image center for the image collectors, then performing spherical projection with the depth value of each depth map as the projection distance to realize registration and obtain the corresponding registered images. A bottom image filling step: filling bottom holes in the image of the current station using the images of the previous and next stations acquired as the measuring vehicle moves. An image stitching and fusion step: stitching and fusing the overlapping areas of all registered images of each station to obtain a panoramic image with depth information.
The method may further be characterized in that the plurality of sensors comprise an inertial measurement unit (IMU) and three lidars, and the plurality of image collectors comprise six fisheye cameras, one pointing at the ceiling and the other five evenly distributed in a horizontal ring. In the image acquisition step, the IMU and the lidars autonomously sense the motion state of the measuring vehicle and determine its geographic position and accurate pose even in environments without GNSS signals, while the fisheye cameras take pictures along the route to obtain multi-station image information.
The method may further be characterized in that, in the depth map generation step, the relative poses of the inertial measurement unit, the lidars, and the fisheye cameras are determined through calibration between camera pairs, between cameras and lidars, between the lidars and the IMU, and between lidar pairs, and are then unified into one coordinate system; fusion of image and point cloud is thereby realized, and the depth map corresponding to each image is obtained.
The method may further be characterized in that, in the image registration step, a virtual image space coordinate system is constructed from the six fisheye cameras; the converted depth value is used as the sphere radius to project each image onto the sphere, a mapping is established between the sphere and the panoramic image, and each image is registered to its panoramic image region to obtain the corresponding registered image.
The method may further be characterized in that, in the bottom image filling step, the distance from a bottom hole to the virtual optical center is obtained by trigonometry from the vertical distance between the virtual optical center of the virtual image space coordinate system and the ground; the three-dimensional coordinates of the hole are then computed and projected into the fisheye cameras of the previous and next stations to retrieve the corresponding image content for filling the bottom.
The method may further be characterized in that, in the image stitching and fusion step, an optimal seam acquisition step and an image fusion step are executed. The optimal seam acquisition step builds a graph-cut model over the overlapping areas of the images, takes image color, gradient, and texture as energy terms, and selects the cut line with the lowest energy as the optimal seam. The image fusion step performs image fusion near the optimal seam: a Laplacian pyramid is built for each image, the corresponding pyramid levels are merged over the overlapping area, and the merged pyramid is finally collapsed by inverse Laplacian transformation to obtain the final fused image as the panoramic image.
The method may further be characterized in that, in the optimal seam acquisition step, for the overlapping area: first, the energy of each pixel is computed by fusing color, gradient, and texture information; second, a graph-cut model is constructed in which each pixel is a node with four neighbors, namely the pixels above, below, left, and right of it; finally, the constructed graph is optimized with the graph-cut model, and the cut line with the lowest energy is selected as the optimal seam.
Action and Effect of the invention
According to the indoor space panoramic image generation method based on multi-sensor fusion, the measuring vehicle is first moved according to a predetermined movement rule to acquire image and point cloud data at each station; depth map generation, image registration, bottom image filling, and image stitching and fusion are then applied in sequence to the images collected at each station. Fusion of image and point cloud yields a high-quality depth map, spherical projection is performed with the depth value as the projection distance, and a panoramic image with depth information and high image resolution is finally output.
Drawings
Fig. 1 is a schematic flow chart of an indoor space panoramic image generation method based on multi-sensor fusion in an embodiment of the present invention.
Detailed Description
The invention is further illustrated by the following examples, which are not intended to limit the scope of the invention.
< example >
Fig. 1 is a schematic flow chart of an indoor space panoramic image generation method based on multi-sensor fusion in an embodiment of the present invention.
As shown in fig. 1, in the present embodiment, the indoor space panoramic image generation method based on multi-sensor fusion generates a measurable panoramic image from multi-sensor-fused indoor-space digital-twin data, where "measurable" means that the generated panoramic image carries depth information and has high resolution. The method comprises the following steps:
s1, image acquisition: the measuring vehicle is moved based on a predetermined movement rule to thereby perform acquisition of image or point cloud data of each station. The measuring vehicle used here integrates multiple sensors, moves indoors, and can provide data sources such as images and point clouds, including images (2D) and three-dimensional laser scanning point clouds (3D). The measuring vehicle is provided with a plurality of sensors and a plurality of image collectors.
In the present embodiment, the sensors include an inertial measurement unit (IMU) and three lidars (LiDAR); the image collectors include six fisheye cameras, one collecting images of the ceiling and the other five evenly distributed in a horizontal ring to collect images of the sides of the room. Before acquisition, the camera intrinsics are calibrated, together with the relative poses between camera pairs, between cameras and lidars, and between the lidars and the IMU. During acquisition, the moving measuring vehicle prompts a photo capture approximately every meter.
The image acquisition step S1 uses the principles and methods of multi-source data fusion, such as simultaneous localization and mapping (SLAM). The IMU and the lidars autonomously sense the motion state of the measuring vehicle and can determine its geographic position and accurate pose even without GNSS signals, while the fisheye cameras photograph along the route to obtain multi-station image information.
S2, depth map generation step: the calibration relations between the sensors and the image collectors are unified into one coordinate system, and the image collected by each image collector is then fused with the point cloud data to generate a corresponding depth map.
In the depth map generation step S2, the relative poses of the sensors and the image collectors are determined accurately, i.e. their poses are calibrated. Specifically, the relative poses of the IMU, the lidars, and the fisheye cameras are determined through calibration between camera pairs, between cameras and lidars, between the lidars and the IMU, and between lidar pairs, and are then unified into one coordinate system. This facilitates multi-source data fusion and processing, realizes fusion of image and point cloud, and yields a high-quality depth map for each image. The number of depth maps equals the number of fisheye cameras, i.e. six.
In step S2, fusing the point cloud with the image to generate a high-quality depth map requires the point cloud to be denoised and triangulated. Noise points are eliminated using data from different SLAM frames, removing their influence on depth map generation. Triangulating the point cloud, on the one hand, prevents inaccurate depths caused by farther points leaking through the gaps between nearer points; on the other hand, it eases interpolation, since the point cloud and the image differ in resolution.
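The core of step S2, projecting lidar points into a camera to obtain a per-pixel depth map, can be sketched as follows. This is a minimal illustration assuming an idealized pinhole camera (the patent's fisheye cameras would need an omnidirectional model) and omitting the denoising and triangulation-based interpolation; the function name and toy data are ours, not the patent's.

```python
import numpy as np

def point_cloud_to_depth_map(points_cam, K, width, height):
    """Project 3-D points (already in the camera frame) into the image
    plane and keep the nearest depth per pixel (z-buffering)."""
    depth = np.full((height, width), np.inf)
    pts = points_cam[points_cam[:, 2] > 0]   # keep points in front of the camera
    uv = (K @ pts.T).T                       # homogeneous pixel coordinates
    uv = uv[:, :2] / uv[:, 2:3]              # perspective divide
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, z in zip(u[inside], v[inside], pts[inside, 2]):
        depth[vi, ui] = min(depth[vi, ui], z)  # nearest point wins
    depth[np.isinf(depth)] = 0.0               # 0 marks "no lidar return"
    return depth

# Toy example: two points along the same ray; the nearer one must win.
K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])
pts = np.array([[0.0, 0.0, 2.0], [0.0, 0.0, 5.0]])
dm = point_cloud_to_depth_map(pts, K, 64, 64)
print(dm[32, 32])  # 2.0 — the z-buffer keeps the closer point
```

The z-buffer step is the simplest guard against the occlusion leakage the patent addresses more thoroughly by triangulating the point cloud.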
S3, image registration step: a virtual image center is constructed for the image collectors, and spherical projection is performed with the depth value of each depth map as the projection distance to realize registration. The resulting registered images carry depth information, so measurement, three-dimensional reconstruction, and similar operations become possible.
In the image registration step S3, a virtual image space coordinate system is constructed from the six fisheye cameras. The converted depth value is used as the sphere radius to project each image onto the sphere, and a mapping is established between the sphere and the panoramic image, so that each image can be registered to its panoramic image region to obtain the corresponding registered image. There are six registered images.
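The sphere-to-panorama mapping of S3 can be sketched for one pixel: the pixel's viewing ray is scaled by its depth to a 3-D point, and the direction of that point from the virtual center gives its panorama coordinates. The equirectangular layout, axis convention, and function name here are illustrative assumptions, not the patent's specification.

```python
import numpy as np

def to_panorama(ray_dir, depth, cam_center, pano_width, pano_height):
    """Place one pixel on the shared sphere around the virtual image center.
    ray_dir: unit viewing ray of the pixel in the virtual frame;
    depth scales it to a 3-D point, whose direction from the origin
    (the virtual center) yields equirectangular panorama coordinates."""
    point = cam_center + depth * np.asarray(ray_dir, float)
    d = point / np.linalg.norm(point)            # direction from virtual center
    lon = np.arctan2(d[0], d[2])                 # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(d[1], -1.0, 1.0))    # latitude in [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * pano_width
    v = (lat / np.pi + 0.5) * pano_height
    return u, v

# A point straight ahead (+z) of a camera at the virtual center
# lands in the middle of the panorama.
u, v = to_panorama([0.0, 0.0, 1.0], 3.0, np.zeros(3), 2048, 1024)
print(u, v)  # 1024.0 512.0
```

Because the projection distance is the measured depth rather than an assumed constant radius, pixels from different cameras land at parallax-consistent panorama positions, which is what suppresses seam misalignment indoors.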
S4, bottom image filling step: bottom holes in the image of the current station are filled using the images of the previous and next stations acquired as the measuring vehicle moves.
In this embodiment, the bottom is filled using image information from the previous and next stations because no camera faces the bottom.
In the bottom image filling step S4, the distance from a bottom hole to the virtual optical center is obtained by trigonometry from the vertical distance between the virtual optical center of the virtual image space coordinate system and the ground. The three-dimensional coordinates of the hole are then computed and projected into the fisheye cameras of the previous and next stations to retrieve the corresponding image content for filling the bottom.
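The trigonometric step can be sketched as a ray-ground intersection: with the virtual optical center a height h above the ground, the slant range of a downward viewing ray is h divided by the cosine of its angle from the nadir. The frame convention (-y = down) and names are our assumptions for illustration.

```python
import numpy as np

def ground_point(cam_center, h, direction):
    """Intersect a downward viewing ray from the virtual optical center
    with the ground plane. The slant range follows the trigonometric
    relation range = h / cos(angle-from-nadir) = h / |d_y|."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    assert d[1] < 0, "ray must point below the horizon"
    rng = h / (-d[1])            # h / cos(angle from nadir)
    return cam_center + rng * d  # 3-D coordinate of the bottom-hole point

center = np.array([0.0, 1.5, 0.0])   # virtual center 1.5 m above the ground
p = ground_point(center, 1.5, [0.0, -1.0, 1.0])
print(p)  # [0. 0. 1.5] — ground hit 1.5 m ahead of the vehicle
```

The resulting 3-D point would then be projected into the calibrated fisheye cameras of the adjacent stations to sample the fill color.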
S5, image stitching and fusion step: the overlapping areas of all registered images of each station are stitched and fused to obtain a panoramic image with depth information.
In the image stitching and fusion step S5, an optimal seam acquisition step S51 and an image fusion step S52 are performed.
S51, optimal seam acquisition step: after the six registered panoramic images are obtained, an optimal seam must be found in each overlapping area such that only the image on one side of the line is used on that side and only the image on the other side is used on the other. This makes transitions natural and reduces misalignment and ghosting. A graph-cut model is established over the overlapping areas, image color, gradient, and texture are used as energy terms, and the cut line with the lowest energy is taken as the optimal seam.
In the optimal seam acquisition step S51, within the overlapping region:
First, the energy of each pixel is computed by fusing color, gradient, and texture information.
Second, a graph-cut model is constructed: each pixel is a node with four neighbors (the pixels above, below, left, and right of it), an edge connects each pair of neighboring nodes, and the energy of an edge is defined as the sum of the energies of the two pixels it connects.
Finally, the constructed graph is optimized with the graph-cut model, and the cut line with the lowest energy is found as the optimal seam.
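A full graph cut needs a max-flow solver, but the idea can be illustrated with the same per-pixel energy driving a dynamic-programming search for a single lowest-energy vertical seam — a simplification of the patent's graph-cut formulation, with an energy combining only color and gradient differences (texture omitted) and all names our own.

```python
import numpy as np

def pixel_energy(a, b):
    """Overlap energy: color difference plus horizontal-gradient
    difference, a simplified stand-in for the patent's
    color/gradient/texture terms."""
    color = np.abs(a - b)
    grad = np.abs(np.gradient(a, axis=1) - np.gradient(b, axis=1))
    return color + grad

def best_vertical_seam(energy):
    """Minimum-energy top-to-bottom seam by dynamic programming."""
    h, w = energy.shape
    cost = energy.copy()
    for y in range(1, h):  # accumulate cheapest path into each pixel
        left = np.r_[np.inf, cost[y - 1, :-1]]
        right = np.r_[cost[y - 1, 1:], np.inf]
        cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):  # backtrack through the cost table
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam

# Two overlapping strips that agree only in column 2:
# the seam should run straight down that column.
a = np.zeros((4, 5)); b = np.ones((4, 5)); b[:, 2] = 0.0
seam = best_vertical_seam(pixel_energy(a, b))
print(seam)  # [2 2 2 2]
```

The seam settles where the two images agree, which is exactly the low-energy cut the graph-cut model seeks; a real implementation would solve the full 4-connected graph so the seam need not be a monotone vertical path.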
S52, image fusion step: image fusion is performed near the optimal seam so that transitions between the images are natural rather than abrupt. Specifically, a Laplacian pyramid is built for each image, the corresponding pyramid levels are then merged over the overlapping area, and the merged pyramid is finally collapsed by inverse Laplacian transformation to obtain the final fused image as the panoramic image.
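The build-merge-collapse procedure of S52 can be sketched in numpy. This is a minimal illustration using box filters and nearest-neighbor resampling in place of the Gaussian kernels normally used for Laplacian pyramid blending; function names and the 8×8 toy images are our assumptions, not the patent's implementation.

```python
import numpy as np

def down(img):  # 2x2 mean pooling (stand-in for blur + subsample)
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def up(img):    # nearest-neighbor upsample
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    pyr = []
    for _ in range(levels):
        small = down(img)
        pyr.append(img - up(small))  # detail lost by downsampling
        img = small
    pyr.append(img)                  # coarsest (Gaussian) level
    return pyr

def blend(img_a, img_b, mask, levels=2):
    """Merge each Laplacian level of the two images weighted by a
    pyramid of the seam mask, then collapse back to full resolution."""
    pa, pb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    m, masks = mask.astype(float), []
    for _ in range(levels + 1):
        masks.append(m)
        m = down(m)
    out = masks[levels] * pa[levels] + (1 - masks[levels]) * pb[levels]
    for i in range(levels - 1, -1, -1):  # inverse transform: up + add detail
        out = up(out) + masks[i] * pa[i] + (1 - masks[i]) * pb[i]
    return out

a = np.zeros((8, 8)); b = np.ones((8, 8))
mask = np.zeros((8, 8)); mask[:, :4] = 1.0  # take a left of the seam, b right
out = blend(a, b, mask)
print(out[0, 0], out[0, 7])  # 0.0 on the far left, 1.0 on the far right
```

Because low-frequency content is merged at coarse levels and detail at fine levels, any residual exposure difference across the seam spreads over a wide band instead of showing as a hard edge.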
Action and Effect of the Embodiment
According to the indoor space panoramic image generation method based on multi-sensor fusion of this embodiment, a measuring vehicle moving indoors collects the images of each station. The vehicle integrates sensors such as an inertial measurement unit, fisheye cameras, and lidars (LiDAR), and these sensors are calibrated against one another: camera to camera, camera to lidar, lidar to IMU, and lidar to lidar. Their relative poses can thus be unified into one coordinate system; the point cloud and the image are fused to generate a high-quality depth map; spherical projection registration onto the panoramic image is performed with the depth value as the projection distance; bottom holes are filled using the images of the previous and next stations; an optimal seam is found in each image overlap to reduce misalignment and ghosting; and image pyramid fusion is performed near the seam. The generated panoramic image is therefore of high stitching quality and carries depth information and accurate pose information.
The method can output indoor multi-station panoramic images with a spacing of 1 m between adjacent panoramas, an angular resolution below 0.3 degrees, and an image resolution above 3 million pixels. The panoramic images carry three-dimensional information, so the relative three-dimensional coordinates of object points can be computed back from them, enabling functions such as measurement and three-dimensional reconstruction and providing data for indoor visual positioning.
The present embodiment combines the advantages of image and LiDAR point cloud data. Images capture the color of objects, while LiDAR point clouds capture their coordinates; registering, i.e. fusing, the two makes them complementary and yields more complete and useful information. In particular, the point cloud provides depth, and performing spherical projection with that depth as the projection distance gives the generated panoramic image its depth information.
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that this is by way of example only, and that the scope of the invention is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the spirit and scope of the invention, and these changes and modifications are within the scope of the invention.
Claims (7)
1. A method for generating an indoor space panoramic image based on multi-sensor fusion is characterized by comprising the following steps:
an image acquisition step: moving a measuring vehicle having a plurality of sensors and a plurality of image collectors according to a predetermined movement rule to acquire image and point cloud data at each station;
a depth map generation step: unifying the calibration relations between the plurality of sensors and the plurality of image collectors into one coordinate system, then fusing the image collected by each image collector with the point cloud data to generate a corresponding depth map;
an image registration step: constructing a virtual image center for the plurality of image collectors, then performing spherical projection with the depth value of each depth map as the projection distance to realize registration and obtain corresponding registered images;
a bottom image filling step: filling bottom holes in the image of the current station using the images of the previous and next stations acquired as the measuring vehicle moves; and
an image stitching and fusion step: stitching and fusing the overlapping areas of all registered images of each station to obtain a panoramic image with depth information.
2. The indoor space panoramic image generation method based on multi-sensor fusion of claim 1, wherein:
the plurality of sensors comprise an inertial measurement unit and three lidars,
the plurality of image collectors comprise six fisheye cameras, one collecting images of the ceiling and the other five evenly distributed in a horizontal ring,
in the image acquisition step, the inertial measurement unit and the lidars autonomously sense the motion state of the measuring vehicle and determine its geographic position and accurate pose in environments without GNSS signals, while the fisheye cameras take pictures along the route to acquire multi-station image information.
3. The indoor space panoramic image generation method based on multi-sensor fusion of claim 2, wherein:
in the depth map generation step, the relative poses of the inertial measurement unit, the lidars, and the fisheye cameras are determined through calibration between camera pairs, between cameras and lidars, between the lidars and the inertial measurement unit, and between lidar pairs, and are then unified into one coordinate system to realize the fusion of image and point cloud, thereby obtaining the depth map corresponding to each image.
4. The indoor space panoramic image generation method based on multi-sensor fusion of claim 3, wherein:
in the image registration step, a virtual image space coordinate system is established from the six fisheye cameras; the converted depth value is used as the sphere radius to project each image onto the sphere, a mapping is established between the sphere and the panoramic image, and each image is registered to the panoramic image region to obtain the corresponding registered image.
5. The indoor space panoramic image generation method based on multi-sensor fusion of claim 4, wherein:
in the bottom image filling step, the distance from a bottom hole to the virtual optical center is obtained by trigonometry from the vertical distance between the virtual optical center of the virtual image space coordinate system and the ground; the three-dimensional coordinates of the hole are then computed and projected into the fisheye cameras of the previous and next stations to retrieve corresponding images for filling the bottom.
6. The indoor space panoramic image generation method based on multi-sensor fusion of claim 5, wherein:
in the image stitching and fusion step, an optimal seam acquisition step and an image fusion step are executed,
the optimal seam acquisition step establishes a graph-cut model over the overlapping area of the images, takes image color, gradient, and texture as energy terms, and obtains the cut line with the lowest energy as the optimal seam,
and the image fusion step performs image fusion near the optimal seam: Laplacian pyramids of the images are respectively established, the corresponding pyramid levels are then merged over the overlapping area, and the merged pyramid is finally subjected to inverse Laplacian transformation to obtain the final fused image as the panoramic image.
7. The indoor space panoramic image generation method based on multi-sensor fusion of claim 6, wherein:
in the optimal seam acquisition step, for the overlapping area:
first, the energy of each pixel is computed by fusing color, gradient, and texture information;
second, a graph-cut model is constructed in which each pixel is a node with four neighbors, namely the pixels above, below, left, and right of it;
and finally, the constructed graph is optimized with the graph-cut model, and the cut line with the lowest energy is found as the optimal seam.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010059963.4A | 2020-01-19 | 2020-01-19 | Indoor space panoramic image generation method based on multi-sensor fusion |

Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN111275750A | 2020-06-12 |
| CN111275750B | 2022-05-13 |

Family: ID=71002009

Family Applications (1)

| Application Number | Status | Title |
|---|---|---|
| CN202010059963.4A | Active (granted as CN111275750B) | Indoor space panoramic image generation method based on multi-sensor fusion |

Country Status (1)

| Country | Publication |
|---|---|
| CN | CN111275750B |
Cited By (14)

| Publication Number | Priority Date | Publication Date | Title |
|---|---|---|---|
| CN112085653A | 2020-08-07 | 2020-12-15 | Parallax image splicing method based on depth of field compensation |
| CN112308778A | 2020-10-16 | 2021-02-02 | Method and terminal for assisting panoramic camera splicing by utilizing spatial three-dimensional information |
| CN112927281A | 2021-04-06 | 2021-06-08 | Depth detection method, depth detection device, storage medium, and electronic apparatus |
| CN112990373A | 2021-04-28 | 2021-06-18 | Convolution twin point network blade profile splicing system based on multi-scale feature fusion |
| CN113240615A | 2021-05-20 | 2021-08-10 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
| CN113720852A | 2021-08-16 | 2021-11-30 | Multi-camera image acquisition monitoring device |
| CN113920270A | 2021-12-15 | 2022-01-11 | Layout reconstruction method and system based on multi-view panorama |
| CN114066723A | 2021-11-11 | 2022-02-18 | Equipment detection method, device and storage medium |
| CN114078325A | 2020-08-19 | 2022-02-22 | Multi-perception system registration method and device, computer equipment and storage medium |
| CN114079768A | 2020-08-18 | 2022-02-22 | Image definition testing method and device |
| WO2022083118A1 | 2020-10-23 | 2022-04-28 | Data processing method and related device |
| CN114757834A | 2022-06-16 | 2022-07-15 | Panoramic image processing method and panoramic image processing device |
| CN114943940A | 2022-07-26 | 2022-08-26 | Method, equipment and storage medium for visually monitoring vehicles in tunnel |
| CN115291767A | 2022-08-01 | 2022-11-04 | Control method and device of Internet of things equipment, electronic equipment and storage medium |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130106849A1 (en) * | 2011-11-01 | 2013-05-02 | Samsung Electronics Co., Ltd. | Image processing apparatus and method |
US20140285486A1 (en) * | 2013-03-20 | 2014-09-25 | Siemens Product Lifecycle Management Software Inc. | Image-based 3d panorama |
CN105488775A (en) * | 2014-10-09 | 2016-04-13 | 东北大学 | Cylindrical panorama generation device and method based on six surround-view cameras |
CN105931234A (en) * | 2016-04-19 | 2016-09-07 | 东北林业大学 | Fusion and registration method for terrestrial three-dimensional laser scanning point clouds and images |
CN106647148A (en) * | 2017-01-25 | 2017-05-10 | 成都中信华瑞科技有限公司 | Device for obtaining panoramic pictures and assembly method thereof |
CN106681330A (en) * | 2017-01-25 | 2017-05-17 | 北京航空航天大学 | Robot navigation method and device based on multi-sensor data fusion |
CN107292965A (en) * | 2017-08-03 | 2017-10-24 | 北京航空航天大学青岛研究院 | Mutual occlusion processing method based on depth image data streams |
CN108198248A (en) * | 2018-01-18 | 2018-06-22 | 维森软件技术(上海)有限公司 | 3D display method for vehicle underbody images |
CN108765475A (en) * | 2018-05-25 | 2018-11-06 | 厦门大学 | Building three-dimensional point cloud registration method based on deep learning |
CN109115186A (en) * | 2018-09-03 | 2019-01-01 | 山东科技大学 | 360° measurable panoramic image generation method for vehicle-mounted mobile measurement systems |
CN109360150A (en) * | 2018-09-27 | 2019-02-19 | 轻客小觅智能科技(北京)有限公司 | Panoramic depth map stitching method and device based on a depth camera |
CN109600556A (en) * | 2019-02-18 | 2019-04-09 | 武汉大学 | High-quality precision omnidirectional imaging system and method based on an SLR camera |
CN110415342A (en) * | 2019-08-02 | 2019-11-05 | 深圳市唯特视科技有限公司 | Three-dimensional point cloud reconstruction device and method based on multiple fused sensors |
- 2020-01-19: Application CN202010059963.4A filed in China; granted as patent CN111275750B (status: Active)
Non-Patent Citations (2)
Title |
---|
QIAO WU et al.: "Visual and LiDAR-based for the mobile 3D mapping", 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO) * |
LIU Jizhong et al.: "Indoor three-dimensional environment creation for a mobile robot based on the Kinect sensor", Journal of Guangxi University (Natural Science Edition) * |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112085653A (en) * | 2020-08-07 | 2020-12-15 | 四川九洲电器集团有限责任公司 | Parallax image splicing method based on depth of field compensation |
CN114079768B (en) * | 2020-08-18 | 2023-12-05 | 杭州海康汽车软件有限公司 | Image definition testing method and device |
CN114079768A (en) * | 2020-08-18 | 2022-02-22 | 杭州海康汽车软件有限公司 | Image definition testing method and device |
CN114078325A (en) * | 2020-08-19 | 2022-02-22 | 北京万集科技股份有限公司 | Multi-perception system registration method and device, computer equipment and storage medium |
CN114078325B (en) * | 2020-08-19 | 2023-09-05 | 北京万集科技股份有限公司 | Multi-perception system registration method, device, computer equipment and storage medium |
CN112308778A (en) * | 2020-10-16 | 2021-02-02 | 香港理工大学深圳研究院 | Method and terminal for assisting panoramic camera splicing by utilizing spatial three-dimensional information |
CN112308778B (en) * | 2020-10-16 | 2021-08-10 | 香港理工大学深圳研究院 | Method and terminal for assisting panoramic camera splicing by utilizing spatial three-dimensional information |
WO2022083118A1 (en) * | 2020-10-23 | 2022-04-28 | 华为技术有限公司 | Data processing method and related device |
CN112927281A (en) * | 2021-04-06 | 2021-06-08 | Oppo广东移动通信有限公司 | Depth detection method, depth detection device, storage medium, and electronic apparatus |
CN112990373B (en) * | 2021-04-28 | 2021-08-03 | 四川大学 | Convolution twin point network blade profile splicing system based on multi-scale feature fusion |
CN112990373A (en) * | 2021-04-28 | 2021-06-18 | 四川大学 | Convolution twin point network blade profile splicing system based on multi-scale feature fusion |
CN113240615B (en) * | 2021-05-20 | 2022-06-07 | 北京城市网邻信息技术有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN113240615A (en) * | 2021-05-20 | 2021-08-10 | 北京城市网邻信息技术有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN113720852A (en) * | 2021-08-16 | 2021-11-30 | 中国飞机强度研究所 | Multi-camera image acquisition monitoring device |
CN114066723A (en) * | 2021-11-11 | 2022-02-18 | 贝壳找房(北京)科技有限公司 | Equipment detection method, device and storage medium |
CN113920270A (en) * | 2021-12-15 | 2022-01-11 | 深圳市其域创新科技有限公司 | Layout reconstruction method and system based on multi-view panorama |
CN114757834A (en) * | 2022-06-16 | 2022-07-15 | 北京建筑大学 | Panoramic image processing method and panoramic image processing device |
CN114943940A (en) * | 2022-07-26 | 2022-08-26 | 山东金宇信息科技集团有限公司 | Method, equipment and storage medium for visually monitoring vehicles in tunnel |
CN115291767A (en) * | 2022-08-01 | 2022-11-04 | 北京奇岱松科技有限公司 | Control method and device of Internet of things equipment, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111275750B (en) | 2022-05-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111275750B (en) | Indoor space panoramic image generation method based on multi-sensor fusion | |
CN112894832B (en) | Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium | |
CN112132972B (en) | Three-dimensional reconstruction method and system for fusing laser and image data | |
CN106327573B (en) | Outdoor scene three-dimensional modeling method for urban architecture | |
WO2019127445A1 (en) | Three-dimensional mapping method, apparatus and system, cloud platform, electronic device, and computer program product | |
KR100912715B1 (en) | Method and apparatus of digital photogrammetry by integrated modeling for different types of sensors | |
JP4685313B2 (en) | Method for processing passive volumetric image of any aspect | |
CN111060924B (en) | SLAM and target tracking method | |
CN112367514A (en) | Three-dimensional scene construction method, device and system and storage medium | |
CN112461210B (en) | Air-ground cooperative building surveying and mapping robot system and surveying and mapping method thereof | |
CN109472865B (en) | Free measurable panoramic reproduction method based on image model drawing | |
CN111141264B (en) | Unmanned aerial vehicle-based urban three-dimensional mapping method and system | |
CN109547769B (en) | Highway traffic dynamic three-dimensional digital scene acquisition and construction system and working method thereof | |
CN109709977B (en) | Method and device for planning movement track and moving object | |
CN112465732A (en) | Registration method of vehicle-mounted laser point cloud and sequence panoramic image | |
CN112862966B (en) | Method, device, equipment and storage medium for constructing surface three-dimensional model | |
US20230351625A1 (en) | A method for measuring the topography of an environment | |
KR20220064524A (en) | Method and system for visual localization | |
CN111197986B (en) | Real-time early warning and obstacle avoidance method for three-dimensional path of unmanned aerial vehicle | |
Zhao et al. | Alignment of continuous video onto 3D point clouds | |
CN114972672B (en) | Method, device, equipment and storage medium for constructing live-action three-dimensional model of power transmission line | |
CN113129422A (en) | Three-dimensional model construction method and device, storage medium and computer equipment | |
CN113345084B (en) | Three-dimensional modeling system and three-dimensional modeling method | |
CN116704112A (en) | 3D scanning system for object reconstruction | |
Frueh | Automated 3D model generation for urban environments |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |