CN113340266A - Indoor space surveying and mapping system and method - Google Patents

Indoor space surveying and mapping system and method

Info

Publication number
CN113340266A
Authority
CN
China
Prior art keywords: indoor space, point cloud, sub, voxel, data
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202110613003.2A
Other languages: Chinese (zh)
Inventors: 吴爽, 王运动, 朱晨霞
Current Assignee: Jiangsu Haojie Surveying And Mapping Technology Co ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original Assignee: Jiangsu Haojie Surveying And Mapping Technology Co ltd
Priority/filing date: 2021-06-02; publication date: 2021-09-03
Application filed by Jiangsu Haojie Surveying And Mapping Technology Co ltd

Classifications

    • G01C3/00 — Measuring distances in line of sight; optical rangefinders (G — Physics; G01 — Measuring, testing; G01C — Measuring distances, levels or bearings; surveying; navigation; gyroscopic instruments; photogrammetry or videogrammetry)
    • G01C11/00 — Photogrammetry or videogrammetry, e.g. stereogrammetry; photographic surveying
    • G01C11/04 — Interpretation of pictures
    • G01C21/12 — Navigation by dead reckoning, i.e. using measurements of speed or acceleration executed aboard the object being navigated
    • G06T17/10 — Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes (G06T — Image data processing or generation, in general)
    • G06T17/20 — Finite element generation, e.g. wire-frame surface description, tessellation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides an indoor space surveying and mapping system and method. The system comprises: a data acquisition module, used for scanning an indoor space, acquiring laser point cloud data and image data of the indoor space, and capturing the motion track of the data acquisition module; an identification module, used for fusing the laser point cloud data and the image data acquired at the same time based on the motion track, identifying the structures of the necessary measurement areas and unnecessary measurement areas in the indoor space, and obtaining scale data of each part of the indoor space structure; and an establishing module, used for automatically drawing a three-dimensional space model of the indoor space based on the scale data. The system is convenient, intelligent and fast; it saves time, manpower and material resources; the work is completed automatically by the instrument; and the practicability is strong.

Description

Indoor space surveying and mapping system and method
Technical Field
The invention relates to the field of intelligent surveying and mapping, in particular to an indoor space surveying and mapping system and method.
Background
With the continuous development of China's land and housing industry, indoor space surveying and mapping has found application in real estate. In existing building construction, decoration projects are usually carried out after the civil engineering work, and because the civil engineering output contains errors within a certain range, actual measurement is often needed for the concrete conditions of each construction site. This is inefficient and inaccurate: doors, windows, corner types and wall directions must be identified manually, the process is complicated, a three-dimensional space model of the indoor space cannot be generated automatically, and the requirements of modern decoration cannot be met.
Therefore, the invention provides an indoor space surveying and mapping system and a method thereof.
Disclosure of Invention
The invention provides an indoor space surveying and mapping system and method, in which a data acquisition module surveys the indoor space in clockwise order, the structures of the necessary measurement areas and unnecessary measurement areas in the indoor space are identified to obtain scale data of each part of the indoor space structure, and an establishing module automatically draws a three-dimensional space model of the indoor space from the measured data. The scheme is convenient, intelligent and fast, saves time, manpower and material resources, is completed automatically by the instrument, and has strong practicability.
The invention provides an indoor space surveying and mapping system based on a surveying instrument, which comprises:
the data acquisition module, used for scanning an indoor space, acquiring laser point cloud data and image data of the indoor space, and capturing the motion track of the data acquisition module;
the identification module, used for fusing the laser point cloud data and the image data acquired at the same time based on the motion track, identifying the structures of the necessary measurement areas and unnecessary measurement areas in the indoor space, and obtaining scale data of each part of the indoor space structure;
and the establishing module is used for automatically drawing a three-dimensional space model of the indoor space based on the scale data.
In one possible implementation,
the data acquisition module comprises:
the laser sampling unit is used for carrying out clockwise laser scanning on the indoor space to obtain laser point cloud data of the indoor space under a standard laser coordinate system;
the image sampling unit is used for carrying out clockwise image scanning on the indoor space and acquiring image data of the indoor space under a standard image coordinate system;
and the track capturing unit is used for capturing the motion track of the data acquisition module.
In one possible implementation,
the laser sampling unit includes:
the laser distance measuring sensor is used for measuring the distance between the laser point cloud data and the origin of the standard laser coordinate system;
the corner distance sensor is used for measuring the distance from the corner of the indoor space to the origin of the standard image coordinate system;
the image sampling unit includes:
the camera is used for shooting images of the indoor space;
a trajectory capture unit comprising:
the electronic compass is used for acquiring the rotation angle of the data acquisition module;
the gravity sensor is used for acquiring the acceleration direction of the data acquisition module;
and the processor is used for obtaining the motion trail of the data acquisition module according to the rotation angle and the acceleration direction.
In one possible implementation,
the identification module comprises:
a preprocessing unit: used for establishing a relationship model based on the relative positions of the laser sampling unit and the image sampling unit in the data acquisition module, thereby obtaining the conversion relation from the standard image coordinate system to the standard laser coordinate system, expressed as a mapping matrix;
a straight line detection unit: performing smoothing filtering on the image data of a preset time node with a preset Gaussian filter; computing the partial derivatives of the filtered image data in the x-axis and y-axis directions to obtain a gradient image of the image data; performing suppression processing on the gradient image to obtain a refined gradient amplitude array; and performing double thresholding on the refined gradient amplitude array:
setting points with gradient amplitude higher than the high threshold as edge points, and removing points with gradient amplitude lower than the low threshold as non-edge points; for points with gradient amplitude between the two thresholds, checking a preset number of adjacent points around each point: if any adjacent point is an edge point, the point is set as an edge point, otherwise it is set as a non-edge point and removed, thus obtaining the edge point set of the image data of the preset time node;
a calibration unit: mapping the laser point cloud data of the preset time node into the image data based on the mapping matrix to obtain binary images of a plurality of sub time nodes; reading the laser point cloud data of the preset time node to obtain the depth information of the laser point cloud; adding color to the binary images based on a preset addition algorithm and the depth information of the laser point cloud to obtain initial depth images; performing pixel expansion on the initial depth images based on their colored pixel points to obtain expanded images; finding the mapping point sets corresponding to the edge point sets in the expanded images; computing the coordinate offsets between the mapping point sets and the edge point sets; and calibrating the mapping matrix based on the coordinate offsets to obtain a calibration mapping matrix;
a fusion unit: fusing the laser point cloud data and the image data of each sub time node based on the calibration mapping matrix to obtain a plurality of sub color point clouds; performing coordinate decomposition on each sub color point cloud; obtaining the transformation relation between the coordinates of different sub color point clouds based on a preset conversion algorithm and the motion track of the data acquisition module; and fusing the sub color point clouds based on the transformation relation to obtain color point cloud data.
In one possible implementation,
the identification module further comprises:
a filtering unit: used for obtaining the Euclidean distance from each color sub-point cloud in the color point cloud data to its neighborhood, and judging, based on the relation between the Euclidean distance and a preset range, whether to filter out that color sub-point cloud:
if the Euclidean distance is not within the preset range, deleting the color sub-point cloud;
otherwise, the color sub-point cloud is reserved, so that the filtered color point cloud data is obtained;
a processing unit: constructing a root node based on the filtered color point cloud data, and segmenting the root node based on a preset segmentation algorithm and preset segmentation precision, so as to obtain a plurality of sub-terminal nodes, and distinguishing the sub-terminal nodes:
if the sub terminal node does not contain the colored sub point cloud, the sub terminal node is indicated to be a null node, and the sub terminal node is deleted;
if the sub terminal node contains the color sub point cloud, the sub terminal node is indicated to be a non-empty node, and the sub terminal node is reserved, so that a voxel model of the indoor space is obtained;
a calculation unit: performing surface fitting on the voxel model based on a preset fitting algorithm to obtain the surface structure of the voxel model; performing tangent plane estimation on each sub-voxel in the voxel model to obtain its sub tangent plane; computing the normal vector of each sub-voxel from its sub tangent plane based on a preset normal vector algorithm; computing the standard deviation of the normal vectors for each sub-voxel; and selecting the sub-voxel with the minimum standard deviation as the seed voxel;
a classification unit: if the included angle between the normal vector of a neighborhood voxel of the seed voxel and the normal vector of the seed voxel is smaller than a preset value, classifying the neighborhood voxel and the seed voxel into the same class of voxels;
if the included angle is not smaller than the preset value, taking that neighborhood voxel as the current seed voxel and judging whether the included angle between the normal vectors of its own neighborhood voxels and the normal vector of the current seed voxel is smaller than the preset value:
if so, classifying those neighborhood voxels and the current seed voxel into the same class of voxels; otherwise, taking each such neighborhood voxel as the new current seed voxel and continuing, until every sub-voxel in the voxel model is classified, thereby obtaining several groups of voxel clusters with similar characteristics;
an identification unit: selecting, within each voxel cluster, the voxels whose number of adjacent voxels is smaller than a preset value as sub-boundary voxels, thereby obtaining a boundary voxel set, and distinguishing the sub-voxels around each sub-boundary voxel with the sub-boundary voxel as center:
if the distance between the surrounding sub-voxels and the surface structure is not smaller than a preset value, rejecting the surrounding sub-voxels;
if the distance between the surrounding sub-voxels and the surface structure is smaller than a preset value, the surrounding sub-voxels and the sub-boundary voxels are classified into the same plane, and therefore a structure model of the indoor space is obtained;
and identifying the structural model based on a preset classifier to obtain wall information, door information and window information of the indoor space.
In one possible implementation,
the identification module is further configured to calculate a ratio of the actual wall length to the wall length in the image data according to the following formula:
(formula presented as an image in the original publication)
wherein D is the ratio of the actual wall length to the wall length in the image data; d is the top distance from the data acquisition module to the wall; d_p is the pixel length occupied by the wall as captured by the data acquisition module; ζ is the horizontal distance from the data acquisition module to the wall; θ is the horizontal offset angle of the data acquisition module relative to the wall, with value in [0°, 5°]; and j is the measurement accuracy of the data acquisition module, j = 0.98;
the identification module is further used for calculating and obtaining the actual length of each wall body according to the proportion of the actual wall body length to the wall body length in the image data and the pixel distance of each wall body in the image data.
In one possible implementation,
and the establishing module is used for automatically drawing a three-dimensional space model of the indoor space based on a preset construction algorithm and the actual length of each wall.
In one possible implementation,
the establishing module further comprises:
the first calculation unit is used for calculating the redundancy of the laser point cloud data in the indoor space according to the following formula:
(formula presented as an image in the original publication)
wherein β represents the redundancy of the laser point cloud data; i is the index of a straight line segment in the laser point cloud data within the preset time node; n is the total number of straight line segments in the laser point cloud data within the preset time node; f_i is the ratio of the length of the i-th straight line segment to the average length of the straight line segments in the laser point cloud data, with value range (0.8, 1); a symbol shown only as an image in the original represents the matching degree between the gravity center of the i-th straight line segment and the average gravity center of the straight line segments; θ_i represents the matching degree between the angle of the i-th straight line segment and the average angle of the straight line segments; a further symbol shown only as an image represents the ratio of the confidence of the i-th straight line segment to the average confidence of the straight line segments; and μ is the number of corners of the indoor space determined from the laser point cloud data, with value range (0, 20);
the second calculation unit is used for calculating the integrity of the laser point cloud data in the indoor space according to the following formula;
Figure BDA0003096738550000064
wherein gamma represents the integrity of laser point cloud data in an indoor space, and the value range is (0%, 100%); l represents the data capacity of the indoor space acquired based on the data acquisition module;
when the integrity of the laser point cloud data is larger than a preset value, indicating that the laser point cloud data in the indoor space is completely acquired;
and when the integrity of the laser point cloud data is not greater than a preset value, indicating that the laser point cloud data in the indoor space is not completely acquired, and continuing to acquire the laser point cloud data by the data acquisition module.
The invention provides an indoor space surveying and mapping method, which comprises the following steps:
the method comprises the following steps that 1, the indoor space is scanned, laser point cloud data and image data of the indoor space are obtained, and meanwhile, the motion track of a data acquisition module is captured;
step 2, fusing the laser point cloud data and the image data at the same time based on the motion track, and identifying the structures of an area which needs to be measured and an area which does not need to be measured in the indoor space to obtain scale data of each part of the structure of the indoor space;
and 3, automatically drawing a three-dimensional space model of the indoor space based on the scale data.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a block diagram of an indoor space mapping system in accordance with an embodiment of the present invention;
FIG. 2 is a block diagram of a data acquisition module in an embodiment of the present invention;
FIG. 3 is a block diagram of an identification module in an embodiment of the invention;
FIG. 4 is another block diagram of an identification module in an embodiment of the invention;
FIG. 5 is a block diagram of an establishing module in an embodiment of the invention;
fig. 6 is a flowchart of a method for indoor space mapping according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Example 1
An embodiment of the present invention provides an indoor space mapping system, as shown in fig. 1, including:
the data acquisition module, used for scanning an indoor space, acquiring laser point cloud data and image data of the indoor space, and capturing the motion track of the data acquisition module;
the identification module, used for fusing the laser point cloud data and the image data acquired at the same time based on the motion track, identifying the structures of the necessary measurement areas and unnecessary measurement areas in the indoor space, and obtaining scale data of each part of the indoor space structure;
and the establishing module is used for automatically drawing a three-dimensional space model of the indoor space based on the scale data.
In this embodiment, the data acquisition module is internally provided with an electronic compass, a laser ranging sensor, a gravity sensor, a corner distance sensor and a wireless transmitter.
In this embodiment, the necessary measurement areas are wall and corner areas;
in this embodiment, the unnecessary measurement area is a window area;
in this embodiment, the scale data is length data of each part structure;
the beneficial effect of above-mentioned scheme does: through discerning wall, turning door and window, corridor among the indoor space to measure, thereby draw the three-dimensional space model of indoor space according to each item measured data is automatic, convenient and intelligent is swift, saves time, the material resources of using manpower sparingly, all accomplish by the instrument is automatic, the practicality is stronger.
Example 2
An embodiment of the present invention provides an indoor space mapping system, as shown in fig. 2, including:
the data acquisition module comprises:
the laser sampling unit is used for carrying out clockwise laser scanning on the indoor space to obtain laser point cloud data of the indoor space under a standard laser coordinate system;
the image sampling unit is used for carrying out clockwise image scanning on the indoor space and acquiring image data of the indoor space under a standard image coordinate system;
and the track capturing unit is used for capturing the motion track of the data acquisition module.
The beneficial effect of the above scheme is as follows: performing the laser scanning and image scanning of the indoor space in clockwise order ensures the integrity of the data, and allows the laser point cloud data and the image data to be fused according to the motion track of the data acquisition module.
Example 3
An embodiment of the present invention provides an indoor space mapping system, as shown in fig. 2, including:
the laser sampling unit includes:
the laser distance measuring sensor is used for measuring the distance between the laser point cloud data and the origin of the standard laser coordinate system;
the corner distance sensor is used for measuring the distance from the corner of the indoor space to the origin of the standard image coordinate system;
the image sampling unit includes:
the camera is used for shooting images of the indoor space;
a trajectory capture unit comprising:
the electronic compass is used for acquiring the rotation angle of the data acquisition module;
the gravity sensor is used for acquiring the acceleration direction of the data acquisition module;
and the processor is used for obtaining the motion trail of the data acquisition module according to the rotation angle and the acceleration direction.
The beneficial effect of the above scheme is as follows: data acquisition through the laser ranging sensor, the corner distance sensor, the electronic compass and the gravity sensor ensures the accuracy of the data and improves its reliability.
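To make the trajectory capture concrete, the sketch below shows one minimal way the processor could integrate the compass rotation angle and the acceleration direction into a motion track by dead reckoning. It is an illustration under assumed inputs (fixed sampling interval, heading in degrees, scalar acceleration along the heading), not the patent's specified processor logic.

```python
import numpy as np

def dead_reckon(headings_deg, accel_mags, dt=0.1):
    """Integrate compass heading and acceleration magnitude into a 2-D
    motion track by double integration (dead reckoning)."""
    pos, vel = np.zeros(2), np.zeros(2)
    track = [pos.copy()]
    for heading, a in zip(headings_deg, accel_mags):
        theta = np.radians(heading)
        direction = np.array([np.cos(theta), np.sin(theta)])
        vel += a * direction * dt   # first integration: acceleration -> velocity
        pos += vel * dt             # second integration: velocity -> position
        track.append(pos.copy())
    return np.array(track)

# e.g. a module turning through 90 degrees while accelerating gently
track = dead_reckon(np.linspace(0, 90, 50), np.full(50, 0.05))
```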
Example 4
An embodiment of the present invention provides an indoor space mapping system, as shown in fig. 3, including:
the identification module comprises:
a preprocessing unit: used for establishing a relationship model based on the relative positions of the laser sampling unit and the image sampling unit in the data acquisition module, thereby obtaining the conversion relation from the standard image coordinate system to the standard laser coordinate system, expressed as a mapping matrix;
a straight line detection unit: performing smoothing filtering on the image data of a preset time node with a preset Gaussian filter; computing the partial derivatives of the filtered image data in the x-axis and y-axis directions to obtain a gradient image of the image data; performing suppression processing on the gradient image to obtain a refined gradient amplitude array; and performing double thresholding on the refined gradient amplitude array:
setting points with gradient amplitude higher than the high threshold as edge points, and removing points with gradient amplitude lower than the low threshold as non-edge points; for points with gradient amplitude between the two thresholds, checking a preset number of adjacent points around each point: if any adjacent point is an edge point, the point is set as an edge point, otherwise it is set as a non-edge point and removed, thus obtaining the edge point set of the image data of the preset time node;
a calibration unit: mapping the laser point cloud data of the preset time node into the image data based on the mapping matrix to obtain binary images of a plurality of sub time nodes; reading the laser point cloud data of the preset time node to obtain the depth information of the laser point cloud; adding color to the binary images based on a preset addition algorithm and the depth information of the laser point cloud to obtain initial depth images; performing pixel expansion on the initial depth images based on their colored pixel points to obtain expanded images; finding the mapping point sets corresponding to the edge point sets in the expanded images; computing the coordinate offsets between the mapping point sets and the edge point sets; and calibrating the mapping matrix based on the coordinate offsets to obtain a calibration mapping matrix;
a fusion unit: fusing the laser point cloud data and the image data of each sub time node based on the calibration mapping matrix to obtain a plurality of sub color point clouds; performing coordinate decomposition on each sub color point cloud; obtaining the transformation relation between the coordinates of different sub color point clouds based on a preset conversion algorithm and the motion track of the data acquisition module; and fusing the sub color point clouds based on the transformation relation to obtain color point cloud data.
In this embodiment, the relative position relationship is the relative position of the laser sampling unit and the image sampling unit in the data acquisition module;
in this embodiment, the relationship model is a coordinate conversion model established based on the relative position relationship between the laser sampling unit and the image sampling unit;
in this embodiment, the conversion relation is the coordinate transformation from the standard image coordinate system to the standard laser coordinate system;
in this embodiment, the mapping matrix is in the form of a matrix of a transformation relationship;
in this embodiment, the predetermined number is 8 neighboring points around;
in this embodiment, the edge point set is a set formed by edge points;
in this embodiment, the preset time node is the second time of each sampling period;
in this embodiment, the extended image is an image obtained by performing pixel extension on the initial depth image based on the added pixel points in the initial depth image;
in this embodiment, the mapping point set is a point set corresponding to the edge point set after mapping the edge point set to the standard laser coordinate system;
in this embodiment, the calibration mapping matrix is the calibrated coordinate transformation from the standard image coordinate system to the standard laser coordinate system;
in this embodiment, the partial derivatives of the image data in the x-axis direction and the y-axis direction are gradient change rates of the image data in the x-axis and the y-axis, respectively;
in the present embodiment, the suppression processing is to subdivide an area where the gradient change rate exceeds a preset threshold;
in this embodiment, the depth information is distance information between the laser point cloud and the data acquisition module;
in this embodiment, the preset addition algorithm is a kalman color addition algorithm;
in this embodiment, pixel expansion propagates color from the colored pixels to the surrounding uncolored pixels;
in this embodiment, the offset is the offset between the result obtained based on the mapping algorithm and the actual mapping;
in this embodiment, the color point cloud is laser point cloud data including color information;
the working principle and the beneficial effects of the scheme are as follows: the laser point cloud data and the image data are fused by preprocessing, straight line detection, calibration and fusion of the laser points and the image pixels, so that necessary preconditions are provided for subsequent processes.
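The double thresholding performed by the straight line detection unit is the hysteresis step familiar from Canny edge detection. The numpy sketch below illustrates just that step; the threshold values are placeholders, and the 8-neighbour window follows this embodiment's preset number of eight adjacent points.

```python
import numpy as np

def hysteresis(grad_mag, low, high):
    """Double-threshold a refined gradient amplitude array: values >= high are
    edge points, values < low are removed, and in-between values become edge
    points only if one of their 8 neighbours is already an edge point."""
    edges = grad_mag >= high
    weak = (grad_mag >= low) & (grad_mag < high)
    changed = True
    while changed:  # propagate edge status until stable
        changed = False
        for y, x in zip(*np.nonzero(weak & ~edges)):
            if edges[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2].any():
                edges[y, x] = True
                changed = True
    return edges

edge_points = hysteresis(np.random.rand(64, 64), low=0.6, high=0.9)
```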
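The calibration unit's correction can likewise be pictured as: project the laser points through the current mapping matrix, measure the coordinate offset against the matched edge points, and fold that offset back into the matrix. The 3x4 homogeneous form and the mean-offset correction below are assumptions for illustration; the patent does not spell out the matrix layout or the offset computation.

```python
import numpy as np

def project(points_xyz, M):
    """Map Nx3 laser points to pixel coordinates via a 3x4 mapping matrix."""
    homo = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    uvw = homo @ M.T
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

def calibrate_mapping(M, points_xyz, matched_edge_px):
    """Shift M so projected points line up, on average, with the edge points."""
    offset = (matched_edge_px - project(points_xyz, M)).mean(axis=0)
    M_cal = M.copy()
    M_cal[0] += offset[0] * M[2]  # adding k * (row 3) to row 1 shifts u by k
    M_cal[1] += offset[1] * M[2]  # adding k * (row 3) to row 2 shifts v by k
    return M_cal
```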
Example 5
An embodiment of the present invention provides an indoor space mapping system, as shown in fig. 4, including:
the identification module further comprises:
a filtering unit: used for obtaining the Euclidean distance from each color sub-point cloud in the color point cloud data to its neighborhood, and judging, based on the relation between the Euclidean distance and a preset range, whether to filter out that color sub-point cloud:
if the Euclidean distance is not within the preset range, deleting the color sub-point cloud;
otherwise, the color sub-point cloud is reserved, so that the filtered color point cloud data is obtained;
a processing unit: constructing a root node based on the filtered color point cloud data, and segmenting the root node based on a preset segmentation algorithm and preset segmentation precision, so as to obtain a plurality of sub-terminal nodes, and distinguishing the sub-terminal nodes:
if the sub terminal node does not contain the colored sub point cloud, the sub terminal node is indicated to be a null node, and the sub terminal node is deleted;
if the sub terminal node contains the color sub point cloud, the sub terminal node is indicated to be a non-empty node, and the sub terminal node is reserved, so that a voxel model of the indoor space is obtained;
a calculation unit: performing surface fitting on the voxel model based on a preset fitting algorithm to obtain the surface structure of the voxel model; performing tangent plane estimation on each sub-voxel in the voxel model to obtain its sub tangent plane; computing the normal vector of each sub-voxel from its sub tangent plane based on a preset normal vector algorithm; computing the standard deviation of the normal vectors for each sub-voxel; and selecting the sub-voxel with the minimum standard deviation as the seed voxel;
a classification unit: if the included angle between the normal vector of a neighborhood voxel of the seed voxel and the normal vector of the seed voxel is smaller than a preset value, classifying the neighborhood voxel and the seed voxel into the same class of voxels;
if the included angle is not smaller than the preset value, taking that neighborhood voxel as the current seed voxel and judging whether the included angle between the normal vectors of its own neighborhood voxels and the normal vector of the current seed voxel is smaller than the preset value:
if so, classifying those neighborhood voxels and the current seed voxel into the same class of voxels; otherwise, taking each such neighborhood voxel as the new current seed voxel and continuing, until every sub-voxel in the voxel model is classified, thereby obtaining several groups of voxel clusters with similar characteristics;
an identification unit: selecting, within each voxel cluster, the voxels whose number of adjacent voxels is smaller than a preset value as sub-boundary voxels, thereby obtaining a boundary voxel set, and distinguishing the sub-voxels around each sub-boundary voxel with the sub-boundary voxel as center:
if the distance between the surrounding sub-voxels and the surface structure is not smaller than a preset value, rejecting the surrounding sub-voxels;
if the distance between the surrounding sub-voxels and the surface structure is smaller than a preset value, the surrounding sub-voxels and the sub-boundary voxels are classified into the same plane, and therefore a structure model of the indoor space is obtained;
and identifying the structural model based on a preset classifier to obtain wall information, door information and window information of the indoor space.
In this embodiment, the euclidean distance is the distance from the color sub-point cloud to the color sub-point cloud neighborhood in the three-dimensional space;
in this embodiment, the root node is a cube range containing all color point cloud data;
in this embodiment, a sub terminal node is a sub-range obtained by segmenting the root node at the preset segmentation precision;
in this embodiment, the voxel model is an indoor space model formed by voxels;
in this embodiment, the surface structure is a structure of a surface of the voxel model obtained by surface fitting the voxel model;
in this embodiment, the sub-voxels are non-null terminal nodes;
in this embodiment, the section estimation is to estimate the section of the sub-voxel;
in this embodiment, the sub-tangent plane is a tangent plane of the sub-voxel obtained by estimating the sub-voxel;
in this embodiment, the seed voxel is the sub-voxel with the smallest standard deviation of the normal vector;
in this embodiment, the same type of voxels are voxels whose included angle with the normal vector of the seed voxel is smaller than a preset value;
in this embodiment, the neighborhood voxels are sub-voxels around the current seed voxel;
in this embodiment, the similar features are similar in the direction of the normal vector;
in this embodiment, the sub-boundary voxels are sub-voxels at the boundary;
in this embodiment, the wall information is the direction and length of each wall, and the door information and window information are the positions of the doors and windows respectively;
in this embodiment, the preset fitting algorithm is a plane fitting algorithm;
in the present embodiment, the normal vector is a vector perpendicular to the tangent plane;
in this embodiment, the voxel clusters are several category sets formed by different voxels;
in this embodiment, the preset classifiers include a wall surface classifier, a corner type classifier, a door classifier, and a window classifier;
the working principle and the beneficial effects of the scheme are as follows: through the filtering unit, the processing unit, the calculating unit, the classifying unit and the identifying unit, the wall body direction, the wall body corner type, the door and the window of the indoor space are identified, and basic conditions are provided for subsequently constructing a three-dimensional model of the indoor space.
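A brute-force sketch of the filtering unit's neighbourhood test: keep a point only if its mean Euclidean distance to its nearest neighbours falls inside the preset range. The neighbourhood size k and the range are placeholder values, and a KD-tree would replace the full distance matrix at realistic point counts.

```python
import numpy as np

def filter_point_cloud(points, k=8, preset_range=(0.0, 0.05)):
    """Keep points whose mean distance to their k nearest neighbours
    lies inside the preset range (brute force, for illustration)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)                        # column 0 is the self-distance (0)
    mean_knn = d[:, 1:k + 1].mean(axis=1)
    lo, hi = preset_range
    return points[(mean_knn >= lo) & (mean_knn <= hi)]
```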
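The classification unit's growth rule can also be sketched compactly: a neighbourhood voxel joins the current seed's cluster when their unit normal vectors agree within a preset angle, and seeds are taken in order of increasing normal spread. The adjacency radius, angle threshold, and scalar spread measure are illustrative assumptions.

```python
import numpy as np
from collections import deque

def region_grow(centers, normals, radius=0.11, angle_deg=10.0):
    """Cluster voxels (given centers and unit normals) by region growing."""
    n = len(centers)
    cos_thr = np.cos(np.radians(angle_deg))
    dists = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    adj = (dists > 0) & (dists < radius)         # neighbourhood voxels
    spread = np.array([normals[adj[i]].std() if adj[i].any() else np.inf
                       for i in range(n)])       # crude normal-spread score
    labels = np.full(n, -1)
    cluster = 0
    for seed in np.argsort(spread):              # smallest spread seeds first
        if labels[seed] != -1:
            continue
        labels[seed] = cluster
        queue = deque([seed])
        while queue:
            cur = queue.popleft()
            for nb in np.nonzero(adj[cur])[0]:
                if labels[nb] == -1 and normals[nb] @ normals[cur] >= cos_thr:
                    labels[nb] = cluster         # same class of voxel
                    queue.append(nb)
        cluster += 1
    return labels
```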
Example 6
An embodiment of the present invention provides an indoor space mapping system, as shown in fig. 3, including:
the identification module is further configured to calculate a ratio of the actual wall length to the wall length in the image data according to the following formula:
(formula presented as an image in the original publication)
wherein D is the ratio of the actual wall length to the wall length in the image data; d is the top distance from the data acquisition module to the wall; d_p is the pixel length occupied by the wall as captured by the data acquisition module; ζ is the horizontal distance from the data acquisition module to the wall; θ is the horizontal offset angle of the data acquisition module relative to the wall, with value in [0°, 5°]; and j is the measurement accuracy of the data acquisition module, j = 0.98;
the identification module is further used for calculating and obtaining the actual length of each wall body according to the proportion of the actual wall body length to the wall body length in the image data and the pixel distance of each wall body in the image data.
The working principle and beneficial effects of the scheme are as follows: the actual length of each wall is obtained from the pixel distance of that wall in the image data via the calculated ratio of actual wall length to wall length in the image data, which simplifies the calculation process and reduces the amount of computation.
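Since the ratio formula survives only as an image in the published text, the sketch below shows just its stated application: scaling each wall's pixel distance by the ratio D to recover the actual length. The numeric values are hypothetical.

```python
def actual_wall_length(ratio_D, pixel_distance):
    """Scale a wall's pixel distance in the image data to its actual length."""
    return ratio_D * pixel_distance

print(actual_wall_length(ratio_D=0.0125, pixel_distance=320))  # -> 4.0 (metres)
```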
Example 7
An embodiment of the present invention provides an indoor space mapping system, as shown in fig. 5, including:
and the establishing module is used for automatically drawing a three-dimensional space model of the indoor space based on a preset construction algorithm and the actual length of each wall.
The beneficial effect of the above scheme is as follows: the establishing module automatically draws a three-dimensional space model of the indoor space from the obtained data; this is convenient, intelligent and fast, saves time, manpower and material resources, is completed automatically by the instrument, and has strong practicability.
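The preset construction algorithm itself is not disclosed. As one plausible stand-in, drawing the model can be pictured as extruding the measured floor-plan corner loop to ceiling height; the function name, fixed height, and quad representation below are all assumptions for illustration.

```python
import numpy as np

def extrude_walls(corners_2d, height=2.8):
    """Turn an ordered loop of floor-plan corners into 3-D wall quads."""
    pts = np.asarray(corners_2d, dtype=float)
    quads = []
    for a, b in zip(pts, np.roll(pts, -1, axis=0)):   # consecutive corner pairs
        quads.append(np.array([[*a, 0.0], [*b, 0.0], [*b, height], [*a, height]]))
    return quads

room = extrude_walls([(0, 0), (4, 0), (4, 3), (0, 3)])
print(len(room), "wall faces")  # -> 4 wall faces
```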
Example 8
An embodiment of the present invention provides an indoor space mapping system, as shown in fig. 5, including:
the model building module further comprises:
the first calculation unit is used for calculating the redundancy of the laser point cloud data in the indoor space according to the following formula:
(formula presented as an image in the original publication)
wherein β represents the redundancy of the laser point cloud data; i is the index of a straight line segment in the laser point cloud data within the preset time node; n is the total number of straight line segments in the laser point cloud data within the preset time node; f_i is the ratio of the length of the i-th straight line segment to the average length of the straight line segments in the laser point cloud data, with value range (0.8, 1); a symbol shown only as an image in the original represents the matching degree between the gravity center of the i-th straight line segment and the average gravity center of the straight line segments; θ_i represents the matching degree between the angle of the i-th straight line segment and the average angle of the straight line segments; a further symbol shown only as an image represents the ratio of the confidence of the i-th straight line segment to the average confidence of the straight line segments; and μ is the number of corners of the indoor space determined from the laser point cloud data, with value range (0, 20);
the second calculation unit is used for calculating the integrity of the laser point cloud data in the indoor space according to the following formula;
Figure BDA0003096738550000154
wherein gamma represents the integrity of laser point cloud data in an indoor space, and the value range is (0%, 100%); l represents the data capacity of the indoor space acquired based on the data acquisition module;
when the integrity of the laser point cloud data is larger than a preset value, indicating that the laser point cloud data in the indoor space is completely acquired;
and when the integrity of the laser point cloud data is not greater than a preset value, indicating that the laser point cloud data in the indoor space is not completely acquired, and continuing to acquire the laser point cloud data by the data acquisition module.
In this embodiment, the straight line segment is a boundary line of the wall surface in the laser point cloud data;
the working principle and the beneficial effects of the scheme are as follows: by calculating the redundancy and the integrity of the laser point cloud data in the indoor space, whether the acquisition is needed to be continued is judged, the integrity of the indoor space data acquisition is ensured, and the quality of the established three-dimensional model is ensured.
Example 9
An embodiment of the present invention provides an indoor space mapping method, as shown in fig. 6, including:
the method comprises the following steps that 1, the indoor space is scanned, laser point cloud data and image data of the indoor space are obtained, and meanwhile, the motion track of a data acquisition module is captured;
step 2, fusing the laser point cloud data and the image data at the same time based on the motion track, and identifying the structures of an area which needs to be measured and an area which does not need to be measured in the indoor space to obtain scale data of each part of the structure of the indoor space;
and 3, automatically drawing a three-dimensional space model of the indoor space based on the scale data.
The working principle and the advantageous effects of the above technical solutions have been described in the system claims, and are not described herein again.
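Read end to end, the three steps form a short pipeline. The sketch below only names the stages; scanner, identify_structures and draw_3d_model are hypothetical placeholders for the modules described above.

```python
def survey_indoor_space(scanner):
    """Steps 1-3 of the method as a pipeline (hypothetical helpers)."""
    clouds, images, track = scanner.scan()                   # step 1: scan and capture the motion track
    scale_data = identify_structures(clouds, images, track)  # step 2: fuse data and identify structures
    return draw_3d_model(scale_data)                         # step 3: draw the 3-D space model
```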
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (9)

1. An indoor space mapping system, comprising:
the data acquisition module, used for scanning an indoor space, acquiring laser point cloud data and image data of the indoor space, and capturing the motion track of the data acquisition module;
the identification module, used for fusing the laser point cloud data and the image data acquired at the same time based on the motion track, identifying the structures of the necessary measurement areas and unnecessary measurement areas in the indoor space, and obtaining scale data of each part of the indoor space structure;
and the establishing module is used for automatically drawing a three-dimensional space model of the indoor space based on the scale data.
2. An indoor space mapping system according to claim 1, wherein the data acquisition module comprises:
the laser sampling unit is used for carrying out clockwise laser scanning on the indoor space to obtain laser point cloud data of the indoor space under a standard laser coordinate system;
the image sampling unit is used for carrying out clockwise image scanning on the indoor space and acquiring image data of the indoor space under a standard image coordinate system;
and the track capturing unit is used for capturing the motion track of the data acquisition module.
3. An indoor space mapping system according to claim 1, wherein the laser sampling unit comprises:
the laser distance measuring sensor is used for measuring the distance between the laser point cloud data and the origin of the standard laser coordinate system;
the corner distance sensor is used for measuring the distance from the corner of the indoor space to the origin of the standard image coordinate system;
the image sampling unit includes:
the camera is used for shooting images of the indoor space;
a trajectory capture unit comprising:
the electronic compass is used for acquiring the rotation angle of the data acquisition module;
the gravity sensor is used for acquiring the acceleration direction of the data acquisition module;
and the processor is used for obtaining the motion trail of the data acquisition module according to the rotation angle and the acceleration direction.
4. An indoor space mapping system according to claim 1, wherein the identification module comprises:
a preprocessing unit: used for establishing a relationship model based on the relative positions of the laser sampling unit and the image sampling unit in the data acquisition module, thereby obtaining the conversion relation from the standard image coordinate system to the standard laser coordinate system, expressed as a mapping matrix;
a straight line detection unit: performing smoothing filtering on the image data of a preset time node with a preset Gaussian filter; computing the partial derivatives of the filtered image data in the x-axis and y-axis directions to obtain a gradient image of the image data; performing suppression processing on the gradient image to obtain a refined gradient amplitude array; and performing double thresholding on the refined gradient amplitude array:
setting points with gradient amplitude higher than the high threshold as edge points, and removing points with gradient amplitude lower than the low threshold as non-edge points; for points with gradient amplitude between the two thresholds, checking a preset number of adjacent points around each point: if any adjacent point is an edge point, the point is set as an edge point, otherwise it is set as a non-edge point and removed, thus obtaining the edge point set of the image data of the preset time node;
a calibration unit: mapping the laser point cloud data of the preset time node into the image data based on the mapping matrix to obtain binary images of a plurality of sub time nodes; reading the laser point cloud data of the preset time node to obtain the depth information of the laser point cloud; adding color to the binary images based on a preset addition algorithm and the depth information of the laser point cloud to obtain initial depth images; performing pixel expansion on the initial depth images based on their colored pixel points to obtain expanded images; finding the mapping point sets corresponding to the edge point sets in the expanded images; computing the coordinate offsets between the mapping point sets and the edge point sets; and calibrating the mapping matrix based on the coordinate offsets to obtain a calibration mapping matrix;
a fusion unit: fusing the laser point cloud data and the image data of each sub time node based on the calibration mapping matrix to obtain a plurality of sub color point clouds; performing coordinate decomposition on each sub color point cloud; obtaining the transformation relation between the coordinates of different sub color point clouds based on a preset conversion algorithm and the motion track of the data acquisition module; and fusing the sub color point clouds based on the transformation relation to obtain color point cloud data.
5. An indoor space mapping system according to claim 1, wherein the identification module further comprises:
a filtering unit: used for obtaining the Euclidean distance from each color sub-point cloud in the color point cloud data to its neighborhood, and judging, based on the relation between the Euclidean distance and a preset range, whether to filter out that color sub-point cloud:
if the Euclidean distance is not within the preset range, deleting the color sub-point cloud;
otherwise, the color sub-point cloud is reserved, so that the filtered color point cloud data is obtained;
a processing unit: constructing a root node based on the filtered color point cloud data, and segmenting the root node based on a preset segmentation algorithm and preset segmentation precision, so as to obtain a plurality of sub-terminal nodes, and distinguishing the sub-terminal nodes:
if the sub terminal node does not contain the colored sub point cloud, the sub terminal node is indicated to be a null node, and the sub terminal node is deleted;
if the sub terminal node contains the color sub point cloud, the sub terminal node is indicated to be a non-empty node, and the sub terminal node is reserved, so that a voxel model of the indoor space is obtained;
a calculation unit: performing surface fitting on the voxel model based on a preset fitting algorithm to obtain the surface structure of the voxel model; performing tangent plane estimation on each sub-voxel in the voxel model to obtain its sub tangent plane; computing the normal vector of each sub-voxel from its sub tangent plane based on a preset normal vector algorithm; computing the standard deviation of the normal vectors for each sub-voxel; and selecting the sub-voxel with the minimum standard deviation as the seed voxel;
a classification unit: if the included angle between the normal vector of a neighborhood voxel of the seed voxel and the normal vector of the seed voxel is smaller than a preset value, classifying the neighborhood voxel and the seed voxel into the same class of voxels;
if the included angle is not smaller than the preset value, taking that neighborhood voxel as the current seed voxel and judging whether the included angle between the normal vectors of its own neighborhood voxels and the normal vector of the current seed voxel is smaller than the preset value:
if so, classifying those neighborhood voxels and the current seed voxel into the same class of voxels; otherwise, taking each such neighborhood voxel as the new current seed voxel and continuing, until every sub-voxel in the voxel model is classified, thereby obtaining several groups of voxel clusters with similar characteristics;
an identification unit: selecting, within each voxel cluster, the voxels whose number of adjacent voxels is smaller than a preset value as sub-boundary voxels, thereby obtaining a boundary voxel set, and distinguishing the sub-voxels around each sub-boundary voxel with the sub-boundary voxel as center:
if the distance between the surrounding sub-voxels and the surface structure is not smaller than a preset value, rejecting the surrounding sub-voxels;
if the distance between the surrounding sub-voxels and the surface structure is smaller than a preset value, the surrounding sub-voxels and the sub-boundary voxels are classified into the same plane, and therefore a structure model of the indoor space is obtained;
and identifying the structural model based on a preset classifier to obtain wall information, door information and window information of the indoor space.
6. The indoor space mapping system of claim 1, wherein the identification module is further configured to calculate a ratio of the actual wall length to the wall length in the image data according to the following formula:
(formula presented as an image in the original publication)
wherein D is the ratio of the actual wall length to the wall length in the image data; d is the top distance from the data acquisition module to the wall; d_p is the pixel length occupied by the wall as captured by the data acquisition module; ζ is the horizontal distance from the data acquisition module to the wall; θ is the horizontal offset angle of the data acquisition module relative to the wall, with value in [0°, 5°]; and j is the measurement accuracy of the data acquisition module, j = 0.98;
the identification module is further used for calculating and obtaining the actual length of each wall body according to the proportion of the actual wall body length to the wall body length in the image data and the pixel distance of each wall body in the image data.
7. The indoor space surveying and mapping system of claim 1, wherein the building module is configured to automatically draw a three-dimensional space model of the indoor space based on a preset construction algorithm and an actual length of each of the walls.
8. An indoor space mapping system as claimed in claim 1, wherein the establishing module further comprises:
the first calculation unit is used for calculating the redundancy of the laser point cloud data in the indoor space according to the following formula:
(formula presented as an image in the original publication)
wherein β represents the redundancy of the laser point cloud data; i is the index of a straight line segment in the laser point cloud data within the preset time node; n is the total number of straight line segments in the laser point cloud data within the preset time node; f_i is the ratio of the length of the i-th straight line segment to the average length of the straight line segments in the laser point cloud data, with value range (0.8, 1); a symbol shown only as an image in the original represents the matching degree between the gravity center of the i-th straight line segment and the average gravity center of the straight line segments; θ_i represents the matching degree between the angle of the i-th straight line segment and the average angle of the straight line segments; a further symbol shown only as an image represents the ratio of the confidence of the i-th straight line segment to the average confidence of the straight line segments; and μ is the number of corners of the indoor space determined from the laser point cloud data, with value range (0, 20);
the second calculation unit is used for calculating the integrity of the laser point cloud data in the indoor space according to the following formula;
Figure FDA0003096738540000054
wherein gamma represents the integrity of laser point cloud data in an indoor space, and the value range is (0%, 100%); l represents the data capacity of the indoor space acquired based on the data acquisition module;
when the integrity of the laser point cloud data is larger than a preset value, indicating that the laser point cloud data in the indoor space is completely acquired;
and when the integrity of the laser point cloud data is not greater than a preset value, indicating that the laser point cloud data in the indoor space is not completely acquired, and continuing to acquire the laser point cloud data by the data acquisition module.
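For illustration, the sketch below computes the per-segment quantities that claim 8 names (length ratio f_i, centre-of-gravity and angle deviations, confidence ratio) and applies the stated integrity gate; since the β and γ formulas are published only as images, the aggregation itself is deliberately left out.

```python
# Assumed representation: each straight line segment extracted from the laser
# point cloud carries a length, a centroid, an angle and a confidence.
import numpy as np

def segment_statistics(lengths, centroids, angles, confidences):
    """Per-segment quantities named in claim 8, measured against the
    cloud-wide averages (their combination into beta is not reproduced)."""
    lengths = np.asarray(lengths, dtype=float)
    centroids = np.asarray(centroids, dtype=float)    # shape (n, 2) or (n, 3)
    angles = np.asarray(angles, dtype=float)
    confidences = np.asarray(confidences, dtype=float)

    f = lengths / lengths.mean()                      # f_i: length ratio
    g = np.linalg.norm(centroids - centroids.mean(axis=0), axis=1)  # centroid offset (smaller = closer match)
    theta = np.abs(angles - angles.mean())            # angle offset (smaller = closer match)
    c = confidences / confidences.mean()              # confidence ratio
    return f, g, theta, c

def acquisition_complete(gamma: float, preset: float = 0.95) -> bool:
    """Claim 8's gate: keep acquiring until integrity exceeds the preset value."""
    return gamma > preset
```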
9. A method of surveying and mapping an indoor space, comprising:
step 1, scanning the indoor space to acquire laser point cloud data and image data of the indoor space while capturing the motion track of a data acquisition module;
step 2, fusing the laser point cloud data and the image data acquired at the same time based on the motion track, and identifying the structures of the areas of the indoor space that need to be measured and those that do not, to obtain scale data of each part of the structure of the indoor space;
and step 3, automatically drawing a three-dimensional space model of the indoor space based on the scale data.
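For illustration, a skeleton of the three claimed method steps; the frame structure, timestamp pairing and stubbed measurements are assumptions, not the patented implementation.

```python
# Illustrative pipeline for steps 1-3; data types and fusion rule are assumed.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Frame:
    t: float                                     # acquisition timestamp
    points: List[Tuple[float, float, float]]     # laser point cloud (step 1)
    image: bytes                                 # synchronized image (step 1)
    pose: Tuple[float, float, float]             # motion-track sample (step 1)

def fuse(frames: List[Frame]) -> Dict[str, float]:
    """Step 2: pair each scan with the image taken at the same time, place
    both in a common frame via the captured pose, and derive per-structure
    scale data (stubbed here)."""
    scale_data: Dict[str, float] = {}
    for k, frame in enumerate(frames):
        # A real system would register frame.points using frame.pose and
        # measure wall segments; we record only a placeholder value.
        scale_data[f"structure_{k}"] = float(len(frame.points))
    return scale_data

def build_model(scale_data: Dict[str, float]) -> str:
    """Step 3: hand the scale data to a modelling back end (stubbed)."""
    return f"3D model with {len(scale_data)} measured structures"
```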
CN202110613003.2A 2021-06-02 2021-06-02 Indoor space surveying and mapping system and method Pending CN113340266A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110613003.2A CN113340266A (en) 2021-06-02 2021-06-02 Indoor space surveying and mapping system and method


Publications (1)

Publication Number Publication Date
CN113340266A true CN113340266A (en) 2021-09-03

Family

ID=77472985

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110613003.2A Pending CN113340266A (en) 2021-06-02 2021-06-02 Indoor space surveying and mapping system and method

Country Status (1)

Country Link
CN (1) CN113340266A (en)

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1799252A (en) * 2003-06-02 2006-07-05 卡西欧计算机株式会社 Captured image projection apparatus and captured image correction method
CN1909590A (en) * 2005-08-04 2007-02-07 卡西欧计算机株式会社 Image-capturing apparatus, image correction method and program
CN105890580A (en) * 2016-04-06 2016-08-24 马嘉伦 Indoor space surveying and mapping system and method
CN106204705A (en) * 2016-07-05 2016-12-07 长安大学 A kind of 3D point cloud segmentation method based on multi-line laser radar
JP2017198517A (en) * 2016-04-27 2017-11-02 株式会社国際電気通信基礎技術研究所 Three dimensional map generation system
CN108230432A (en) * 2017-12-12 2018-06-29 中国南方电网有限责任公司超高压输电公司广州局 A kind of insulator laser point cloud three-dimensional rebuilding method based on CS-RBF
CN108510530A (en) * 2017-02-28 2018-09-07 深圳市朗驰欣创科技股份有限公司 A kind of three-dimensional point cloud matching process and its system
CN108564653A (en) * 2018-03-16 2018-09-21 中国传媒大学 Human skeleton tracing system and method based on more Kinect
CN108961271A (en) * 2018-06-27 2018-12-07 西安工程大学 A kind of super voxel bumps partitioning algorithm of colour 3D point cloud
CN109543697A (en) * 2018-11-16 2019-03-29 西北工业大学 A kind of RGBD images steganalysis method based on deep learning
CN109801358A (en) * 2018-12-06 2019-05-24 宁波市电力设计院有限公司 A kind of substation's three-dimensional investigation method scanning and put cloud visual fusion based on SLAM
CN110443810A (en) * 2019-07-17 2019-11-12 北京建筑大学 Point cloud plane dividing method based on quick adjacent voxel inquiry
CN111275810A (en) * 2020-01-17 2020-06-12 五邑大学 K nearest neighbor point cloud filtering method and device based on image processing and storage medium
CN111488769A (en) * 2019-01-28 2020-08-04 北京工商大学 Unsupervised fusion point cloud superpixelization method based on light spot divergence size
CN111679288A (en) * 2020-06-19 2020-09-18 中国林业科学研究院资源信息研究所 Method for measuring spatial distribution of point cloud data
CN112017251A (en) * 2020-10-19 2020-12-01 杭州飞步科技有限公司 Calibration method and device, road side equipment and computer readable storage medium
CN112084875A (en) * 2020-08-11 2020-12-15 南京航空航天大学 Multi-laser radar coordinate system method
CN112258406A (en) * 2020-10-16 2021-01-22 中国人民解放军空军航空大学 Linear array push-broom CCD image rapid automatic geometric correction method
CN112308974A (en) * 2020-10-30 2021-02-02 南京航空航天大学 Large-scale point cloud visualization method for improving octree and adaptive reading
CN112487919A (en) * 2020-11-25 2021-03-12 吉林大学 3D target detection and tracking method based on camera and laser radar
CN112561985A (en) * 2020-10-27 2021-03-26 广西大学 Hedgerow nursery stock trimming and centering method based on binocular vision
CN112653889A (en) * 2020-12-23 2021-04-13 北汽福田汽车股份有限公司 Camera self-calibration method and device, camera equipment and vehicle


Similar Documents

Publication Publication Date Title
CN111882612B (en) Vehicle multi-scale positioning method based on three-dimensional laser detection lane line
CN110221603B (en) Remote obstacle detection method based on laser radar multi-frame point cloud fusion
US9846946B2 (en) Objection recognition in a 3D scene
CN108171131B (en) Improved MeanShift-based method for extracting Lidar point cloud data road marking line
CN108868268B (en) Unmanned parking space posture estimation method based on point-to-surface distance and cross-correlation entropy registration
CN112233177B (en) Unmanned aerial vehicle pose estimation method and system
CN113359097B (en) Millimeter wave radar and camera combined calibration method
CN110097093A (en) A kind of heterologous accurate matching of image method
CN115761550A (en) Water surface target detection method based on laser radar point cloud and camera image fusion
CN111007531A (en) Road edge detection method based on laser point cloud data
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
CN113205604A (en) Feasible region detection method based on camera and laser radar
CN113281782A (en) Laser radar snow point filtering method based on unmanned vehicle
CN115908539A (en) Target volume automatic measurement method and device and storage medium
CN117218350A (en) SLAM implementation method and system based on solid-state radar
CN113487631B (en) LEGO-LOAM-based adjustable large-angle detection sensing and control method
CN107657621B (en) Two-dimensional laser point cloud sequence real-time segmentation method based on linear region growth
CN113219472A (en) Distance measuring system and method
CN112379393A (en) Train collision early warning method and device
CN113340266A (en) Indoor space surveying and mapping system and method
CN115930946A (en) Method for describing multiple characteristics of dynamic barrier in indoor and outdoor alternating environment
CN115965712A (en) Building two-dimensional vector diagram construction method, system, equipment and storage medium
CN113095324A (en) Classification and distance measurement method and system for cone barrel
CN114089376A (en) Single laser radar-based negative obstacle detection method
CN113554705A (en) Robust positioning method for laser radar in changing scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210903