WO2019127347A1 - Three-dimensional mapping method, apparatus and system, cloud platform, electronic device, and computer program product - Google Patents

Three-dimensional mapping method, apparatus and system, cloud platform, electronic device, and computer program product Download PDF

Info

Publication number
WO2019127347A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
coordinate system
line lidar
point cloud
information
Prior art date
Application number
PCT/CN2017/119784
Other languages
English (en)
French (fr)
Inventor
高军强
廉士国
林义闽
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司
Priority to PCT/CN2017/119784 priority Critical patent/WO2019127347A1/zh
Priority to CN201780002718.6A priority patent/CN108337915A/zh
Publication of WO2019127347A1 publication Critical patent/WO2019127347A1/zh

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 - Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 - Systems determining position data of a target
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/08 - Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation

Definitions

  • the present application relates to the field of map reconstruction technologies, and in particular to a three-dimensional mapping method, apparatus, system, cloud platform, electronic device, and computer program product.
  • 3D map reconstruction is the processing of a 3D scene into a mathematical model suitable for computer representation and understanding. It is the basis on which a computer processes, operates on, and analyzes a 3D spatial environment, and it is also the key technology for building in the computer a virtual reality that expresses the objective world.
  • the mobile platform first needs to determine its own position, and then splice together the three-dimensional map models acquired at different positions, to achieve real-time three-dimensional mapping. Real-time positioning and 3D scene reconstruction are therefore the two key technologies of 3D mapping. Broadly speaking, mobile positioning and 3D reconstruction come in two flavors: lidar-based and vision-based positioning and reconstruction.
  • the lidar-based positioning and reconstruction method uses laser time-of-flight to obtain spatial position information.
  • the method is mature, offers high positioning accuracy, places few demands on the environment, and is highly stable.
  • the embodiments of the present disclosure provide a three-dimensional mapping method, apparatus, system, cloud platform, electronic device, and computer program product, which perform positioning with a lidar and acquire visual three-dimensional data with multiple vision sensors; no optimization calculation is needed on the visual 3D data, which greatly reduces computational complexity.
  • an embodiment of the present disclosure provides a three-dimensional mapping method, the method comprising:
  • Positioning is performed by a single-line lidar to obtain the pose information of the single-line lidar in the world coordinate system, and a second pose relationship between the reference visual coordinate system and the world coordinate system is obtained according to the first pose relationship between the single-line lidar coordinate system and the reference visual coordinate system.
  • an embodiment of the present disclosure provides a three-dimensional mapping device, where the device includes a positioning unit, a mapping unit, a three-dimensional reconstruction unit, and a splicing unit;
  • the positioning unit performs positioning by using a single-line laser radar to obtain posture information of the single-line laser radar in a world coordinate system;
  • the mapping unit obtains a second pose relationship between the reference visual coordinate system and the world coordinate system according to a first pose relationship between the single-line lidar coordinate system and the reference visual coordinate system, and converts the unified three-dimensional point cloud data into the world coordinate system according to the second pose relationship;
  • the three-dimensional reconstruction unit uses a plurality of vision sensors to collect a plurality of three-dimensional point cloud data at different angles;
  • the splicing unit unifies the plurality of three-dimensional point cloud data obtained by the three-dimensional reconstruction unit into the reference visual coordinate system to obtain unified three-dimensional point cloud data, and converts the unified three-dimensional point cloud data into the world coordinate system, obtaining a three-dimensional map reconstructed by the vision sensors.
  • an embodiment of the present disclosure provides a three-dimensional mapping system, the system comprising: a single-line lidar and at least four vision sensors;
  • the single-line lidar is located at the bottom of the system and is positioned to obtain the pose information of the single-line lidar;
  • Each vision sensor is located above the single-line lidar and points up, down, left, or right respectively; three-dimensional maps are collected at multiple angles, and the three-dimensional maps collected by each vision sensor are spliced to obtain a three-dimensional map.
  • an embodiment of the present disclosure provides a cloud platform, where the cloud platform includes the foregoing three-dimensional mapping device, and stores a three-dimensional map uploaded by the three-dimensional mapping device.
  • an embodiment of the present disclosure provides an electronic device, including: a communication device, a memory, one or more processors; and one or more modules, the one or more modules being stored In the memory, and configured to be executed by the one or more processors, the one or more modules include instructions for performing the various steps in any of the three-dimensional mapping methods described above.
  • an embodiment of the present disclosure provides a computer program product for use in conjunction with an electronic device, the computer program product comprising a computer program embedded in a computer-readable storage medium, the computer program comprising instructions for causing the electronic device to execute the steps of any of the above three-dimensional mapping methods.
  • the embodiment of the present disclosure fuses single-line lidar positioning with three-dimensional reconstruction by vision sensors, adopting lidar positioning together with multiple vision sensors for three-dimensional reconstruction. Because lidar positioning can achieve very high precision, no optimization calculation is required on the visually reconstructed 3D data, which greatly reduces computational complexity.
  • the use of multiple vision sensors for three-dimensional reconstruction enlarges the mapping field of view and achieves high-efficiency three-dimensional map reconstruction.
  • FIG. 1 is a schematic structural diagram of a three-dimensional mapping system in an embodiment of the present disclosure;
  • FIG. 2 is another schematic structural diagram of the three-dimensional mapping system in an embodiment of the present disclosure;
  • FIG. 3 is a flowchart of a three-dimensional mapping method in an embodiment of the present disclosure;
  • FIG. 4 is a schematic diagram of the conversions between the reference visual coordinate system, the single-line lidar coordinate system, and the world coordinate system in an embodiment of the present disclosure;
  • FIG. 5 is a schematic structural diagram of a three-dimensional mapping apparatus in an embodiment of the present disclosure;
  • FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • the method of capturing three-dimensional scenes at upper, middle, and lower viewing angles with three depth cameras requires rotating to capture 360-degree panoramic three-dimensional imagery at different positions and splicing the three-dimensional models from the different positions into a three-dimensional map; this method cannot achieve real-time reconstruction.
  • the present disclosure provides a three-dimensional mapping method that uses a single-line lidar for positioning combined with multiple vision sensors for three-dimensional reconstruction; it can realize three-dimensional mapping, including for dynamic motion scenes.
  • the present disclosure provides a three-dimensional mapping method, which is applied to the three-dimensional mapping system shown in FIG. 1, the system comprising a single-line laser radar and at least four vision sensors;
  • the single-line lidar is located at the bottom of the system and is positioned to obtain the pose information of the single-line lidar;
  • Each vision sensor is located above the single-line lidar and points up, down, left, or right respectively; multiple three-dimensional point cloud maps are collected at different angles, and the three-dimensional maps from each vision sensor are spliced to obtain a three-dimensional map.
  • the present disclosure provides a method for three-dimensional reconstruction using four vision sensors.
  • the three-dimensional mapping system specifically includes four vision sensors, which point up, down, left, and right respectively and form a square with the vision sensors as vertices.
  • one side of the square may be parallel to the horizontal plane, or may be at a 45-degree angle to the horizontal plane.
  • FIG. 2 shows a wheeled trolley 103 equipped with the three-dimensional mapping system.
  • the trolley maps the scene in three dimensions in real time as it moves; the single-line lidar 101 is placed at the bottom of the trolley so that the single-line lidar can scan more objects,
  • which improves the efficiency and the accuracy of the lidar positioning.
  • the four vision sensors 102 point up, down, left, and right respectively, which widens the viewing angles of the three-dimensional mapping, reduces blind spots in the reconstruction, and gives the reconstruction larger coverage.
  • the three-dimensional mapping system is a self-contained whole; it can be mounted on a suitable moving trolley for different scenarios, making it flexible to use.
  • the three-dimensional mapping method is described below with reference to FIG. 3, and specifically includes:
  • Step 201: performing positioning with a single-line lidar to obtain pose information of the single-line lidar;
  • This step acquires the real-time pose information of the single-line lidar.
  • By establishing the positional relationship between the lidar coordinate system and the world coordinate system, the pose of the single-line lidar in the world coordinate system is obtained. There are two ways to position the single-line lidar.
  • the first way is to scan objects in a plane point by point to obtain distance information, compute the geographic information of the single-line lidar, and perform a splicing calculation against the global map to obtain the coordinate and orientation information transform1 of the current position in the global coordinate system, as shown in FIG. 4.
  • the second way is to combine the single-line lidar with an IMU (Inertial Measurement Unit) for positioning; fusing in the IMU increases the stability of mapping.
  • the single-line lidar is combined with the IMU for positioning as follows: the IMU dynamically calculates the motion increment of the single-line lidar, single-line lidar positioning yields the first pose information of the single-line lidar, and the first pose information and the motion increment are fused to obtain the pose information of the single-line lidar.
  • fusing the first pose information and the motion increment to obtain the pose information of the single-line lidar specifically includes: calculating a first gain matrix of the multi-information fusion filter for the first pose information and a second gain matrix of the multi-information fusion filter for the motion increment, and fusing the first gain matrix and the second gain matrix by multi-information fusion filtering to obtain the pose information of the single-line lidar.
  • with the single-line lidar and the IMU acquiring position information in real time, the wheeled trolley can move smoothly and achieve high-precision position tracking.
  • the multi-information fusion filter can use Kalman filtering, particle filtering, or similar algorithms to realize real-time positioning of the single-line lidar.
  • Step 202: obtaining a second pose relationship between the reference visual coordinate system and the world coordinate system by using a first pose relationship between the single-line lidar coordinate system and the reference visual coordinate system;
  • the constructed three-dimensional map needs to live in the world coordinate system, so this step establishes a relationship between the reference visual coordinate system and the world coordinate system.
  • the second pose relationship between the reference visual coordinate system and the world coordinate system is obtained by using the first pose relationship between the single-line lidar coordinate system and the reference visual coordinate system. Since the lidar coordinate system and the reference visual coordinate system are both fixed, their relative position does not change; building on the real-time pose information of the single-line lidar from step 201, the second pose relationship between the reference visual coordinate system and the world coordinate system can be derived from the relationship between the two coordinate systems.
  • each vision sensor sits in its own visual coordinate system.
  • This scheme introduces the concept of a reference visual coordinate system, i.e., a reference visual coordinate system is constructed on the basis of the visual coordinate system of one particular vision sensor.
  • four vision sensors are used for three-dimensional reconstruction. Since the four vision sensors are located at different positions, the coordinate system of the downward-pointing vision sensor closest to the single-line lidar can be chosen as the basis for the reference visual coordinate system.
  • step 201 has already obtained the coordinate and orientation information of the single-line lidar in the global coordinate system, and the reference visual coordinate system and the lidar coordinate system have the first pose relationship transform2. The positional relationship between the reference visual coordinate system and the world coordinate system can therefore be obtained as the second pose relationship transform3, i.e., the real-time pose information of the vision sensors in the world coordinate system is obtained.
  • Step 203: collecting visual three-dimensional data at different angles by using multiple vision sensors;
  • step 203 can be performed simultaneously with step 201 and step 202.
  • while the single-line lidar is positioning, the vision sensors can collect the visual three-dimensional data; there is no required order between the two.
  • the vision sensor can be a visual depth sensor.
  • the present disclosure proposes a method of three-dimensional reconstruction using multiple vision sensors, effectively combining them to construct the three-dimensional spatial information of the up, down, left, and right viewing angles during motion, achieving three-dimensional mapping.
  • three-dimensional reconstruction is performed at different angles by using four vision sensors, obtaining the three-dimensional point cloud data reconstructed by each vision sensor; the sensors point up, down, left, and right respectively and form a square with the vision sensors as vertices.
  • Specifically, one side of the square may be parallel to the horizontal plane, or may be at a 45-degree angle to the horizontal plane.
  • the four sensors are visual depth sensors, and the four visual depth sensors perform three-dimensional reconstruction on scenes at different angles to obtain four sets of three-dimensional point cloud data; the four sets can subsequently be unified, based on the pose of one particular vision sensor, into the reference visual coordinate system.
  • Step 204: unifying the plurality of three-dimensional point cloud data collected by the plurality of vision sensors into the reference visual coordinate system;
  • according to the pose relationships between the vision sensors, the plurality of three-dimensional data are unified into the reference visual coordinate system to obtain large-viewing-angle three-dimensional data.
  • Step 205: converting the unified three-dimensional point cloud data according to the second pose relationship between the reference visual coordinate system and the world coordinate system to obtain a real-time three-dimensional map in the world coordinate system;
  • building on step 204, in which the three-dimensional point cloud data were unified into the reference visual coordinate system, this step converts the unified three-dimensional point cloud data into the world coordinate system according to the relationship between the reference visual coordinate system and the world coordinate system, obtaining a three-dimensional map in the world coordinate system.
  • Step 206: meshing the three-dimensional map to obtain a gridded three-dimensional map.
  • the three-dimensional map is treated as one huge grid, and the extrema along each coordinate axis of the three-dimensional object space are found.
  • according to the split count a, the X, Y, and Z axes are divided into 3a intervals, and the whole huge grid is divided into 2^(2a) sub-grids, each of which is numbered.
  • the reconstructed three-dimensional point cloud map is thereby meshed, which yields a map with better visual quality while also reducing the data volume of the three-dimensional map.
  • the solution also proposes an implementation in which, after a three-dimensional map is obtained at one position by the above method, three-dimensional maps are acquired at multiple positions: when the three-dimensional positioning system moves on the trolley, three-dimensional maps are constructed at the different positions during the motion, and the three-dimensional maps at the different positions are spliced to obtain a three-dimensional map under motion.
  • the present disclosure first performs positioning with a single-line lidar, obtains the pose change information of the vision sensors through the pose relationship between the single-line lidar and the vision sensors, and obtains a three-dimensional map by combining this with the visual three-dimensional data collected by the vision sensors.
  • the present disclosure uses multiple vision sensors to capture at different angles and splices the three-dimensional maps generated at the different angles into a three-dimensional map. On this basis, the three-dimensional map is also meshed, which yields a map with better visual quality while also reducing the data volume of the three-dimensional map.
  • the embodiment provides a three-dimensional mapping device whose principle for solving the problem is similar to that of the three-dimensional mapping method; the implementation of the device can therefore refer to the implementation of the method, and repeated details are omitted.
  • a three-dimensional mapping device includes a positioning unit 301, a mapping unit 302, a three-dimensional reconstruction unit 303, and a splicing unit 304:
  • the positioning unit 301 is configured to perform positioning by using a single-line lidar to obtain pose information of the single-line lidar in the world coordinate system;
  • the mapping unit 302 obtains a second pose relationship between the reference visual coordinate system and the world coordinate system according to the first pose relationship between the single-line lidar coordinate system and the reference visual coordinate system, and converts the unified three-dimensional point cloud data into the world coordinate system according to the second pose relationship;
  • the three-dimensional reconstruction unit 303 collects a plurality of three-dimensional point cloud data of different angles by using multiple vision sensors;
  • the splicing unit 304 integrates the plurality of three-dimensional point cloud data obtained by the three-dimensional reconstruction unit into a reference visual coordinate system to obtain unified three-dimensional point cloud data, and converts the unified three-dimensional point cloud data into a world coordinate system to obtain a visual sensor. Reconstructed 3D map.
  • the splicing unit 304 specifically includes a splicing subunit and a converting subunit
  • the splicing sub-unit is configured to unify the plurality of three-dimensional point cloud data into the reference visual coordinate system according to the pose relationships between the plurality of vision sensors, taking one particular vision sensor as the reference, to obtain unified three-dimensional point cloud data;
  • a conversion subunit configured to convert the unified three-dimensional point cloud data obtained by the splicing subunit into the world coordinate system to obtain a three-dimensional map reconstructed by the visual sensor.
  • the positioning unit may adopt either of two positioning implementations; the positioning unit includes a first positioning unit or a second positioning unit:
  • the first positioning unit obtains the pose information of the single-line lidar according to the distance information between the single-line lidar and objects, obtained by the single-line lidar scanning objects in a plane point by point, and the geographic information of the single-line lidar;
  • the second positioning unit dynamically calculates the motion increment of the single-line lidar using an inertial measurement unit, obtains the first pose information of the single-line lidar using single-line lidar positioning, and fuses the first pose information with the motion increment to obtain the pose information of the single-line lidar.
  • the second positioning unit includes a first calculating subunit, a second calculating subunit, and a fusion subunit;
  • the first calculating sub-unit dynamically calculates the motion increment of the single-line lidar by using the inertial measurement unit, and obtains the first pose information of the single-line lidar by using the single-line lidar positioning;
  • a second calculating subunit calculating a first gain matrix of the multi-information fusion filtering of the first pose information, and calculating a second gain matrix of the multi-information fusion filtering of the motion increment;
  • the fusion subunit combines the first gain matrix and the second gain matrix by using multi-information fusion filtering to obtain the pose change information of the single-line lidar.
  • the device further comprises a dynamic splicing unit configured to acquire three-dimensional maps reconstructed at multiple positions and splice the three-dimensional maps reconstructed at the multiple positions to obtain a three-dimensional map.
  • the device further comprises a meshing unit for meshing the three-dimensional map to obtain a gridded three-dimensional map.
  • the disclosure first performs positioning with a single-line lidar.
  • by calibrating the pose relationship between the single-line lidar and the vision sensors, the pose information of the vision sensors is obtained, and the three-dimensional map is obtained by combining this with the visual three-dimensional data collected by the vision sensors.
  • the present disclosure uses multiple vision sensors to capture at different angles and splices the three-dimensional maps generated at the different angles into a three-dimensional map.
  • the three-dimensional map is also meshed, which yields a map with better visual quality while also reducing the data volume of the three-dimensional map.
  • the embodiment of the present disclosure provides a cloud platform, which includes any three-dimensional mapping device in the above embodiment, and stores a three-dimensional map uploaded by the three-dimensional mapping device.
  • the cloud platform further includes a processing device for processing the three-dimensional map uploaded by the three-dimensional drawing device.
  • the processing operation includes storing the three-dimensional map: the three-dimensional maps reported by different three-dimensional mapping devices may be stored in different cloud storage spaces, or stored in one cloud storage space, or stored in different cloud storage spaces according to the regions of the reporting three-dimensional mapping devices.
  • the processing operation further includes splicing a plurality of the three-dimensional maps into a larger three-dimensional map; for example, multiple three-dimensional maps reported by different three-dimensional mapping devices within the same area can be combined and spliced to form a three-dimensional map of a larger area.
  • an electronic device is further provided in the embodiments of the present disclosure. Since its principle is similar to the three-dimensional mapping method, its implementation can refer to the implementation of the method, and repeated details are omitted.
  • the electronic device 600 includes: a communication device 601, a memory 602, one or more processors 603; and one or more modules, the one or more modules being stored in the memory, And being configured to be executed by the one or more processors, the one or more modules including instructions for performing the various steps in any of the above three-dimensional mapping methods.
  • the electronic device is a robot.
  • an embodiment of the present disclosure further provides a computer program product for use in conjunction with an electronic device, the computer program product comprising a computer program embedded in a computer readable storage medium,
  • the computer program includes instructions for causing the electronic device to perform various steps in any of the above three-dimensional mapping methods.
  • embodiments of the present application can be provided as a method, system, or computer program product.
  • the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment in combination of software and hardware.
  • the application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
  • the computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A three-dimensional mapping method, apparatus and system, cloud platform, electronic device, and computer program product. The method comprises: positioning with a single-line lidar to obtain pose information of the single-line lidar in a world coordinate system, and obtaining a second pose relationship between a reference visual coordinate system and the world coordinate system according to a first pose relationship between the single-line lidar coordinate system and the reference visual coordinate system; collecting multiple sets of three-dimensional point cloud data at different angles with multiple vision sensors, and unifying the multiple sets of three-dimensional point cloud data into the reference visual coordinate system to obtain unified three-dimensional point cloud data; and converting the unified three-dimensional point cloud data into the world coordinate system according to the second pose relationship, to obtain a three-dimensional map reconstructed by the vision sensors. By fusing visual and lidar positioning and mapping, lidar positioning improves positioning accuracy, while using vision sensors for three-dimensional reconstruction enables high-efficiency reconstruction of three-dimensional maps.

Description

Three-dimensional mapping method, apparatus and system, cloud platform, electronic device, and computer program product
Technical Field
The present application relates to the technical field of map reconstruction, and in particular to a three-dimensional mapping method, apparatus, system, cloud platform, electronic device, and computer program product.
Background
Three-dimensional map reconstruction is the processing of a three-dimensional scene into a mathematical model suitable for computer representation and understanding. It is the basis on which a computer processes, operates on, and analyzes a three-dimensional spatial environment, and it is also the key technology for building in the computer a virtual reality that expresses the objective world.
A mobile platform first needs to determine its own position, and then splice together the three-dimensional map models acquired at different positions, to achieve real-time three-dimensional mapping. Real-time positioning and three-dimensional scene reconstruction are therefore the two key technologies of three-dimensional mapping. Broadly speaking, mobile positioning and three-dimensional reconstruction come in two flavors: lidar-based and vision-based positioning and reconstruction.
The lidar-based positioning and reconstruction method uses laser time-of-flight to obtain spatial position information. The method is mature, offers high positioning accuracy, places few demands on the environment, and is highly stable.
There are currently devices that perform three-dimensional mapping with multiple visual depth sensors, using three depth cameras to capture three-dimensional scenes at upper, middle, and lower viewing angles. Such a device rotates to capture 360-degree panoramic three-dimensional imagery at different positions and splices the three-dimensional models from the different positions into a three-dimensional map. This method cannot achieve real-time reconstruction and requires manually placing the device in the environment.
Summary
Embodiments of the present disclosure provide a three-dimensional mapping method, apparatus, system, cloud platform, electronic device, and computer program product, which perform positioning with a lidar and acquire visual three-dimensional data with multiple vision sensors; no optimization calculation is needed on the visual three-dimensional data, which greatly reduces computational complexity.
In a first aspect, an embodiment of the present disclosure provides a three-dimensional mapping method, the method comprising:
positioning with a single-line lidar to obtain pose information of the single-line lidar in a world coordinate system, and obtaining a second pose relationship between a reference visual coordinate system and the world coordinate system according to a first pose relationship between the single-line lidar coordinate system and the reference visual coordinate system;
collecting multiple sets of three-dimensional point cloud data at different angles with multiple vision sensors, and unifying the multiple sets of three-dimensional point cloud data into the reference visual coordinate system to obtain unified three-dimensional point cloud data;
converting the unified three-dimensional point cloud data into the world coordinate system according to the second pose relationship, to obtain a three-dimensional map reconstructed by the vision sensors.
In a second aspect, an embodiment of the present disclosure provides a three-dimensional mapping apparatus, the apparatus comprising a positioning unit, a mapping unit, a three-dimensional reconstruction unit, and a splicing unit;
the positioning unit performs positioning with a single-line lidar to obtain pose information of the single-line lidar in a world coordinate system;
the mapping unit obtains a second pose relationship between a reference visual coordinate system and the world coordinate system according to a first pose relationship between the single-line lidar coordinate system and the reference visual coordinate system, and converts the unified three-dimensional point cloud data into the world coordinate system according to the second pose relationship;
the three-dimensional reconstruction unit collects multiple sets of three-dimensional point cloud data at different angles with multiple vision sensors;
the splicing unit unifies the multiple sets of three-dimensional point cloud data obtained by the three-dimensional reconstruction unit into the reference visual coordinate system to obtain unified three-dimensional point cloud data, and converts the unified three-dimensional point cloud data into the world coordinate system to obtain a three-dimensional map reconstructed by the vision sensors.
In a third aspect, an embodiment of the present disclosure provides a three-dimensional mapping system, the system comprising a single-line lidar and at least four vision sensors;
the single-line lidar is located at the bottom of the system and is positioned to obtain pose information of the single-line lidar;
each vision sensor is located above the single-line lidar and points up, down, left, or right respectively; three-dimensional maps are collected at multiple angles, and the three-dimensional maps collected by the vision sensors are spliced to obtain a three-dimensional map.
In a fourth aspect, an embodiment of the present disclosure provides a cloud platform, the cloud platform comprising the above three-dimensional mapping apparatus and storing the three-dimensional maps uploaded by the three-dimensional mapping apparatus.
In a fifth aspect, an embodiment of the present disclosure provides an electronic device, the electronic device comprising: a communication device, a memory, one or more processors, and one or more modules stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing the steps of any of the above three-dimensional mapping methods.
In a sixth aspect, an embodiment of the present disclosure provides a computer program product for use in combination with an electronic device, the computer program product comprising a computer program embedded in a computer-readable storage medium, the computer program including instructions for causing the electronic device to perform the steps of any of the above three-dimensional mapping methods.
The beneficial effects are as follows:
Embodiments of the present disclosure fuse single-line lidar positioning with three-dimensional reconstruction by vision sensors, adopting lidar positioning together with multiple vision sensors for three-dimensional reconstruction. Because lidar positioning can reach very high accuracy, no optimization calculation is required on the visually reconstructed three-dimensional data, which greatly reduces computational complexity. Using multiple vision sensors for three-dimensional reconstruction enlarges the mapping field of view and achieves high-efficiency three-dimensional map reconstruction.
Brief Description of the Drawings
Specific embodiments of the present disclosure are described below with reference to the accompanying drawings, in which:
FIG. 1 is a schematic structural diagram of a three-dimensional mapping system in an embodiment of the present disclosure;
FIG. 2 is another schematic structural diagram of the three-dimensional mapping system in an embodiment of the present disclosure;
FIG. 3 is a flowchart of a three-dimensional mapping method in an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of the conversions between the reference visual coordinate system, the single-line lidar coordinate system, and the world coordinate system in an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a three-dimensional mapping apparatus in an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device in an embodiment of the present disclosure.
Detailed Description
To make the technical solutions and advantages of the present disclosure clearer, exemplary embodiments of the present disclosure are described in further detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present disclosure, not an exhaustive list of all embodiments. Where no conflict arises, the embodiments of the present disclosure and the features in those embodiments may be combined with one another.
In the course of the invention, the inventors noticed the following:
The approach of capturing three-dimensional scenes at upper, middle, and lower viewing angles with three depth cameras requires rotating to capture 360-degree panoramic three-dimensional imagery at different positions and splicing the three-dimensional models from the different positions into a three-dimensional map; this method cannot achieve real-time reconstruction.
On this basis, the present disclosure provides a three-dimensional mapping method that uses a single-line lidar for positioning combined with multiple vision sensors for three-dimensional reconstruction. It can realize three-dimensional mapping, including for dynamic motion scenes.
The three-dimensional mapping method provided by the present disclosure is applied to the three-dimensional mapping system shown in FIG. 1, which comprises one single-line lidar and at least four vision sensors;
the single-line lidar is located at the bottom of the system and is positioned to obtain pose information of the single-line lidar;
each vision sensor is located above the single-line lidar and points up, down, left, or right respectively; multiple three-dimensional point cloud maps are collected at different angles, and the three-dimensional maps from the vision sensors are spliced to obtain a three-dimensional map.
The present disclosure provides a way of performing three-dimensional reconstruction with four vision sensors: the three-dimensional mapping system specifically comprises four vision sensors pointing up, down, left, and right respectively, forming a square with the vision sensors as vertices. Specifically, one side of the square may be parallel to the horizontal plane, or at a 45-degree angle to it.
FIG. 2 shows a wheeled trolley 103 equipped with the three-dimensional mapping system. The trolley maps the scene in three dimensions in real time as it moves. The single-line lidar 101 is placed at the bottom of the trolley, which lets the single-line lidar scan more objects and improves the efficiency and accuracy of lidar positioning. The four vision sensors 102 point up, down, left, and right respectively, widening the viewing angles of the three-dimensional mapping, reducing blind spots in the reconstruction, and giving the reconstruction larger coverage.
In the present disclosure, the three-dimensional mapping system is a self-contained whole; it can be mounted on a suitable moving trolley for different scenarios, making it flexible to use.
The three-dimensional mapping method of the present disclosure is described in detail below with reference to FIG. 3, and specifically includes:
Step 201: positioning with the single-line lidar to obtain pose information of the single-line lidar;
This step acquires the real-time pose information of the single-line lidar. By establishing the positional relationship between the lidar coordinate system and the world coordinate system, the pose of the single-line lidar in the world coordinate system is obtained. There are two ways to position the single-line lidar.
The first way: objects in a plane are scanned point by point to obtain distance information, the geographic information of the single-line lidar is computed, and a splicing calculation against the global map yields the coordinate and orientation information transform1 of the current position in the global coordinate system, as shown in FIG. 4. The second way: the single-line lidar is combined with an IMU (Inertial Measurement Unit) for positioning; fusing in the IMU increases the stability of mapping.
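As a concrete illustration of the first way, the sketch below converts a single-line lidar's point-by-point distance readings into 2D points in the lidar frame, which is the form a scan matcher would align against the global map to produce transform1. This is a minimal sketch: the function name, angle parameters, and synthetic scan are assumptions for illustration, and the patent does not prescribe a particular scan-matching method.

    import numpy as np

    def scan_to_points(ranges, angle_min, angle_increment):
        # Convert a planar, point-by-point range scan to Nx2 points in the lidar frame.
        ranges = np.asarray(ranges, dtype=float)
        angles = angle_min + angle_increment * np.arange(len(ranges))
        valid = np.isfinite(ranges) & (ranges > 0.0)  # drop dropouts and zero returns
        return np.stack([ranges[valid] * np.cos(angles[valid]),
                         ranges[valid] * np.sin(angles[valid])], axis=1)

    # Example: a synthetic 180-degree scan with 1-degree resolution at a 2 m range
    points = scan_to_points(ranges=np.full(181, 2.0),
                            angle_min=-np.pi / 2,
                            angle_increment=np.deg2rad(1.0))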
For the second way, combining the single-line lidar with the IMU for positioning specifically means: the IMU dynamically calculates the motion increment of the single-line lidar, single-line lidar positioning yields the first pose information of the single-line lidar, and the first pose information and the motion increment are fused to obtain the pose information of the single-line lidar.
Fusing the first pose information and the motion increment to obtain the pose information of the single-line lidar specifically includes: calculating a first gain matrix of the multi-information fusion filter for the first pose information and a second gain matrix of the multi-information fusion filter for the motion increment, and fusing the first gain matrix and the second gain matrix by multi-information fusion filtering to obtain the pose information of the single-line lidar.
With the single-line lidar and the IMU acquiring position information in real time, the wheeled trolley can move smoothly and achieve high-precision position tracking.
In practice, the multi-information fusion filter can use Kalman filtering, particle filtering, or similar algorithms to realize real-time positioning of the single-line lidar.
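As a minimal sketch of this fusion, assuming a scalar-covariance Kalman-style filter, the IMU's motion increment drives the prediction and the lidar's first pose information serves as the measurement; the relative weights given to the two sources play the role of the two gain matrices. The class name and all noise values are illustrative assumptions rather than values from the patent, and a particle filter could replace the update rule behind the same interface.

    import numpy as np

    class PoseFusion:
        def __init__(self, pose0, P0=1.0, q_imu=0.05, r_lidar=0.02):
            self.x = np.asarray(pose0, dtype=float)  # pose estimate, e.g. (x, y, yaw)
            self.P = P0       # covariance, kept scalar for simplicity
            self.q = q_imu    # process noise from integrating IMU increments
            self.r = r_lidar  # measurement noise of the lidar pose fix

        def predict(self, delta):
            # Apply the IMU motion increment and grow the uncertainty.
            self.x = self.x + np.asarray(delta, dtype=float)
            self.P = self.P + self.q

        def update(self, lidar_pose):
            # Fuse the lidar's first pose information via the Kalman gain.
            K = self.P / (self.P + self.r)  # gain: how much to trust the lidar fix
            self.x = self.x + K * (np.asarray(lidar_pose, dtype=float) - self.x)
            self.P = (1.0 - K) * self.P

    f = PoseFusion(pose0=[0.0, 0.0, 0.0])
    f.predict(delta=[0.10, 0.00, 0.01])       # IMU increment over one step
    f.update(lidar_pose=[0.09, 0.01, 0.012])  # single-line lidar positioning result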
Step 202: obtaining the second pose relationship between the reference visual coordinate system and the world coordinate system from the first pose relationship between the single-line lidar coordinate system and the reference visual coordinate system;
The constructed three-dimensional map needs to live in the world coordinate system, so this step establishes the relationship between the reference visual coordinate system and the world coordinate system. Specifically, the second pose relationship between the reference visual coordinate system and the world coordinate system is obtained from the first pose relationship between the single-line lidar coordinate system and the reference visual coordinate system. Since the lidar coordinate system and the reference visual coordinate system are both fixed, their relative position does not change; building on the real-time pose information of the single-line lidar obtained in step 201, the second pose relationship between the reference visual coordinate system and the world coordinate system can be derived from the relationship between the two coordinate systems.
For multiple vision sensors, each vision sensor sits in its own visual coordinate system. This scheme introduces the concept of a reference visual coordinate system: the visual coordinate system of one particular vision sensor is taken as the basis for constructing the reference visual coordinate system.
As one implementation, four vision sensors are used for three-dimensional reconstruction. Since the four vision sensors are at different positions, the coordinate system of the downward-pointing vision sensor closest to the single-line lidar can be chosen as the basis for the reference visual coordinate system.
As shown in FIG. 4, by establishing the pose relationship transform1 between the lidar coordinate system and the world coordinate system, step 201 has already obtained the coordinate and orientation information of the single-line lidar in the global coordinate system, and the reference visual coordinate system and the lidar coordinate system have the first pose relationship transform2. The positional relationship between the reference visual coordinate system and the world coordinate system can therefore be obtained as the second pose relationship transform3, i.e., the real-time pose information of the vision sensors in the world coordinate system.
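Written as 4x4 homogeneous matrices, the chain in FIG. 4 reduces to one matrix product: if transform1 maps lidar coordinates into world coordinates and transform2 maps reference-visual coordinates into lidar coordinates, then transform3 = transform1 * transform2. A minimal sketch with illustrative pose values follows; the frame conventions are assumptions, since the patent only names the three transforms.

    import numpy as np

    def pose_matrix(yaw, tx, ty, tz=0.0):
        # Build a 4x4 homogeneous transform from a yaw angle and a translation.
        c, s = np.cos(yaw), np.sin(yaw)
        T = np.eye(4)
        T[0, 0], T[0, 1] = c, -s
        T[1, 0], T[1, 1] = s, c
        T[:3, 3] = [tx, ty, tz]
        return T

    transform1 = pose_matrix(yaw=0.3, tx=5.0, ty=2.0)          # lidar -> world, from step 201
    transform2 = pose_matrix(yaw=0.0, tx=0.0, ty=0.1, tz=0.4)  # reference visual -> lidar, fixed
    transform3 = transform1 @ transform2                       # reference visual -> world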
Step 203: collecting visual three-dimensional data at different angles with multiple vision sensors;
Step 203 can be performed simultaneously with steps 201 and 202: while the single-line lidar is positioning, the vision sensors can collect visual three-dimensional data; there is no required order between the two.
In practice, a single vision sensor can shoot and reconstruct multiple times to achieve three-dimensional mapping. Specifically, the vision sensor may be a visual depth sensor.
To improve the efficiency of three-dimensional mapping, the present disclosure proposes a method of three-dimensional reconstruction with multiple vision sensors, effectively combining them to construct the spatial information of the up, down, left, and right viewing angles during motion, achieving large-range, high-efficiency three-dimensional mapping.
As shown in FIG. 2, as one implementation, four vision sensors perform three-dimensional reconstruction at different angles, giving the three-dimensional point cloud data reconstructed by each vision sensor; the sensors point up, down, left, and right respectively and form a square with the vision sensors as vertices. Specifically, one side of the square may be parallel to the horizontal plane, or at a 45-degree angle to it.
Specifically, the four sensors are visual depth sensors; the four visual depth sensors reconstruct scenes at different angles to obtain four sets of three-dimensional point cloud data, which can subsequently be unified, based on the pose of one particular vision sensor, into the reference visual coordinate system.
Step 204: unifying the multiple sets of three-dimensional point cloud data collected by the vision sensors into the reference visual coordinate system;
In this step, according to the pose relationships between the vision sensors, the multiple sets of three-dimensional data are unified into the reference visual coordinate system to obtain large-viewing-angle three-dimensional data.
Step 205: converting the unified three-dimensional point cloud data according to the second pose relationship between the reference visual coordinate system and the world coordinate system, to obtain a real-time three-dimensional map in the world coordinate system;
Building on step 204, which unified the three-dimensional point cloud data into the reference visual coordinate system, this step converts the unified three-dimensional point cloud data into the world coordinate system according to the relationship between the reference visual coordinate system and the world coordinate system, at which point a three-dimensional map in the world coordinate system is obtained.
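A minimal sketch of steps 204 and 205 under these definitions: each sensor's cloud is first carried into the reference visual coordinate system through its fixed sensor-to-reference extrinsics, and the unified cloud is then pushed into the world frame with transform3. The extrinsics dictionary, the sensor names, and the identity placeholders are illustrative assumptions.

    import numpy as np

    def transform_points(T, points):
        # Apply a 4x4 homogeneous transform to an Nx3 point cloud.
        homo = np.hstack([points, np.ones((len(points), 1))])
        return (homo @ T.T)[:, :3]

    def build_world_map(clouds, extrinsics, transform3):
        # clouds: {sensor: Nx3 array}; extrinsics: {sensor: 4x4 sensor -> reference}.
        unified = np.vstack([transform_points(extrinsics[s], c)  # step 204
                             for s, c in clouds.items()])
        return transform_points(transform3, unified)             # step 205

    clouds = {"up": np.random.rand(100, 3), "down": np.random.rand(100, 3)}
    extrinsics = {"up": np.eye(4), "down": np.eye(4)}  # placeholder extrinsics
    world_map = build_world_map(clouds, extrinsics, transform3=np.eye(4))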
Step 206: meshing the three-dimensional map to obtain a gridded three-dimensional map.
This step treats the three-dimensional map as one huge grid and finds the extrema along each coordinate axis of the three-dimensional object space. According to the split count a, the X, Y, and Z axes are divided into 3a intervals, the whole huge grid is divided into 2^(2a) sub-grids, and every sub-grid is numbered.
Through this step, the reconstructed three-dimensional point cloud map is meshed, which yields a map with better visual quality while also reducing the data volume of the three-dimensional map.
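The numbering scheme can be sketched as a plain voxel grid: find the per-axis extrema of the map, split each axis into a fixed number of intervals, and give every occupied cell one integer number. The sketch below leaves the interval count k as a free parameter instead of tying it to the split count a used in the patent's 3a-interval, 2^(2a)-sub-grid accounting.

    import numpy as np

    def voxelize(points, k):
        # Assign each 3D point a numbered cell in a k x k x k grid over its extent.
        lo, hi = points.min(axis=0), points.max(axis=0)  # per-axis extrema
        span = np.where(hi > lo, hi - lo, 1.0)           # guard zero-width axes
        idx = np.minimum((k * (points - lo) / span).astype(int), k - 1)
        cell_number = (idx[:, 0] * k + idx[:, 1]) * k + idx[:, 2]  # unique cell id
        return np.unique(cell_number)                    # occupied, numbered cells

    occupied = voxelize(np.random.rand(1000, 3), k=8)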
In addition, the scheme proposes an implementation in which, after a three-dimensional map is obtained at one position by the above method, three-dimensional maps are acquired at multiple positions: for instance, when the three-dimensional positioning system moves on the trolley, three-dimensional maps are constructed at the different positions along the way, and the three-dimensional maps at the different positions are spliced to obtain a three-dimensional map under motion.
The present disclosure first performs positioning with the single-line lidar, obtains the pose change information of the vision sensors via the pose relationship between the single-line lidar and the vision sensors, and obtains the three-dimensional map by combining this with the visual three-dimensional data collected by the vision sensors. Moreover, the present disclosure captures visual data at different angles with multiple vision sensors and splices the three-dimensional maps generated at the different angles into one three-dimensional map. On this basis, the three-dimensional map is further meshed, which yields a map with better visual quality while also reducing the data volume of the three-dimensional map.
Based on the same inventive concept, this embodiment provides a three-dimensional mapping apparatus. Since the principle by which the apparatus solves the problem is similar to that of the three-dimensional mapping method, its implementation can refer to the implementation of the method, and repeated details are omitted.
Referring to FIG. 5, the three-dimensional mapping apparatus comprises a positioning unit 301, a mapping unit 302, a three-dimensional reconstruction unit 303, and a splicing unit 304:
the positioning unit 301 is configured to perform positioning with a single-line lidar to obtain pose information of the single-line lidar in the world coordinate system;
the mapping unit 302 obtains the second pose relationship between the reference visual coordinate system and the world coordinate system according to the first pose relationship between the single-line lidar coordinate system and the reference visual coordinate system, and converts the unified three-dimensional point cloud data into the world coordinate system according to the second pose relationship;
the three-dimensional reconstruction unit 303 collects multiple sets of three-dimensional point cloud data at different angles with multiple vision sensors;
the splicing unit 304 unifies the multiple sets of three-dimensional point cloud data obtained by the three-dimensional reconstruction unit into the reference visual coordinate system to obtain unified three-dimensional point cloud data, and converts the unified three-dimensional point cloud data into the world coordinate system to obtain a three-dimensional map reconstructed by the vision sensors.
Further, the splicing unit 304 specifically comprises a splicing subunit and a conversion subunit;
the splicing subunit is configured to unify the multiple sets of three-dimensional point cloud data into the reference visual coordinate system according to the pose relationships between the vision sensors, taking one particular vision sensor as the reference, to obtain unified three-dimensional point cloud data;
the conversion subunit is configured to convert the unified three-dimensional point cloud data obtained by the splicing subunit into the world coordinate system to obtain a three-dimensional map reconstructed by the vision sensors.
The positioning unit can adopt either of two positioning implementations; the positioning unit comprises a first positioning unit or a second positioning unit:
the first positioning unit obtains the pose information of the single-line lidar according to the distance information between the single-line lidar and objects, obtained by the single-line lidar scanning objects in a plane point by point, and the geographic information of the single-line lidar;
the second positioning unit dynamically calculates the motion increment of the single-line lidar with an inertial measurement unit, obtains the first pose information of the single-line lidar by single-line lidar positioning, and fuses the first pose information with the motion increment to obtain the pose information of the single-line lidar.
Specifically, the second positioning unit comprises a first calculation subunit, a second calculation subunit, and a fusion subunit;
the first calculation subunit dynamically calculates the motion increment of the single-line lidar with the inertial measurement unit and obtains the first pose information of the single-line lidar by single-line lidar positioning;
the second calculation subunit calculates the first gain matrix of the multi-information fusion filtering for the first pose information and the second gain matrix of the multi-information fusion filtering for the motion increment;
the fusion subunit fuses the first gain matrix and the second gain matrix by multi-information fusion filtering to obtain the pose change information of the single-line lidar.
Preferably, the apparatus further comprises a dynamic splicing unit configured to acquire three-dimensional maps reconstructed at multiple positions and splice them to obtain a three-dimensional map. This approach is suited to three-dimensional reconstruction of objects in motion, building a dynamic three-dimensional map.
Preferably, the apparatus further comprises a meshing unit configured to mesh the three-dimensional map to obtain a gridded three-dimensional map.
The present disclosure first performs positioning with the single-line lidar; by calibrating the pose relationship between the single-line lidar and the vision sensors, the pose information of the vision sensors is obtained, and the three-dimensional map is obtained by combining this with the visual three-dimensional data collected by the vision sensors. Moreover, the present disclosure captures visual data at different angles with multiple vision sensors and splices the three-dimensional maps generated at the different angles into one three-dimensional map. On this basis, the three-dimensional map is further meshed, which yields a map with better visual quality while also reducing the data volume of the three-dimensional map.
In yet another aspect, based on the same inventive concept, an embodiment of the present disclosure provides a cloud platform comprising any of the three-dimensional mapping apparatuses in the above embodiments, storing the three-dimensional maps uploaded by the three-dimensional mapping apparatus.
The cloud platform further comprises a processing device that processes the three-dimensional maps uploaded by the three-dimensional mapping apparatus.
The processing operation includes storing the three-dimensional map: the three-dimensional maps reported by different three-dimensional mapping apparatuses may be stored in different cloud storage spaces, or stored in one cloud storage space, or stored in different cloud storage spaces according to the regions of the reporting three-dimensional mapping apparatuses.
In addition, the processing operation also includes splicing multiple such three-dimensional maps into a larger three-dimensional map; for example, multiple three-dimensional maps reported by different three-dimensional mapping apparatuses within the same area can be combined and spliced to form a three-dimensional map of a larger area.
In yet another aspect, based on the same inventive concept, an embodiment of the present disclosure further provides an electronic device. Since its principle is similar to the three-dimensional mapping method, its implementation can refer to the implementation of the method, and repeated details are omitted. As shown in FIG. 6, the electronic device 600 comprises: a communication device 601, a memory 602, one or more processors 603, and one or more modules stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing the steps of any of the above three-dimensional mapping methods.
The electronic device is a robot.
In yet another aspect, based on the same inventive concept, an embodiment of the present disclosure further provides a computer program product for use in combination with an electronic device, the computer program product comprising a computer program embedded in a computer-readable storage medium, the computer program including instructions for causing the electronic device to perform the steps of any of the above three-dimensional mapping methods.
For convenience of description, the parts of the above apparatus are described separately as various modules divided by function. Of course, when implementing the present application, the functions of the modules or units may be implemented in one or more pieces of software or hardware.
Those skilled in the art should understand that embodiments of the present application can be provided as a method, a system, or a computer program product. Accordingly, the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present application have been described, those skilled in the art, once apprised of the basic inventive concept, may make additional changes and modifications to these embodiments. The appended claims are therefore intended to be construed as covering the preferred embodiments and all changes and modifications falling within the scope of the present application.

Claims (19)

  1. A three-dimensional mapping method, characterized in that the method comprises:
    positioning with a single-line lidar to obtain pose information of the single-line lidar in a world coordinate system, and obtaining a second pose relationship between a reference visual coordinate system and the world coordinate system according to a first pose relationship between the single-line lidar coordinate system and the reference visual coordinate system;
    collecting multiple sets of three-dimensional point cloud data at different angles with multiple vision sensors, and unifying the multiple sets of three-dimensional point cloud data into the reference visual coordinate system to obtain unified three-dimensional point cloud data;
    converting the unified three-dimensional point cloud data into the world coordinate system according to the second pose relationship, to obtain a three-dimensional map reconstructed by the vision sensors.
  2. The method according to claim 1, characterized in that unifying the multiple sets of three-dimensional point cloud data into the reference visual coordinate system specifically comprises:
    unifying the multiple sets of three-dimensional point cloud data into the reference visual coordinate system according to the pose relationships between the multiple vision sensors, taking one particular vision sensor as the reference.
  3. The method according to claim 1, characterized in that positioning with a single-line lidar to obtain pose information of the single-line lidar in the world coordinate system specifically comprises:
    the single-line lidar scanning objects in a plane point by point to obtain distance information between the single-line lidar and the objects;
    obtaining the pose information of the single-line lidar in the world coordinate system according to the distance information and the geographic information of the single-line lidar;
    or,
    dynamically calculating the motion increment of the single-line lidar with an inertial measurement unit, obtaining first pose information of the single-line lidar by single-line lidar positioning, and fusing the first pose information and the motion increment to obtain the pose information of the single-line lidar in the world coordinate system.
  4. The method according to claim 3, characterized in that fusing the first pose information and the motion increment to obtain the pose information of the single-line lidar specifically comprises:
    calculating a first gain matrix of multi-information fusion filtering for the first pose information and a second gain matrix of multi-information fusion filtering for the motion increment, and fusing the first gain matrix and the second gain matrix by multi-information fusion filtering to obtain the pose information of the single-line lidar.
  5. The method according to any one of claims 1 to 4, characterized in that the method further comprises:
    acquiring three-dimensional maps reconstructed at multiple positions;
    splicing the three-dimensional maps reconstructed at the multiple positions to obtain a three-dimensional map.
  6. The method according to any one of claims 1 to 5, characterized in that the method further comprises:
    meshing the three-dimensional map to obtain a gridded three-dimensional map.
  7. A three-dimensional mapping apparatus, characterized in that the apparatus comprises a positioning unit, a mapping unit, a three-dimensional reconstruction unit, and a splicing unit;
    the positioning unit performs positioning with a single-line lidar to obtain pose information of the single-line lidar in a world coordinate system;
    the mapping unit obtains a second pose relationship between a reference visual coordinate system and the world coordinate system according to a first pose relationship between the single-line lidar coordinate system and the reference visual coordinate system, and converts the unified three-dimensional point cloud data into the world coordinate system according to the second pose relationship;
    the three-dimensional reconstruction unit collects multiple sets of three-dimensional point cloud data at different angles with multiple vision sensors;
    the splicing unit unifies the multiple sets of three-dimensional point cloud data obtained by the three-dimensional reconstruction unit into the reference visual coordinate system to obtain unified three-dimensional point cloud data, and converts the unified three-dimensional point cloud data into the world coordinate system to obtain a three-dimensional map reconstructed by the vision sensors.
  8. The apparatus according to claim 7, characterized in that the splicing unit specifically comprises a splicing subunit and a conversion subunit;
    the splicing subunit is configured to unify the multiple sets of three-dimensional point cloud data into the reference visual coordinate system according to the pose relationships between the vision sensors, taking one particular vision sensor as the reference, to obtain unified three-dimensional point cloud data;
    the conversion subunit is configured to convert the unified three-dimensional point cloud data obtained by the splicing subunit into the world coordinate system to obtain a three-dimensional map reconstructed by the vision sensors.
  9. The apparatus according to claim 7, characterized in that the positioning unit specifically comprises a first positioning unit or a second positioning unit;
    the first positioning unit obtains the pose information of the single-line lidar according to the distance information between the single-line lidar and objects, obtained by the single-line lidar scanning objects in a plane point by point, and the geographic information of the single-line lidar;
    the second positioning unit dynamically calculates the motion increment of the single-line lidar with an inertial measurement unit, obtains the first pose information of the single-line lidar by single-line lidar positioning, and fuses the first pose information with the motion increment to obtain the pose information of the single-line lidar.
  10. The apparatus according to claim 9, characterized in that the second positioning unit specifically comprises a first calculation subunit, a second calculation subunit, and a fusion subunit;
    the first calculation subunit dynamically calculates the motion increment of the single-line lidar with the inertial measurement unit and obtains the first pose information of the single-line lidar by single-line lidar positioning;
    the second calculation subunit calculates the first gain matrix of the multi-information fusion filtering for the first pose information and the second gain matrix of the multi-information fusion filtering for the motion increment;
    the fusion subunit fuses the first gain matrix and the second gain matrix by multi-information fusion filtering to obtain the pose information of the single-line lidar.
  11. The apparatus according to any one of claims 7 to 10, characterized in that the apparatus further comprises a dynamic splicing unit configured to acquire three-dimensional maps reconstructed at multiple positions and splice the three-dimensional maps reconstructed at the multiple positions to obtain a three-dimensional map.
  12. The apparatus according to any one of claims 7 to 10, characterized in that the apparatus further comprises a meshing unit configured to mesh the three-dimensional map to obtain a gridded three-dimensional map.
  13. A three-dimensional mapping system, characterized in that the system comprises a single-line lidar and at least four vision sensors;
    the single-line lidar is located at the bottom of the system and is positioned to obtain pose information of the single-line lidar;
    each vision sensor is located above the single-line lidar and points up, down, left, or right respectively; three-dimensional maps are collected at multiple angles, and the three-dimensional maps collected by the vision sensors are spliced to obtain a three-dimensional map.
  14. The system according to claim 13, characterized in that the system specifically comprises four vision sensors pointing up, down, left, and right respectively, forming a square with the vision sensors as vertices.
  15. The system according to claim 13 or 14, characterized in that the system is placed on a moving trolley.
  16. A cloud platform, characterized in that the cloud platform comprises the three-dimensional mapping apparatus according to any one of claims 7 to 12 and stores the three-dimensional maps uploaded by the three-dimensional mapping apparatus.
  17. An electronic device, characterized in that the electronic device comprises:
    a communication device, a memory, one or more processors, and one or more modules stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing the steps of the three-dimensional mapping method according to any one of claims 1 to 6.
  18. The electronic device according to claim 17, characterized in that the electronic device is a robot.
  19. A computer program product for use in combination with an electronic device, the computer program product comprising a computer program embedded in a computer-readable storage medium, the computer program including instructions for causing the electronic device to perform the steps of the three-dimensional mapping method according to any one of claims 1 to 6.
PCT/CN2017/119784 2017-12-29 2017-12-29 Three-dimensional mapping method, apparatus and system, cloud platform, electronic device, and computer program product WO2019127347A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/119784 WO2019127347A1 (zh) 2017-12-29 2017-12-29 Three-dimensional mapping method, apparatus and system, cloud platform, electronic device, and computer program product
CN201780002718.6A CN108337915A (zh) 2017-12-29 2017-12-29 Three-dimensional mapping method, apparatus and system, cloud platform, electronic device, and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/119784 WO2019127347A1 (zh) 2017-12-29 2017-12-29 Three-dimensional mapping method, apparatus and system, cloud platform, electronic device, and computer program product

Publications (1)

Publication Number Publication Date
WO2019127347A1 (zh)

Family

ID=62924296

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/119784 WO2019127347A1 (zh) 2017-12-29 2017-12-29 Three-dimensional mapping method, apparatus and system, cloud platform, electronic device, and computer program product

Country Status (2)

Country Link
CN (1) CN108337915A (zh)
WO (1) WO2019127347A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111076724A (zh) * 2019-12-06 2020-04-28 苏州艾吉威机器人有限公司 三维激光定位方法及系统
CN111721281A (zh) * 2020-05-27 2020-09-29 北京百度网讯科技有限公司 位置识别方法、装置和电子设备
CN112308904A (zh) * 2019-07-29 2021-02-02 北京初速度科技有限公司 一种基于视觉的建图方法、装置及车载终端
CN113269837A (zh) * 2021-04-27 2021-08-17 西安交通大学 一种适用于复杂三维环境的定位导航方法
AT524063A1 (de) * 2020-07-15 2022-02-15 Track Machines Connected Ges M B H Verteilte und offene Datenbank zur dynamischen Erfassung des Eisenbahnstreckennetzes und dessen Gewerke

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019127347A1 (zh) * 2017-12-29 2019-07-04 深圳前海达闼云端智能科技有限公司 三维建图方法、装置、系统、云端平台、电子设备和计算机程序产品
CN110895833A (zh) * 2018-09-13 2020-03-20 北京京东尚科信息技术有限公司 一种室内场景三维建模的方法和装置
CN109297510B (zh) 2018-09-27 2021-01-01 百度在线网络技术(北京)有限公司 相对位姿标定方法、装置、设备及介质
CN111220992B (zh) * 2018-11-26 2022-05-20 长沙智能驾驶研究院有限公司 雷达数据融合方法、装置及系统
CN109540142B (zh) * 2018-11-27 2021-04-06 达闼科技(北京)有限公司 一种机器人定位导航的方法、装置、计算设备
CN111380515B (zh) * 2018-12-29 2023-08-18 纳恩博(常州)科技有限公司 定位方法及装置、存储介质、电子装置
CN109767499A (zh) * 2018-12-29 2019-05-17 北京诺亦腾科技有限公司 基于mr设备的多用户沉浸式交互方法、系统及存储介质
CN111402423B (zh) * 2019-01-02 2023-10-27 中国移动通信有限公司研究院 一种传感器设置方法、装置及服务器
CN111609854A (zh) * 2019-02-25 2020-09-01 北京奇虎科技有限公司 基于多个深度相机的三维地图构建方法及扫地机器人
CN110007300B (zh) * 2019-03-28 2021-08-06 东软睿驰汽车技术(沈阳)有限公司 一种得到点云数据的方法及装置
CN110275179A (zh) * 2019-04-09 2019-09-24 安徽理工大学 一种基于激光雷达以及视觉融合的构建地图方法
CN111982071B (zh) * 2019-05-24 2022-09-27 Tcl科技集团股份有限公司 一种基于tof相机的3d扫描方法及系统
CN110196044A (zh) * 2019-05-28 2019-09-03 广东亿嘉和科技有限公司 一种基于gps闭环检测的变电站巡检机器人建图方法
CN110276834B (zh) * 2019-06-25 2023-04-11 达闼科技(北京)有限公司 一种激光点云地图的构建方法、终端和可读存储介质
CN111007522A (zh) * 2019-12-16 2020-04-14 深圳市三宝创新智能有限公司 一种移动机器人的位置确定系统
CN113497944A (zh) * 2020-03-19 2021-10-12 上海科技大学 多视角三维直播方法、系统、装置、终端和存储介质
CN111489436A (zh) * 2020-04-03 2020-08-04 北京博清科技有限公司 一种焊缝三维重建方法、装置、设备及存储介质
CN111784835B (zh) * 2020-06-28 2024-04-12 北京百度网讯科技有限公司 制图方法、装置、电子设备及可读存储介质
CN111812668B (zh) * 2020-07-16 2023-04-14 南京航空航天大学 绕机检查装置及其定位方法、存储介质
CN114022519A (zh) * 2020-07-28 2022-02-08 清华大学 坐标系转换方法、装置及包含该装置的多源融合slam系统
CN114079866B (zh) * 2020-08-10 2022-12-23 大唐移动通信设备有限公司 一种信号传输方法及设备和装置
CN112379390A (zh) * 2020-11-18 2021-02-19 成都通甲优博科技有限责任公司 基于异源数据的位姿测量方法、装置、系统及电子设备
CN114519765A (zh) * 2020-11-20 2022-05-20 杭州智乎物联科技有限公司 一种三维重建方法、装置、计算机设备和存储介质
CN112884903A (zh) * 2021-03-22 2021-06-01 浙江浙能兴源节能科技有限公司 一种行车三维建模系统及其方法
CN113295175A (zh) * 2021-04-30 2021-08-24 广州小鹏自动驾驶科技有限公司 一种地图数据修正的方法和装置
CN113624223B (zh) * 2021-07-30 2024-05-24 中汽创智科技有限公司 一种室内停车场地图构建方法及装置
CN113610702B (zh) * 2021-08-09 2022-05-06 北京百度网讯科技有限公司 一种建图方法、装置、电子设备及存储介质
CN114994700B (zh) * 2022-05-19 2024-06-11 瑞诺(济南)动力科技有限公司 一种流动机械的定位方法、设备及介质
CN115861423A (zh) * 2022-11-29 2023-03-28 北京天玛智控科技股份有限公司 实景监测方法、装置及其系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140003987A (ko) * 2012-06-25 2014-01-10 서울대학교산학협력단 비젼 센서 정보와 모션 센서 정보를 융합한 모바일 로봇용 slam 시스템
CN105758426A (zh) * 2016-02-19 2016-07-13 深圳杉川科技有限公司 移动机器人的多传感器的联合标定方法
CN106652028A (zh) * 2016-12-28 2017-05-10 深圳乐行天下科技有限公司 一种环境三维建图方法及装置
CN106780629A (zh) * 2016-12-28 2017-05-31 杭州中软安人网络通信股份有限公司 一种三维全景数据采集、建模方法
CN106959691A (zh) * 2017-03-24 2017-07-18 联想(北京)有限公司 可移动电子设备和即时定位与地图构建方法
CN108337915A (zh) * 2017-12-29 2018-07-27 深圳前海达闼云端智能科技有限公司 三维建图方法、装置、系统、云端平台、电子设备和计算机程序产品

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308904A (zh) * 2019-07-29 2021-02-02 北京初速度科技有限公司 一种基于视觉的建图方法、装置及车载终端
CN111076724A (zh) * 2019-12-06 2020-04-28 苏州艾吉威机器人有限公司 三维激光定位方法及系统
CN111076724B (zh) * 2019-12-06 2023-12-22 苏州艾吉威机器人有限公司 三维激光定位方法及系统
CN111721281A (zh) * 2020-05-27 2020-09-29 北京百度网讯科技有限公司 位置识别方法、装置和电子设备
CN111721281B (zh) * 2020-05-27 2022-07-15 阿波罗智联(北京)科技有限公司 位置识别方法、装置和电子设备
AT524063A1 (de) * 2020-07-15 2022-02-15 Track Machines Connected Ges M B H Verteilte und offene Datenbank zur dynamischen Erfassung des Eisenbahnstreckennetzes und dessen Gewerke
CN113269837A (zh) * 2021-04-27 2021-08-17 西安交通大学 一种适用于复杂三维环境的定位导航方法
CN113269837B (zh) * 2021-04-27 2023-08-18 西安交通大学 一种适用于复杂三维环境的定位导航方法

Also Published As

Publication number Publication date
CN108337915A (zh) 2018-07-27

Similar Documents

Publication Publication Date Title
WO2019127347A1 (zh) 三维建图方法、装置、系统、云端平台、电子设备和计算机程序产品
WO2019127445A1 (zh) 三维建图方法、装置、系统、云端平台、电子设备和计算机程序产品
CN106826833B (zh) 基于3d立体感知技术的自主导航机器人系统
CN110189399B (zh) 一种室内三维布局重建的方法及系统
CN109272537B (zh) 一种基于结构光的全景点云配准方法
WO2017114507A1 (zh) 基于射线模型三维重构的图像定位方法以及装置
DK3144881T3 (en) PROCEDURE FOR 3D PANORAMA MOSAIC CREATION OF A SCENE
CN113570721A (zh) 三维空间模型的重建方法、装置和存储介质
JP4511147B2 (ja) 三次元形状生成装置
Sequeira et al. Automated 3D reconstruction of interiors with multiple scan views
Gupta et al. Indoor mapping for smart cities—An affordable approach: Using Kinect Sensor and ZED stereo camera
CN111275750A (zh) 基于多传感器融合的室内空间全景图像生成方法
CN111489392B (zh) 多人环境下单个目标人体运动姿态捕捉方法及系统
Hosseininaveh et al. A low-cost and portable system for 3D reconstruction of texture-less objects
CN114429527A (zh) 基于slam的远程操作场景三维重建方法
Li et al. UAV-based SLAM and 3D reconstruction system
Maki et al. 3d model generation of cattle using multiple depth-maps for ict agriculture
Hou et al. Octree-based approach for real-time 3d indoor mapping using rgb-d video data
Zare et al. Reconstructing 3-D Graphical Model Using an Under-Constrained Cable-Driven Parallel Robot
Bitzidou et al. Multi-camera 3D object reconstruction for industrial automation
CN108151712B (zh) 一种人体三维建模及测量方法和系统
Wu et al. Automated large scale indoor reconstruction using vehicle survey data
Amer et al. Crowd-sourced visual data collection for monitoring indoor construction in 3d
García-Moreno Dynamic Multi-Sensor Platform for Efficient Three-Dimensional-Digitalization of Cities
Wang et al. Acquisition of UAV images and the application in 3D city modeling

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17936399

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.01.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 17936399

Country of ref document: EP

Kind code of ref document: A1