WO2020014941A1 - Method for establishing a map, positioning method, device, terminal, and storage medium - Google Patents

Method for establishing a map, positioning method, device, terminal, and storage medium

Info

Publication number
WO2020014941A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
positioning
different perspectives
map
full
Prior art date
Application number
PCT/CN2018/096374
Other languages
English (en)
French (fr)
Inventor
易万鑫
廉士国
林义闽
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司
Priority to PCT/CN2018/096374 (WO2020014941A1)
Priority to CN201880001095.5A (CN109073398B)
Publication of WO2020014941A1

Links

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 — Map- or contour-matching
    • G01C21/32 — Structuring or formatting of map data

Definitions

  • The present application relates to the field of computer vision, and in particular, to a method for establishing a map, a positioning method, a device, a terminal, and a storage medium.
  • mapping and positioning methods generally complete mapping and positioning based on map information collected from a single perspective.
  • a technical problem to be solved in some embodiments of the present application is how to improve the positioning capability.
  • An embodiment of the present application provides a method for establishing a map, including: obtaining image data of N different perspectives, where N is a positive integer; combining the image data of the N different perspectives into full-view image data; and establishing a full-view map based on the full-view image data.
  • An embodiment of the present application further provides a positioning method, including: obtaining first image data of N different perspectives, where N is a positive integer; and determining a first positioning result based on the first image data of the N different perspectives and a map, where the map includes a full-view map, the full-view map is established based on second image data of M different perspectives, and M is a positive integer.
  • An embodiment of the present application further provides a device for establishing a map, including an acquisition module, a merging module, and a mapping module. The acquisition module is used to acquire image data of N different perspectives, where N is a positive integer; the merging module is used to combine the image data of the N different perspectives into full-view image data; and the mapping module is used to establish a full-view map based on the full-view image data.
  • An embodiment of the present application further provides a positioning device, including an acquiring module and a positioning module. The acquiring module is configured to acquire first image data of N different perspectives, where N is a positive integer; the positioning module is configured to determine a first positioning result based on the first image data of the N different perspectives and a map, where the map includes a full-view map established based on second image data of M different perspectives, and M is a positive integer.
  • An embodiment of the present application further provides a terminal, including at least one processor and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the method for establishing a map mentioned in the foregoing embodiments.
  • An embodiment of the present application further provides a terminal, including at least one processor and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the positioning method mentioned in the foregoing embodiments.
  • An embodiment of the present application further provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the method for establishing a map mentioned in the foregoing embodiments is implemented.
  • An embodiment of the present application further provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the positioning method mentioned in the foregoing embodiments is implemented.
  • the embodiments of the present application establish a full-view map according to image data of different views. Since the full-view map is stored in the terminal, the terminal can perform positioning according to the full-view map even if the perspective of the sensor during positioning of the terminal is different from that of the sensor during mapping. This method solves the problem of positioning failure due to the deviation of the viewing angle or the field of view being blocked, reduces the positioning blind zone of the terminal, and improves the positioning capability of the terminal.
  • FIG. 1 is a flowchart of a method for establishing a map according to a first embodiment of the present application.
  • FIG. 2 is a schematic diagram of sensor distribution in the first embodiment of the present application.
  • FIG. 3 is a flowchart of a method for establishing a map according to a second embodiment of the present application.
  • FIG. 4 is a flowchart of a positioning method according to a third embodiment of the present application.
  • FIG. 5 is a flowchart of a positioning method according to a fourth embodiment of the present application.
  • FIG. 6 is a schematic diagram of a method of combining a method of establishing a map and a positioning method in a fourth embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a device for establishing a map according to a fifth embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of another device for establishing a map according to a fifth embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a positioning device according to a sixth embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of another positioning device according to a sixth embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of a terminal according to a seventh embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of a terminal according to an eighth embodiment of the present application.
  • the first embodiment of the present application relates to a method for establishing a map, which is applied to a terminal or a cloud.
  • the terminal may be a smart robot, a driverless vehicle, a blind navigation device, or the like.
  • the cloud communicates with the terminal, providing the terminal with a map for positioning or providing positioning results directly to the terminal.
  • a terminal is used as an example to describe the execution process of the method for establishing a map.
  • the method for establishing a map includes the following steps:
  • Step 101: Obtain image data of N different perspectives, where N is a positive integer.
  • the terminal collects image data of different perspectives of the surrounding environment of the terminal through one or more sensors.
  • Step 102: Combine the image data of the N different perspectives into full-view image data.
  • the terminal combines image data of N different perspectives through image processing technology to obtain image data of a full perspective.
  • the method for combining image data of N different perspectives into image data of full perspective includes, but is not limited to, the following three methods:
  • Method 1: The terminal determines the similar regions between the image data of the N different perspectives, and merges the image data of the N different perspectives according to these similar regions.
  • In one implementation, a sensor is provided in each of several different orientations on the terminal, and the image data of the N different perspectives each correspond to one sensor.
  • Among the sensors corresponding to the image data of the N different perspectives, every two adjacent sensors share a common field of view. Because of this common field of view, there is a similar region between the image data captured by two adjacent sensors.
  • the terminal combines the image data of the N different perspectives according to the similar regions between the image data of the N different perspectives.
  • In another implementation, a single sensor is installed on the terminal, and the terminal rotates the sensor to obtain image data of N different perspectives. For example, while establishing the map, the terminal acquires image data once every preset distance. Each time it acquires image data, the terminal performs the following operations: it controls the sensor to capture image data at the initial angle; after that shot, the sensor turns to a first preset angle and captures image data there, the sensor's fields of view at the initial angle and at the first preset angle having a common part; after that shot, the sensor turns to a second preset angle and captures image data there, the fields of view at the first and second preset angles again having a common part; and so on, until the sensor turns back to the initial angle.
  • The image data captured by the sensor at each angle together form the image data of the N different perspectives.
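The patent leaves the mechanics of locating the similar regions to generic image processing. Purely as an illustration of the idea behind Method 1, the sketch below stitches toy "images" (lists of column labels) by searching for the overlapping region shared by adjacent views; the column representation and all names are invented for this sketch, not part of the patent:

```python
def overlap(a, b):
    """Length of the longest suffix of view `a` equal to a prefix of view `b`
    (the "similar region" shared by two adjacent sensors)."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def merge_views(views):
    """Merge an ordered list of views (each a list of image columns) into one
    full-view strip, de-duplicating the similar region between neighbours."""
    merged = list(views[0])
    for view in views[1:]:
        k = overlap(merged, view)
        merged.extend(view[k:])   # keep only the part not already covered
    return merged

# Two adjacent "views" whose sensors share a common field of view:
left  = ["c0", "c1", "c2", "c3"]
right = ["c2", "c3", "c4", "c5"]   # columns c2, c3 are the similar region
print(merge_views([left, right]))  # ['c0', 'c1', 'c2', 'c3', 'c4', 'c5']
```

A real system would match feature points between overlapping photographs rather than comparing columns literally, but the de-duplicate-and-concatenate structure is the same.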
  • Method 2: The terminal acquires a pre-built merge model and merges the image data of the N different perspectives according to the merge model.
  • In this implementation, a sensor is provided in each of several different orientations on the terminal, and the image data of the N different perspectives each correspond to one sensor.
  • the merging model is used to indicate the merging order of the image data of N different perspectives, and the merging order of the image data of N different perspectives is determined according to the arrangement order of the sensors corresponding to the image data of N different perspectives.
  • For example, five sensors are installed on the terminal; the sensor distribution is shown in FIG. 2.
  • the image data of all sensors are merged in a clockwise or counterclockwise direction according to the arrangement order of the sensors.
  • Method 3: Combine Method 1 and Method 2. Specifically, the terminal obtains a pre-established merge model and determines the merging order of the image data of the N different perspectives according to the merge model. The terminal arranges the image data of the N different perspectives in this merging order, and then merges them according to the similar region between each two adjacent image data in the arranged sequence.
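Method 3's two stages, ordering by the merge model and then joining neighbours on their similar regions, can be sketched together. This is a toy illustration only; the sensor names, the column representation of images, and the overlap search are all assumptions of the sketch, not the patent's implementation:

```python
def merge_in_model_order(images_by_sensor, merge_model):
    """Method 3 sketch: the merge model fixes the stitching order (e.g. the
    clockwise arrangement of the sensors); neighbouring images are then joined
    on their shared similar region, represented here as equal column labels."""
    ordered = [images_by_sensor[s] for s in merge_model]
    full = list(ordered[0])
    for view in ordered[1:]:
        # similar region: longest suffix of `full` matching a prefix of `view`
        k = next((n for n in range(min(len(full), len(view)), 0, -1)
                  if full[-n:] == view[:n]), 0)
        full.extend(view[k:])
    return full

images = {  # toy columns per sensor, stored in no particular order
    "sensor2": ["c2", "c3", "c4"],
    "sensor1": ["c0", "c1", "c2"],
    "sensor3": ["c4", "c5", "c0"],
}
model = ["sensor1", "sensor2", "sensor3"]  # clockwise arrangement of sensors
print(merge_in_model_order(images, model))
```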
  • Step 103: Establish a full-view map based on the full-view image data.
  • Specifically, based on the full-view image data, the terminal builds the full-view map using Visual Simultaneous Localization and Mapping (VSLAM) technology, for example ORB-SLAM (Oriented FAST and Rotated BRIEF based Simultaneous Localization and Mapping).
  • the map building method provided in this embodiment establishes a full-view map according to image data of different views. Since the full-view map is stored in the terminal, the terminal can perform positioning according to the full-view map even if the perspective of the sensor during positioning of the terminal is different from that of the sensor during mapping. This method solves the problem of positioning failure due to the deviation of the viewing angle or the field of view being blocked, reduces the positioning blind zone of the terminal, and improves the positioning capability of the terminal.
  • the second embodiment of the present application relates to a method for establishing a map.
  • This embodiment is a further improvement on the first embodiment.
  • the specific improvement is that other related steps are added after step 103.
  • this embodiment includes steps 201 to 204.
  • Steps 201 to 203 are substantially the same as steps 101 to 103 in the first embodiment, respectively, and will not be described in detail here. The differences are mainly introduced below:
  • Step 204: Create N single-view maps according to the image data of the N different perspectives.
  • N sensors are installed on the terminal.
  • For example, the terminal uses VSLAM technology, such as ORB-SLAM, to create a single-view map from the image data captured by each sensor.
  • In this embodiment, a single-view map is created for the image data of each perspective, so that the terminal can perform positioning based on a single-view map after positioning with the full-view map fails, further improving the terminal's positioning capability.
  • In this embodiment, step 204 is described as a step following steps 202 and 203. Those skilled in the art can understand that, in actual applications, step 204 can also precede step 202 and step 203; this embodiment does not limit the execution order of steps 202, 203, and 204.
  • The method for establishing a map provided in this embodiment, after establishing a full-view map based on the image data of N different perspectives, creates a separate single-view map for the image data of each perspective, so that the terminal can perform positioning based on a single-view map after positioning with the full-view map fails, further improving the positioning capability of the terminal.
  • the third embodiment of the present application relates to a positioning method, which is applied to a terminal or a cloud.
  • the terminal may be a smart robot, a driverless vehicle, a blind navigation device, or the like.
  • the cloud communicates with the terminal to provide positioning results for the terminal.
  • This embodiment uses a terminal as an example to describe the execution process of the positioning method. For the process of executing the positioning method in the cloud, reference may be made to the content of the embodiment of the present application. As shown in FIG. 4, the positioning method includes the following steps:
  • Step 301: Obtain first image data of N different perspectives, where N is a positive integer.
  • multiple sensors are installed on the terminal.
  • the terminal controls multiple sensors to acquire the first image data simultaneously, or the terminal controls multiple sensors to acquire the first image data in sequence.
  • a sensor is installed on the terminal.
  • In this case, the terminal controls the sensor to capture one piece of first image data and, after the capture, rotates it by a preset angle in a preset direction to capture first image data again, until first image data of N different perspectives are obtained.
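The rotate-and-capture loop described above can be sketched as follows; the `shoot` callable standing in for the sensor driver and the evenly spaced preset angles are assumptions of this sketch, not details given by the patent:

```python
def capture_n_perspectives(shoot, n_views, step_deg=None):
    """Sketch of the single-sensor case: capture at the current angle, rotate
    by a preset angle, and repeat until the sensor has swept back to its
    initial angle. `shoot(angle)` is an invented stand-in for the sensor API."""
    step = step_deg if step_deg is not None else 360 // n_views
    frames = []
    angle = 0
    for _ in range(n_views):
        frames.append((angle, shoot(angle)))  # image data at this perspective
        angle = (angle + step) % 360          # turn to the next preset angle
    return frames

# Toy "sensor": the captured image is just a string naming the angle.
frames = capture_n_perspectives(lambda a: f"img@{a}", n_views=4)
print(frames)  # [(0, 'img@0'), (90, 'img@90'), (180, 'img@180'), (270, 'img@270')]
```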
  • Step 302: Determine a first positioning result according to the first image data of the N different perspectives and a map.
  • the map includes a full-view map, and the full-view map is established based on the second image data of M different views, where M is a positive integer.
  • the terminal matches the first image data of N different perspectives with the map, and determines the first positioning result according to the matching result.
  • The positioning method provided in this embodiment matches the acquired image data of different perspectives with a full-view map, so that the terminal can use any one of the perspectives for positioning, which reduces the terminal's positioning blind zone. Because a full-view map is stored in the terminal, the terminal can perform positioning based on the full-view map even if the sensor's perspective during positioning differs from its perspective during mapping; this solves the problem of positioning failure caused by viewing-angle deviation or occlusion of the field of view, and improves the positioning capability of the terminal.
  • the fourth embodiment of the present application relates to a positioning method.
  • This embodiment is a further refinement of the third embodiment, and specifically describes step 302.
  • this embodiment includes steps 401 to 406.
  • Step 401 is substantially the same as step 301 in the third embodiment and will not be described in detail here. The following mainly describes the differences:
  • Step 402: Match the first image data of the N different perspectives with the full-view map, and determine a second positioning result according to the matching results of the first image data of the N different perspectives with the full-view map.
  • Specifically, the terminal determines whether, among the matching results of the first image data of the N different perspectives with the full-view map, there is a matching result indicating successful positioning. If so, the terminal determines that the second positioning result indicates successful positioning, and takes the pose data in the matching result indicating successful positioning as the pose data of the second positioning result.
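Assuming each per-view matching result is either a pose or a failure, step 402's decision rule might look like the minimal sketch below; the flat pose tuple and the dict-shaped result are invented stand-ins for real pose data:

```python
def second_positioning_result(match_results):
    """Sketch of step 402: `match_results` holds one entry per perspective,
    each either None (the view did not match the full-view map) or a pose.
    The second positioning result succeeds as soon as any perspective matched,
    and adopts that match's pose data."""
    for pose in match_results:
        if pose is not None:
            return {"success": True, "pose": pose}
    return {"success": False, "pose": None}

# Perspectives 1 and 2 failed to match; perspective 3 matched the full-view map:
print(second_positioning_result([None, None, (1.0, 2.0, 90.0)]))
```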
  • Step 403: Determine whether the second positioning result indicates that the positioning is successful.
  • If so, step 404 is performed; otherwise, step 405 is performed.
  • Step 404: Determine the first positioning result according to the second positioning result. The process then ends.
  • the terminal uses the pose data included in the second positioning result as the pose data in the first positioning result.
  • Step 405: Match the first image data of the N different perspectives with the M single-view maps, and determine a third positioning result according to the matching results of the first image data of the N different perspectives with the M single-view maps.
  • the map also includes M single-perspective maps, and the M single-perspective maps are separately established according to the second image data of M different perspectives.
  • the method for determining the third positioning result is described below as an example.
  • Method A: For each piece of first image data, the terminal performs the following operations: matching the first image data with the M single-view maps respectively, and determining a fourth positioning result corresponding to the first image data according to the matching results.
  • the fourth positioning result indicates that the positioning is successful or the positioning fails.
  • the terminal determines the third positioning result according to the fourth positioning results corresponding to the first image data of the N different perspectives, respectively.
  • Method B: The terminal determines the correspondence between the first image data of the N different perspectives and the M single-view maps, and performs the following operations for each piece of first image data: matching the first image data with its corresponding single-view map, and determining a fourth positioning result corresponding to the first image data according to the matching result.
  • the fourth positioning result indicates that the positioning is successful or the positioning fails.
  • the terminal determines the third positioning result according to the fourth positioning results corresponding to the first image data of the N different perspectives, respectively.
  • Specifically, the terminal determines the third positioning result from the fourth positioning results corresponding to the first image data of the N different perspectives as follows. The terminal determines whether, among these fourth positioning results, there is a fourth positioning result indicating successful positioning. If so, the terminal determines the pose data included in each fourth positioning result indicating successful positioning and computes the pose data of the third positioning result from the pose data of all fourth positioning results indicating successful positioning; otherwise, the terminal determines that the third positioning result indicates that positioning has failed.
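A minimal sketch of this aggregation, assuming each fourth positioning result is either pose data or a failure marker. The patent only says the pose data of the successful results are used to compute the third result; combining them by simple averaging, like the tuple pose format, is an assumption of this sketch:

```python
def third_positioning_result(fourth_results):
    """Sketch of the aggregation in step 405: keep the pose data of every
    fourth positioning result that indicates success (non-None) and combine
    them; if none succeeded, the third positioning result is a failure."""
    poses = [p for p in fourth_results if p is not None]
    if not poses:
        return {"success": False, "pose": None}
    dims = len(poses[0])
    # Combine the successful poses; plain averaging is this sketch's choice.
    avg = tuple(sum(p[i] for p in poses) / len(poses) for i in range(dims))
    return {"success": True, "pose": avg}

# Views 1 and 4 matched their single-view maps, views 2 and 3 did not:
print(third_positioning_result([(1.0, 2.0), None, None, (3.0, 4.0)]))  # pose (2.0, 3.0)
```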
  • Step 406: Determine the first positioning result according to the third positioning result.
  • The positioning method provided in this embodiment uses single-view maps for positioning after positioning with the full-view map fails, which improves the positioning capability of the terminal; determining the final positioning result from the positioning results of multiple single-view maps improves the accuracy of terminal positioning.
  • A schematic diagram of a method combining the method for establishing a map and the positioning method is shown in FIG. 6.
  • the terminal is provided with 5 sensors (Sensor 1, Sensor 2, Sensor 3, Sensor 4 and Sensor 5), and each sensor has a different perspective.
  • During mapping, the terminal acquires the second image data captured by each sensor.
  • the terminal establishes a full-view map according to the second image data captured by the sensors 1 to 5.
  • The terminal establishes single-view map 1 based on the second image data captured by sensor 1, single-view map 2 based on the second image data captured by sensor 2, single-view map 3 based on the second image data captured by sensor 3, single-view map 4 based on the second image data captured by sensor 4, and single-view map 5 based on the second image data captured by sensor 5.
  • the terminal acquires five first image data captured by the five sensors, and first matches the five first image data with the full-view map, respectively, and determines a matching result corresponding to each first image data. If the terminal determines that there is a matching result indicating that the positioning is successful, it determines a first positioning result according to the matching result.
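The full fallback flow of FIG. 6 — try every perspective against the full-view map first, then fall back to the single-view maps — can be sketched end to end. The `match_full_view` and `match_single_view` callables are invented stand-ins for the real map-matching routines, and averaging the single-view hits is this sketch's assumption:

```python
def locate(first_images, match_full_view, match_single_view):
    """End-to-end sketch of the positioning flow: full-view map first, then
    the per-sensor single-view maps, then failure (None)."""
    for img in first_images:                      # step 402: full-view map first
        pose = match_full_view(img)
        if pose is not None:
            return pose
    hits = [match_single_view(i, img) for i, img in enumerate(first_images)]
    hits = [p for p in hits if p is not None]     # step 405: single-view fallback
    if hits:
        # Combine all successful single-view poses (averaged in this sketch).
        return tuple(sum(p[k] for p in hits) / len(hits) for k in range(len(hits[0])))
    return None                                   # positioning failed

# Toy matchers: the full-view map recognises nothing, single-view map 2 matches.
pose = locate(["v1", "v2", "v3"],
              match_full_view=lambda img: None,
              match_single_view=lambda i, img: (5.0, 5.0) if i == 1 else None)
print(pose)  # (5.0, 5.0)
```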
  • a fifth embodiment of the present application relates to a device for building a map.
  • the device includes an obtaining module 501, a merging module 502, and a mapping module 503.
  • The obtaining module 501 is configured to obtain image data of N different perspectives, where N is a positive integer;
  • the merging module 502 is configured to combine the image data of the N different perspectives into full-view image data;
  • the mapping module 503 is configured to establish a full-view map based on the full-view image data.
  • A schematic structural diagram of another device for establishing a map is shown in FIG. 8.
  • the device for establishing a map further includes N sensors 504, and the N sensors 504 are used to acquire image data of different perspectives.
  • this embodiment is a system embodiment corresponding to the first embodiment, and this embodiment can be implemented in cooperation with the first embodiment.
  • the related technical details mentioned in the first embodiment are still valid in this embodiment. To reduce repetition, details are not described here. Accordingly, the related technical details mentioned in this embodiment can also be applied in the first embodiment.
  • the sixth embodiment of the present application relates to a positioning device.
  • The positioning device includes an obtaining module 601 and a positioning module 602. The obtaining module 601 is configured to obtain first image data of N different perspectives, where N is a positive integer.
  • The positioning module 602 is configured to determine a first positioning result according to the first image data of the N different perspectives and a map, where the map includes a full-view map established based on second image data of M different perspectives, and M is a positive integer.
  • A schematic structural diagram of another positioning device is shown in FIG. 10.
  • the positioning device further includes a single-view map loading module 603 and a full-view map loading module 604.
  • the single-view map loading module 603 is used to load the single-view maps respectively established according to the second image data of M different views, and the full-view map loading module 604 is used to load the full-view maps.
  • this embodiment is a system embodiment corresponding to the third embodiment, and this embodiment can be implemented in cooperation with the third embodiment.
  • the related technical details mentioned in the third embodiment are still valid in this embodiment. To reduce repetition, details are not described here. Correspondingly, the related technical details mentioned in this embodiment can also be applied in the third embodiment.
  • each module involved in the fifth embodiment and the sixth embodiment is a logic module.
  • a logical unit may be a physical unit or a part of a physical unit. It can also be implemented as a combination of multiple physical units.
  • This embodiment does not introduce any unit that is not closely related to solving the technical problem proposed by the present application, but this does not mean that no other units exist in this embodiment.
  • a seventh embodiment of the present application relates to a terminal.
  • the terminal includes at least one processor 701 and a memory 702 communicatively connected to the at least one processor 701.
  • the memory 702 stores instructions that can be executed by the at least one processor 701, and the instructions are executed by the at least one processor 701, so that the at least one processor 701 can execute the foregoing method of establishing a map.
  • An eighth embodiment of the present application relates to a terminal, as shown in FIG. 12, including at least one processor 801; and a memory 802 communicatively connected to the at least one processor 801.
  • the memory 802 stores instructions executable by the at least one processor 801, and the instructions are executed by the at least one processor 801, so that the at least one processor 801 can execute the positioning method.
  • The processor is exemplified by a central processing unit (CPU), and the memory is exemplified by a random access memory (RAM).
  • the processor and the memory may be connected through a bus or in other manners. In FIG. 11 and FIG. 12, connection through a bus is taken as an example.
  • the memory can be used to store non-volatile software programs, non-volatile computer executable programs, and modules. As shown in the embodiment of the present application, the full-view map is stored in the memory.
  • the processor executes various functional applications and data processing of the device by running non-volatile software programs, instructions, and modules stored in the memory, that is, the above-mentioned method for establishing a map and the positioning method are implemented.
  • the memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function; the storage data area may store a list of options and the like.
  • the memory may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
  • the memory may optionally include a memory remotely set with respect to the processor, and these remote memories may be connected to an external device through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • One or more modules are stored in the memory, and when executed by one or more processors, execute the method for establishing a map and the positioning method in any of the foregoing method embodiments.
  • a ninth embodiment of the present application relates to a computer-readable storage medium storing a computer program.
  • the computer program is executed by the processor, the method for building a map described in any of the above method embodiments is implemented.
  • a tenth embodiment of the present application relates to a computer-readable storage medium storing a computer program.
  • the computer program is executed by the processor, the positioning method described in any of the above method embodiments is implemented.
  • The program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application.
  • The aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

Some embodiments of the present application provide a method for establishing a map, a positioning method, a device, a terminal, and a storage medium. The method for establishing a map is applied to a terminal or a cloud and includes the following steps: obtaining image data of N different perspectives, where N is a positive integer; combining the image data of the N different perspectives into full-view image data; and establishing a full-view map based on the full-view image data.

Description

一种建立地图的方法、定位方法、装置、终端及存储介质 技术领域
本申请涉及计算机视觉领域,尤其涉及一种建立地图的方法、定位方法、装置、终端及存储介质。
背景技术
智能机器人或无人驾驶车辆等设备在未知的环境中需要实时的建图和定位来感知周围的环境,只有成功建图、定位才能为智能机器人或无人驾驶车辆等设备的导航以及其他功能提供保障。目前,建图和定位方法一般是根据单个视角采集的地图信息来完成建图和定位的。
技术问题
发明人在研究现有技术过程中发现,单个视角下的定位有着很多的局限性。由于建图中所用的单个传感器设备(例如摄像头)的视野角度有限,获取的地图信息有限,导致智能机器人等设备只能在建图的视角下进行定位。一旦智能机器人等设备偏离原先视角或者原先视角上发生了视野遮挡,就会导致定位失败,轻则影响定位效果和用户体验,重则危及到他人生命安全。
可见,如何提高定位能力,是需要解决的问题。
Technical Solution
A technical problem to be solved by some embodiments of the present application is how to improve positioning capability.
An embodiment of the present application provides a method for establishing a map, including: acquiring image data of N different perspectives, where N is a positive integer; combining the image data of the N different perspectives into full-perspective image data; and establishing a full-perspective map according to the full-perspective image data.
An embodiment of the present application further provides a positioning method, including: acquiring first image data of N different perspectives, where N is a positive integer; and determining a first positioning result according to the first image data of the N different perspectives and a map, where the map includes a full-perspective map, the full-perspective map is established according to second image data of M different perspectives, and M is a positive integer.
An embodiment of the present application further provides an apparatus for establishing a map, including an acquisition module, a merging module, and a mapping module. The acquisition module is configured to acquire image data of N different perspectives, where N is a positive integer; the merging module is configured to combine the image data of the N different perspectives into full-perspective image data; and the mapping module is configured to establish a full-perspective map according to the full-perspective image data.
An embodiment of the present application further provides a positioning apparatus, including an acquisition module and a positioning module. The acquisition module is configured to acquire first image data of N different perspectives, where N is a positive integer; the positioning module is configured to determine a first positioning result according to the first image data of the N different perspectives and a map, where the map includes a full-perspective map, the full-perspective map is established according to second image data of M different perspectives, and M is a positive integer.
An embodiment of the present application further provides a terminal, including at least one processor and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method for establishing a map mentioned in the above embodiments.
An embodiment of the present application further provides a terminal, including at least one processor and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the positioning method mentioned in the above embodiments.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the method for establishing a map mentioned in the above embodiments.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the positioning method mentioned in the above embodiments.
Beneficial Effects
Compared with the prior art, the embodiments of the present application establish a full-perspective map according to image data of different perspectives. Because the terminal stores the full-perspective map, the terminal can perform positioning according to the full-perspective map even when the perspective of the sensor during positioning differs from the perspective of the sensor during mapping. This method solves the problem of positioning failure caused by perspective deviation or occlusion of the field of view, reduces the positioning blind area of the terminal, and improves the positioning capability of the terminal.
Brief Description of the Drawings
One or more embodiments are exemplarily described with reference to the corresponding figures in the accompanying drawings; these exemplary descriptions do not constitute a limitation on the embodiments. Elements with the same reference numerals in the drawings denote similar elements, and the figures in the drawings do not constitute a scale limitation unless otherwise stated.
FIG. 1 is a flowchart of a method for establishing a map according to a first embodiment of the present application;
FIG. 2 is a schematic diagram of sensor distribution according to the first embodiment of the present application;
FIG. 3 is a flowchart of a method for establishing a map according to a second embodiment of the present application;
FIG. 4 is a flowchart of a positioning method according to a third embodiment of the present application;
FIG. 5 is a flowchart of a positioning method according to a fourth embodiment of the present application;
FIG. 6 is a schematic diagram of a method that combines the method for establishing a map and the positioning method in the fourth embodiment of the present application;
FIG. 7 is a schematic structural diagram of an apparatus for establishing a map according to a fifth embodiment of the present application;
FIG. 8 is a schematic structural diagram of another apparatus for establishing a map according to the fifth embodiment of the present application;
FIG. 9 is a schematic structural diagram of a positioning apparatus according to a sixth embodiment of the present application;
FIG. 10 is a schematic structural diagram of another positioning apparatus according to the sixth embodiment of the present application;
FIG. 11 is a schematic structural diagram of a terminal according to a seventh embodiment of the present application;
FIG. 12 is a schematic structural diagram of a terminal according to an eighth embodiment of the present application.
Detailed Description of the Embodiments
To make the objectives, technical solutions, and advantages of the present application clearer, some embodiments of the present application are described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely intended to explain the present application and are not intended to limit it.
The first embodiment of the present application relates to a method for establishing a map, applied to a terminal or a cloud. The terminal may be an intelligent robot, an unmanned vehicle, a navigation device for the blind, or the like. The cloud is communicatively connected to the terminal and provides the terminal with a map for positioning, or directly provides the terminal with a positioning result. This embodiment takes the terminal as an example to describe how the method for establishing a map is executed; for the process in which the cloud executes this method, reference may be made to the content of the embodiments of the present application. As shown in FIG. 1, the method for establishing a map includes the following steps:
Step 101: Acquire image data of N different perspectives, where N is a positive integer.
Specifically, the terminal collects image data of different perspectives of its surrounding environment through one or more sensors.
Step 102: Combine the image data of the N different perspectives into full-perspective image data.
Specifically, the terminal merges the image data of the N different perspectives through image processing techniques to obtain full-perspective image data. Methods for merging the image data of the N different perspectives into full-perspective image data include, but are not limited to, the following three:
Method 1: The terminal determines similar regions between the image data of the N different perspectives, and merges the image data of the N different perspectives according to those similar regions.
In one implementation, one sensor is disposed in each of several different orientations of the terminal, and each of the N pieces of image data of different perspectives corresponds to one sensor. Among these sensors, every two adjacent sensors share a common field of view, so the image data captured by two adjacent sensors contain a similar region. The terminal merges the image data of the N different perspectives according to the similar regions between them.
In another implementation, a single sensor is mounted on the terminal, and the terminal controls the sensor to rotate in order to acquire image data of N different perspectives. For example, during map establishment the terminal acquires image data once every preset distance. Each time it acquires image data, the terminal performs the following operations: it controls the sensor to capture image data at an initial angle and, after the capture, rotates the sensor to a first preset angle, at which the sensor captures image data, where the sensor at the initial angle and the sensor at the first preset angle share a common field of view; after that capture, the sensor rotates to a second preset angle and captures image data there, where the sensor at the first preset angle and the sensor at the second preset angle share a common field of view; and so on, until the sensor rotates back to the initial angle. The image data captured by the sensor at each angle yield the image data of the N different perspectives.
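The overlap-based merging of Method 1 can be pictured with a toy sketch. This is an illustration only, not the patent's implementation: images are lists of pixel rows, and exact column equality stands in for the real similar-region detection between adjacent views.

```python
def find_overlap(left_img, right_img, max_overlap):
    """Widest column overlap where the right edge of left_img equals the
    left edge of right_img (a stand-in for similar-region detection;
    real images would use feature matching instead of exact equality)."""
    for w in range(max_overlap, 0, -1):
        if all(row[-w:] == other[:w] for row, other in zip(left_img, right_img)):
            return w
    return 0

def merge_views(images, max_overlap=8):
    """Merge adjacent views into one full-perspective image, keeping
    each shared region only once."""
    panorama = [row[:] for row in images[0]]  # copy the first view
    for img in images[1:]:
        w = find_overlap(panorama, img, max_overlap)
        for row, extra in zip(panorama, img):
            row.extend(extra[w:])  # append only the non-shared columns
    return panorama
```

With three single-row "views" sharing two columns each, the merge yields one continuous row, mirroring how adjacent common fields of view knit the N perspectives together.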
Method 2: The terminal acquires a pre-built merge model and merges the image data of the N different perspectives according to the merge model.
In one implementation, one sensor is disposed in each of several different orientations of the terminal, and each of the N pieces of image data of different perspectives corresponds to one sensor. The merge model indicates the merging order of the image data of the N different perspectives, and that merging order is determined according to the arrangement order of the sensors corresponding to the image data. For example, five sensors are mounted on the terminal, with the sensor distribution shown in FIG. 2. When merging the image data, the image data of all sensors are merged in the clockwise or counterclockwise direction according to the arrangement order of the sensors.
Method 3: Combine Method 1 and Method 2. Specifically, the terminal acquires the pre-built merge model and determines the merging order of the image data of the N different perspectives according to the merge model. The terminal arranges the image data of the N different perspectives according to that merging order, and then merges them according to the similar regions between every two adjacent pieces of image data in the arranged sequence.
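The ordering step shared by Methods 2 and 3 can be sketched minimally; the sensor identifiers and their clockwise order below are illustrative assumptions, not taken from FIG. 2.

```python
# Hypothetical merge model: sensor ids listed in their clockwise
# mounting order on the terminal (names are illustrative).
MERGE_MODEL = ["s1", "s2", "s3", "s4", "s5"]

def order_views(captured):
    """Arrange the captured views in the merging order given by the
    pre-built merge model (Method 2); the ordered list can then be
    stitched pairwise by similar regions (Method 3)."""
    return [captured[sensor_id] for sensor_id in MERGE_MODEL]
```

The point of the model is that the merging order is fixed by the physical sensor arrangement, so it never has to be rediscovered from the image content.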
Step 103: Establish a full-perspective map according to the full-perspective image data.
Specifically, the terminal establishes the full-perspective map based on the full-perspective image data through a Visual Simultaneous Localization And Mapping (VSLAM) technique, for example, the Oriented FAST and Rotated BRIEF Simultaneous Localization And Mapping (ORB_SLAM) technique.
It should be noted that, as those skilled in the art can understand, in practical applications the full-perspective map may also be created through other mapping techniques; this embodiment does not limit the method of creating a map from the full-perspective data.
Compared with the prior art, the method for establishing a map provided in this embodiment establishes a full-perspective map according to image data of different perspectives. Because the terminal stores the full-perspective map, the terminal can perform positioning according to the full-perspective map even when the perspective of the sensor during positioning differs from the perspective of the sensor during mapping. This method solves the problem of positioning failure caused by perspective deviation or occlusion of the field of view, reduces the positioning blind area of the terminal, and improves the positioning capability of the terminal.
The second embodiment of the present application relates to a method for establishing a map. This embodiment is a further improvement on the first embodiment; the specific improvement is that other related steps are added after step 103.
As shown in FIG. 3, this embodiment includes steps 201 to 204. Steps 201 to 203 are substantially the same as steps 101 to 103 in the first embodiment and are not described in detail here; the differences are mainly introduced below:
Perform steps 201 to 203.
Step 204: Establish N single-perspective maps respectively according to the image data of the N different perspectives.
In one implementation, N sensors are mounted on the terminal. After the full-perspective map is established, the terminal creates a single-perspective map from the image data captured by each sensor using a VSLAM technique, for example, the ORB_SLAM technique.
It is worth mentioning that, after the full-perspective map is created, a single-perspective map is created for the image data of each perspective, so that the terminal can perform positioning according to a single-perspective map after positioning with the full-perspective map fails, further improving the positioning capability of the terminal.
It should be noted that, in this embodiment, for clarity of exposition, step 204 is presented as a step following steps 202 and 203. Those skilled in the art can understand that, in practical applications, step 204 may also precede steps 202 and 203; this embodiment does not limit the execution order of steps 202 and 203 relative to step 204.
Compared with the prior art, in the method for establishing a map provided in this embodiment, after the full-perspective map is established according to the image data of the N different perspectives, a single-perspective map is created for the image data of each perspective, so that the terminal can perform positioning according to a single-perspective map after positioning with the full-perspective map fails, further improving the positioning capability of the terminal.
The third embodiment of the present application relates to a positioning method, applied to a terminal or a cloud. The terminal may be an intelligent robot, an unmanned vehicle, a navigation device for the blind, or the like. The cloud is communicatively connected to the terminal and provides the terminal with positioning results. This embodiment takes the terminal as an example to describe how the positioning method is executed; for the process in which the cloud executes the positioning method, reference may be made to the content of the embodiments of the present application. As shown in FIG. 4, the positioning method includes the following steps:
Step 301: Acquire first image data of N different perspectives, where N is a positive integer.
In one implementation, multiple sensors are mounted on the terminal. The terminal controls the multiple sensors to acquire first image data simultaneously, or controls them to acquire first image data in sequence.
In another implementation, a single sensor is mounted on the terminal. The terminal controls the sensor to capture one piece of first image data and, after capturing it, to rotate in a preset direction by a preset angle and capture first image data again, until first image data of N different perspectives are obtained.
Step 302: Determine a first positioning result according to the first image data of the N different perspectives and a map.
Specifically, the map includes a full-perspective map, the full-perspective map is established according to second image data of M different perspectives, and M is a positive integer. The terminal matches each of the first image data of the N different perspectives against the map and determines the first positioning result according to the matching results.
For example, the sensor arrangement of the terminal during positioning is the same as the sensor arrangement of the terminal during map establishment, that is, N = M.
Compared with the prior art, the positioning method provided in this embodiment matches the acquired image data of different perspectives against the full-perspective map, so that the terminal can perform positioning from any perspective, reducing the positioning blind area of the terminal. Because the terminal stores the full-perspective map, the terminal can perform positioning according to the full-perspective map even when the perspective of the sensor during positioning differs from the perspective of the sensor during mapping, solving the problem of positioning failure caused by perspective deviation or occlusion of the field of view and improving the positioning capability of the terminal.
The fourth embodiment of the present application relates to a positioning method. This embodiment is a further refinement of the third embodiment and specifically describes step 302.
As shown in FIG. 5, this embodiment includes steps 401 to 406. Step 401 is substantially the same as step 301 in the third embodiment and is not described in detail here; the differences are mainly introduced below:
Step 402: Match each of the first image data of the N different perspectives against the full-perspective map, and determine a second positioning result according to the matching results of the first image data of the N different perspectives against the full-perspective map.
In one implementation, the terminal judges whether there is a matching result indicating successful positioning among the matching results of the first image data of the N different perspectives against the full-perspective map. If it determines that there is, the terminal determines that the second positioning result indicates successful positioning, and determines the pose data in the second positioning result according to the pose data in the matching result indicating successful positioning.
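Step 402's success test can be sketched as follows; `full_map_match` is a hypothetical placeholder for the actual map-matching routine, which the patent does not specify.

```python
def match_full_view(first_images, full_map_match):
    """Step 402 sketch: match each view against the full-perspective
    map; positioning succeeds if any view matches, and the pose of the
    first successful match becomes the second positioning result.
    full_map_match(img) is a placeholder returning a pose or None."""
    for img in first_images:
        pose = full_map_match(img)
        if pose is not None:
            return (True, pose)   # second result: success + pose data
    return (False, None)          # no view matched the full-view map
```

Because the full-perspective map covers every direction, a match from any single view is enough for success, which is what removes the blind area of single-perspective positioning.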
Step 403: Judge whether the second positioning result indicates successful positioning.
Specifically, if the terminal determines that the second positioning result indicates successful positioning, step 404 is performed; otherwise, step 405 is performed.
Step 404: Determine the first positioning result according to the second positioning result, and then end the procedure.
Specifically, the terminal takes the pose data contained in the second positioning result as the pose data in the first positioning result.
Step 405: Match the first image data of the N different perspectives against M single-perspective maps, and determine a third positioning result according to the matching results of the first image data of the N different perspectives against the M single-perspective maps.
Specifically, the map further includes M single-perspective maps, and the M single-perspective maps are respectively established according to the second image data of the M different perspectives.
Methods of determining the third positioning result are illustrated below by example.
Method A: For each piece of first image data, the terminal performs the following operations: matching the first image data against each of the M single-perspective maps, and determining, according to the matching results, a fourth positioning result corresponding to the first image data, where the fourth positioning result indicates successful or failed positioning. The terminal then determines the third positioning result according to the fourth positioning results corresponding to the first image data of the N different perspectives.
Method B: The terminal determines the correspondence between the first image data of the N different perspectives and the M single-perspective maps, and for each piece of first image data performs the following operations: matching the first image data against the single-perspective map corresponding to it, and determining, according to the matching result, a fourth positioning result corresponding to the first image data, where the fourth positioning result indicates successful or failed positioning. The terminal then determines the third positioning result according to the fourth positioning results corresponding to the first image data of the N different perspectives.
In one implementation, the terminal determines the third positioning result according to the fourth positioning results corresponding to the first image data of the N different perspectives as follows: the terminal judges whether there is a fourth positioning result indicating successful positioning among the fourth positioning results corresponding to the first image data of the N different perspectives; if it determines that there is, the terminal determines the pose data contained in each fourth positioning result indicating successful positioning, calculates the average of the pose data in all fourth positioning results indicating successful positioning, and takes the average as the third positioning result; if it determines that there is not, the terminal determines that the third positioning result indicates failed positioning.
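The averaging rule just described can be sketched in a few lines; the `(x, y, yaw)` pose layout is an assumption for illustration, and plain averaging of a yaw angle ignores wrap-around, which a real system would have to handle.

```python
def third_positioning_result(fourth_results):
    """Combine per-view fourth positioning results: average the pose
    data of the successful ones, or report failure when none succeed.
    Each result is (success, pose) with pose an (x, y, yaw) tuple
    (an assumed layout; naive yaw averaging is a simplification)."""
    poses = [pose for ok, pose in fourth_results if ok]
    if not poses:
        return (False, None)  # every single-view match failed
    n = len(poses)
    # component-wise mean of all successful poses
    return (True, tuple(sum(c) / n for c in zip(*poses)))
```

Averaging over the successful single-view matches is what the embodiment credits with improving accuracy: no single view's estimate dominates the final result.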
Step 406: Determine the first positioning result according to the third positioning result.
Compared with the prior art, the positioning method provided in this embodiment performs positioning using the single-perspective maps after positioning with the full-perspective map fails, improving the positioning capability of the terminal, and determines the final positioning result according to the positioning results of multiple single-perspective maps, improving the accuracy of the terminal's positioning.
It should be noted that, as those skilled in the art can understand, in practical applications the method for establishing a map and the positioning method mentioned in the embodiments of the present application may be used in combination. In one implementation, a schematic diagram of combining the two methods is shown in FIG. 6.
The mapping and positioning process of the terminal is described below with reference to a practical scenario. Five sensors (sensor 1, sensor 2, sensor 3, sensor 4, and sensor 5) are disposed on the terminal, each with a different perspective. During mapping, the terminal acquires the second image data captured by the sensors. The terminal establishes one full-perspective map according to the second image data captured by sensors 1 to 5, and establishes single-perspective map 1 from the second image data captured by sensor 1, single-perspective map 2 from sensor 2, single-perspective map 3 from sensor 3, single-perspective map 4 from sensor 4, and single-perspective map 5 from sensor 5. During positioning, the terminal acquires the five pieces of first image data captured by the five sensors and first matches each of them against the full-perspective map to determine the matching result corresponding to each piece of first image data. If the terminal determines that there is a matching result indicating successful positioning, it determines the first positioning result according to that matching result. If the terminal determines that there is no matching result indicating successful positioning, it matches the first image data captured by sensor i against single-perspective map i and determines, according to the matching result, the fourth positioning result corresponding to the first image data captured by sensor i, where i = 1, 2, 3, 4, 5. If the terminal determines that there is a fourth positioning result indicating successful positioning among the fourth positioning results corresponding to all the first image data, it determines the first positioning result according to the fourth positioning results indicating successful positioning; otherwise, it determines that the first positioning result indicates failed positioning.
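The two-stage flow of this five-sensor scenario can be condensed into a short sketch; `match_full` and `match_single` are hypothetical placeholder matchers (the patent does not define them), and the fallback averages the successful single-view poses as in the fourth embodiment.

```python
def localize(first_images, match_full, match_single):
    """Two-stage localization from the scenario above: first try every
    view against the full-perspective map; on failure, match sensor i's
    image against single-perspective map i and average the successful
    poses. match_full(img) and match_single(i, img) are placeholders
    returning a pose tuple or None."""
    # Stage 1: any view matching the full-perspective map is enough.
    for img in first_images:
        pose = match_full(img)
        if pose is not None:
            return ("full", pose)
    # Stage 2: per-sensor fallback against the single-perspective maps.
    poses = [p for p in (match_single(i, img)
                         for i, img in enumerate(first_images))
             if p is not None]
    if poses:
        n = len(poses)
        return ("single", tuple(sum(c) / n for c in zip(*poses)))
    return ("failed", None)
```

The stage labels make the source of each result visible, which can help when debugging why a terminal fell back to its single-perspective maps.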
The fifth embodiment of the present application relates to an apparatus for establishing a map. As shown in FIG. 7, the apparatus includes an acquisition module 501, a merging module 502, and a mapping module 503. The acquisition module 501 is configured to acquire image data of N different perspectives, where N is a positive integer; the merging module 502 is configured to combine the image data of the N different perspectives into full-perspective image data; and the mapping module 503 is configured to establish a full-perspective map according to the full-perspective image data.
In one implementation, a schematic structural diagram of another apparatus for establishing a map is shown in FIG. 8. This apparatus further includes N sensors 504, which are configured to acquire image data of different perspectives.
It is readily apparent that this embodiment is a system embodiment corresponding to the first embodiment, and this embodiment may be implemented in cooperation with the first embodiment. The related technical details mentioned in the first embodiment remain valid in this embodiment and, to reduce duplication, are not repeated here. Correspondingly, the related technical details mentioned in this embodiment may also be applied in the first embodiment.
The sixth embodiment of the present application relates to a positioning apparatus. As shown in FIG. 9, the apparatus includes an acquisition module 601 and a positioning module 602. The acquisition module 601 is configured to acquire first image data of N different perspectives, where N is a positive integer. The positioning module 602 is configured to determine a first positioning result according to the first image data of the N different perspectives and a map, where the map includes a full-perspective map, the full-perspective map is established according to second image data of M different perspectives, and M is a positive integer.
In one implementation, a schematic structural diagram of another positioning apparatus is shown in FIG. 10. This positioning apparatus further includes a single-perspective map loading module 603 and a full-perspective map loading module 604. The single-perspective map loading module 603 is configured to load the single-perspective maps respectively established according to the second image data of the M different perspectives, and the full-perspective map loading module 604 is configured to load the full-perspective map.
It is readily apparent that this embodiment is a system embodiment corresponding to the third embodiment, and this embodiment may be implemented in cooperation with the third embodiment. The related technical details mentioned in the third embodiment remain valid in this embodiment and, to reduce duplication, are not repeated here. Correspondingly, the related technical details mentioned in this embodiment may also be applied in the third embodiment.
It is worth mentioning that the modules involved in the fifth and sixth embodiments are all logical modules. In practical applications, a logical unit may be a physical unit, part of a physical unit, or a combination of multiple physical units. In addition, to highlight the innovative parts of the present invention, units not closely related to solving the technical problem raised by the present invention are not introduced in this implementation, but this does not mean that no other units exist in this implementation.
The seventh embodiment of the present application relates to a terminal. As shown in FIG. 11, the terminal includes at least one processor 701 and a memory 702 communicatively connected to the at least one processor 701. The memory 702 stores instructions executable by the at least one processor 701, and the instructions are executed by the at least one processor 701 to enable the at least one processor 701 to perform the above method for establishing a map.
The eighth embodiment of the present application relates to a terminal. As shown in FIG. 12, the terminal includes at least one processor 801 and a memory 802 communicatively connected to the at least one processor 801. The memory 802 stores instructions executable by the at least one processor 801, and the instructions are executed by the at least one processor 801 to enable the at least one processor 801 to perform the above positioning method.
In the seventh and eighth embodiments, the processor is exemplified by a central processing unit (CPU), and the memory by a random access memory (RAM). The processor and the memory may be connected by a bus or in other ways; FIGS. 11 and 12 take a bus connection as an example. As a non-volatile computer-readable storage medium, the memory may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules; for example, the full-perspective map in the embodiments of the present application is stored in the memory. By running the non-volatile software programs, instructions, and modules stored in the memory, the processor executes various functional applications and data processing of the device, that is, implements the above method for establishing a map and positioning method.
The memory may include a program storage area and a data storage area, where the program storage area may store an operating system and an application required by at least one function, and the data storage area may store an option list and the like. In addition, the memory may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory may optionally include memories remotely disposed relative to the processor, and these remote memories may be connected to an external device through a network. Examples of the network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
One or more modules are stored in the memory and, when executed by one or more processors, perform the method for establishing a map and the positioning method in any of the above method embodiments.
The above products can execute the methods provided in the embodiments of the present application and have the corresponding functional modules and beneficial effects for executing the methods. For technical details not described in detail in this embodiment, reference may be made to the methods provided in the embodiments of the present application.
The ninth embodiment of the present application relates to a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the method for establishing a map described in any of the above method embodiments is implemented.
The tenth embodiment of the present application relates to a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the positioning method described in any of the above method embodiments is implemented.
That is, those skilled in the art can understand that all or part of the steps in the methods of the above embodiments may be completed by instructing the relevant hardware through a program. The program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and other media capable of storing program code.
Those of ordinary skill in the art can understand that the above embodiments are specific embodiments for implementing the present application, and in practical applications various changes may be made to them in form and detail without departing from the spirit and scope of the present application.

Claims (16)

  1. A method for establishing a map, comprising:
    acquiring image data of N different perspectives, wherein N is a positive integer;
    combining the image data of the N different perspectives into full-perspective image data; and
    establishing a full-perspective map according to the full-perspective image data.
  2. The method for establishing a map according to claim 1, wherein after the acquiring image data of N different perspectives, the method for establishing a map further comprises:
    establishing N single-perspective maps respectively according to the image data of the N different perspectives.
  3. The method for establishing a map according to claim 1 or 2, wherein each of the image data of the N different perspectives corresponds to one sensor, and among the sensors corresponding to the image data of the N different perspectives, two adjacent sensors share a common field of view; and
    the combining the image data of the N different perspectives into full-perspective image data specifically comprises:
    determining similar regions between the image data of the N different perspectives; and
    merging the image data of the N different perspectives according to the similar regions between the image data of the N different perspectives.
  4. The method for establishing a map according to claim 1 or 2, wherein each of the image data of the N different perspectives corresponds to one sensor; and
    the combining the image data of the N different perspectives into full-perspective image data specifically comprises:
    acquiring a pre-built merge model, wherein the merge model indicates a merging order of the image data of the N different perspectives, and the merging order of the image data of the N different perspectives is determined according to an arrangement order of the sensors corresponding to the image data of the N different perspectives; and
    merging the image data of the N different perspectives according to the merge model.
  5. The method for establishing a map according to claim 1 or 2, wherein each of the image data of the N different perspectives corresponds to one sensor, and among the sensors corresponding to the image data of the N different perspectives, two adjacent sensors share a common field of view; and
    the combining the image data of the N different perspectives into full-perspective image data specifically comprises:
    acquiring a pre-built merge model, wherein the merge model indicates a merging order of the image data of the N different perspectives, and the merging order of the image data of the N different perspectives is determined according to an arrangement order of the sensors corresponding to the image data of the N different perspectives;
    determining the merging order of the image data of the N different perspectives according to the merge model;
    arranging the image data of the N different perspectives according to the merging order of the image data of the N different perspectives; and
    merging the image data of the N different perspectives according to similar regions between every two adjacent pieces of image data in the arranged image data of the N different perspectives.
  6. A positioning method, comprising:
    acquiring first image data of N different perspectives, wherein N is a positive integer; and
    determining a first positioning result according to the first image data of the N different perspectives and a map, wherein the map comprises a full-perspective map, the full-perspective map is established according to second image data of M different perspectives, and M is a positive integer.
  7. The positioning method according to claim 6, wherein the map further comprises M single-perspective maps, and the M single-perspective maps are respectively established according to the second image data of the M different perspectives; and
    the determining a first positioning result according to the first image data of the N different perspectives and a map specifically comprises:
    matching each of the first image data of the N different perspectives against the full-perspective map, and determining a second positioning result according to matching results of the first image data of the N different perspectives against the full-perspective map;
    judging whether the second positioning result indicates successful positioning;
    if it is determined to be so, determining the first positioning result according to the second positioning result; and
    if it is determined not to be so, matching the first image data of the N different perspectives against the M single-perspective maps, determining a third positioning result according to matching results of the first image data of the N different perspectives against the M single-perspective maps, and determining the first positioning result according to the third positioning result.
  8. The positioning method according to claim 7, wherein the matching the first image data of the N different perspectives against the M single-perspective maps, and determining a third positioning result according to matching results of the first image data of the N different perspectives against the M single-perspective maps, specifically comprises:
    for each piece of first image data, respectively performing the following operations: matching the first image data against each of the M single-perspective maps; and determining, according to the matching results, a fourth positioning result corresponding to the first image data, wherein the fourth positioning result indicates successful positioning or failed positioning; and
    determining the third positioning result according to the fourth positioning results respectively corresponding to the first image data of the N different perspectives.
  9. The positioning method according to claim 7, wherein the matching the first image data of the N different perspectives against the M single-perspective maps, and determining a third positioning result according to matching results of the first image data of the N different perspectives against the M single-perspective maps, specifically comprises:
    determining a correspondence between the first image data of the N different perspectives and the M single-perspective maps;
    for each piece of first image data, respectively performing the following operations: matching the first image data against the single-perspective map corresponding to the first image data; and determining, according to the matching result, a fourth positioning result corresponding to the first image data, wherein the fourth positioning result indicates successful positioning or failed positioning; and
    determining the third positioning result according to the fourth positioning results respectively corresponding to the first image data of the N different perspectives.
  10. The positioning method according to claim 8 or 9, wherein the determining the third positioning result according to the fourth positioning results respectively corresponding to the first image data of the N different perspectives specifically comprises:
    judging whether there is a fourth positioning result indicating successful positioning among the fourth positioning results respectively corresponding to the first image data of the N different perspectives;
    if it is determined that there is, determining pose data contained in each fourth positioning result indicating successful positioning, calculating an average of the pose data in all fourth positioning results indicating successful positioning, and taking the average as the third positioning result; and
    if it is determined that there is not, determining that the third positioning result indicates failed positioning.
  11. An apparatus for establishing a map, comprising: an acquisition module, a merging module, and a mapping module;
    the acquisition module being configured to acquire image data of N different perspectives, wherein N is a positive integer;
    the merging module being configured to combine the image data of the N different perspectives into full-perspective image data; and
    the mapping module being configured to establish a full-perspective map according to the full-perspective image data.
  12. A positioning apparatus, comprising: an acquisition module and a positioning module;
    the acquisition module being configured to acquire first image data of N different perspectives, wherein N is a positive integer; and
    the positioning module being configured to determine a first positioning result according to the first image data of the N different perspectives and a map, wherein the map comprises a full-perspective map, the full-perspective map is established according to second image data of M different perspectives, and M is a positive integer.
  13. A terminal, comprising: at least one processor; and
    a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method for establishing a map according to any one of claims 1 to 5.
  14. A terminal, comprising: at least one processor; and
    a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the positioning method according to any one of claims 6 to 10.
  15. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method for establishing a map according to any one of claims 1 to 5.
  16. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the positioning method according to any one of claims 6 to 10.
PCT/CN2018/096374 2018-07-20 2018-07-20 一种建立地图的方法、定位方法、装置、终端及存储介质 WO2020014941A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/096374 WO2020014941A1 (zh) 2018-07-20 2018-07-20 一种建立地图的方法、定位方法、装置、终端及存储介质
CN201880001095.5A CN109073398B (zh) 2018-07-20 2018-07-20 一种建立地图的方法、定位方法、装置、终端及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/096374 WO2020014941A1 (zh) 2018-07-20 2018-07-20 一种建立地图的方法、定位方法、装置、终端及存储介质

Publications (1)

Publication Number Publication Date
WO2020014941A1 true WO2020014941A1 (zh) 2020-01-23

Family

ID=64789237

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/096374 WO2020014941A1 (zh) 2018-07-20 2018-07-20 一种建立地图的方法、定位方法、装置、终端及存储介质

Country Status (2)

Country Link
CN (1) CN109073398B (zh)
WO (1) WO2020014941A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109965797B (zh) * 2019-03-07 2021-08-24 深圳市愚公科技有限公司 扫地机器人地图的生成方法、扫地机器人控制方法及终端
CN110415174B (zh) * 2019-07-31 2023-07-07 达闼科技(北京)有限公司 地图融合方法、电子设备及存储介质
CN114683270A (zh) * 2020-12-30 2022-07-01 深圳乐动机器人有限公司 一种基于机器人的构图信息采集方法及机器人系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160012588A1 (en) * 2014-07-14 2016-01-14 Mitsubishi Electric Research Laboratories, Inc. Method for Calibrating Cameras with Non-Overlapping Views
CN106251399A (zh) * 2016-08-30 2016-12-21 广州市绯影信息科技有限公司 一种基于lsd‑slam的实景三维重建方法
CN106443687A (zh) * 2016-08-31 2017-02-22 欧思徕(北京)智能科技有限公司 一种基于激光雷达和全景相机的背负式移动测绘系统
CN107223244A (zh) * 2016-12-02 2017-09-29 深圳前海达闼云端智能科技有限公司 定位方法和装置
CN108053473A (zh) * 2017-12-29 2018-05-18 北京领航视觉科技有限公司 一种室内三维模型数据的处理方法
CN109074676A (zh) * 2018-07-03 2018-12-21 深圳前海达闼云端智能科技有限公司 建立地图的方法、定位方法、终端及计算机可读存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103123727B (zh) * 2011-11-21 2015-12-09 联想(北京)有限公司 即时定位与地图构建方法和设备
CN103247225B (zh) * 2012-02-13 2015-04-29 联想(北京)有限公司 即时定位与地图构建方法和设备
CN103389103B (zh) * 2013-07-03 2015-11-18 北京理工大学 一种基于数据挖掘的地理环境特征地图构建与导航方法
JP6457648B2 (ja) * 2015-01-27 2019-01-23 ノキア テクノロジーズ オサケユイチア 位置特定およびマッピングの方法
DE102015004923A1 (de) * 2015-04-17 2015-12-03 Daimler Ag Verfahren zur Selbstlokalisation eines Fahrzeugs
CN107301654B (zh) * 2017-06-12 2020-04-03 西北工业大学 一种多传感器的高精度即时定位与建图方法
CN107885871A (zh) * 2017-11-24 2018-04-06 南京华捷艾米软件科技有限公司 基于云计算的同步定位与地图构建方法、系统、交互系统

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160012588A1 (en) * 2014-07-14 2016-01-14 Mitsubishi Electric Research Laboratories, Inc. Method for Calibrating Cameras with Non-Overlapping Views
CN106251399A (zh) * 2016-08-30 2016-12-21 广州市绯影信息科技有限公司 一种基于lsd‑slam的实景三维重建方法
CN106443687A (zh) * 2016-08-31 2017-02-22 欧思徕(北京)智能科技有限公司 一种基于激光雷达和全景相机的背负式移动测绘系统
CN107223244A (zh) * 2016-12-02 2017-09-29 深圳前海达闼云端智能科技有限公司 定位方法和装置
CN108053473A (zh) * 2017-12-29 2018-05-18 北京领航视觉科技有限公司 一种室内三维模型数据的处理方法
CN109074676A (zh) * 2018-07-03 2018-12-21 深圳前海达闼云端智能科技有限公司 建立地图的方法、定位方法、终端及计算机可读存储介质

Also Published As

Publication number Publication date
CN109073398A (zh) 2018-12-21
CN109073398B (zh) 2022-04-08

Similar Documents

Publication Publication Date Title
WO2020168668A1 (zh) 一种车辆的slam建图方法及系统
JP6775263B2 (ja) 測位方法及び装置
US20200394445A1 (en) Method, apparatus, device and medium for calibrating pose relationship between vehicle sensor and vehicle
Heng et al. Self-calibration and visual slam with a multi-camera system on a micro aerial vehicle
US20190204084A1 (en) Binocular vision localization method, device and system
KR102347239B1 (ko) 라이다와 카메라를 이용하여 이미지 특징점의 깊이 정보를 향상시키는 방법 및 시스템
WO2018077306A1 (zh) 一种避障跟随方法和电子设备、存储介质
KR102367361B1 (ko) 위치 측정 및 동시 지도화 방법 및 장치
WO2020014941A1 (zh) 一种建立地图的方法、定位方法、装置、终端及存储介质
WO2019119328A1 (zh) 一种基于视觉的定位方法及飞行器
WO2021143286A1 (zh) 车辆定位的方法、装置、控制器、智能车和系统
Seok et al. Rovo: Robust omnidirectional visual odometry for wide-baseline wide-fov camera systems
CN106908052B (zh) 用于智能机器人的路径规划方法及装置
JP2020132155A (ja) 運転参照経路の処理方法、装置、車両、及びプログラム
CN107690650B (zh) 用于将3d场景重构为3d模型的方法
WO2020019115A1 (zh) 融合建图方法、相关装置及计算机可读存储介质
JP2018519696A5 (zh)
CN111754579A (zh) 多目相机外参确定方法及装置
JP7138361B2 (ja) 3次元仮想空間モデルを利用したユーザポーズ推定方法および装置
WO2019119455A1 (zh) 一种云台校准方法及云台设备
WO2020019117A1 (zh) 一种定位方法及装置、电子设备和可读存储介质
WO2020114433A1 (zh) 一种深度感知方法,装置和深度感知设备
CN111915681B (zh) 多组3d相机群的外参标定方法、装置、存储介质及设备
WO2021026748A1 (zh) 拍摄检测方法、装置、云台、系统及存储介质
US10935375B2 (en) Portable 3D document scanning device and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18926671

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14/05/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18926671

Country of ref document: EP

Kind code of ref document: A1