WO2020019117A1 - Positioning method and device, electronic device, and readable storage medium - Google Patents
Positioning method and device, electronic device, and readable storage medium
- Publication number
- WO2020019117A1 (application PCT/CN2018/096663, CN2018096663W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- map
- real
- time image
- positioning
- position information
- Prior art date
Links
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
Definitions
- the present application relates to the field of computer vision technology, and in particular, to a positioning method and device, an electronic device, and a readable storage medium.
- Visual simultaneous localization and mapping (vSLAM) means that an intelligent device such as a robot starts moving from an unknown location in an unknown environment, acquires image information through a camera during the movement, localizes itself based on position estimation and the map, and simultaneously builds the map on the basis of its own localization, thereby achieving autonomous localization and navigation.
- for this reason, vSLAM is considered a key technology for realizing autonomous movement of robots or autonomous driving of driverless vehicles.
- when vSLAM is used for robot or pedestrian navigation, an environment view is mainly collected by the camera and processed accordingly; feature points in the environment view are extracted and matched against known map prior information to obtain position information.
- the known map prior information mainly refers to the map information established in advance by vSLAM.
- the process of building map information with vSLAM is susceptible to the surrounding environment: if the feature points and texture information in the environment are sufficiently rich, mapping can proceed continuously and a single continuous piece of map data is obtained; if the camera moves violently, the ambient lighting changes greatly, or the feature points are sparse, vSLAM map building is "interrupted", and the finally acquired map prior information includes multiple segments of vSLAM map data.
- in addition, as the map is continuously expanded and updated, the positioning system also comes to contain multiple pieces of vSLAM map information.
- a technical problem to be solved in some embodiments of the present application is to provide a positioning method and device, an electronic device, and a readable storage medium, which are used to solve the problem that the map occupies too much memory during multi-segment map positioning.
- An embodiment of the present application provides a positioning method, including: acquiring a real-time image for positioning; and performing positioning according to the acquired real-time image and a first map to determine position information of the real-time image;
- wherein the first map is a segment of the N segment maps determined in the previous positioning.
- An embodiment of the present application further provides a positioning device, including an obtaining module and a matching module;
- the obtaining module is configured to obtain a real-time image for positioning, and the matching module is configured to perform positioning according to the obtained real-time image and a first map to determine the position information of the real-time image;
- wherein the first map is a segment of the N segment maps determined in the previous positioning.
- An embodiment of the present application further provides an electronic device, including: at least one processor; and,
- a memory communicatively connected to the at least one processor; wherein
- the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can perform the positioning method described above.
- An embodiment of the present application further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the positioning method described above.
- after the real-time image for positioning is acquired, positioning the real-time image directly according to the first map makes it possible to determine the position information of the real-time image directly and quickly; and because the first map is the segment determined from the N segment maps in the previous positioning, the problem of high storage-unit occupancy when positioning against all N segment maps is avoided, the occupancy of storage units in the system is reduced, and the amount of computation in the positioning process is reduced as well.
- FIG. 1 is a flowchart of a positioning method in a first embodiment of the present application
- FIG. 2 is a flowchart of a positioning method in a second embodiment of the present application.
- FIG. 3 is a schematic structural diagram of a positioning device in a third embodiment of the present application.
- FIG. 4 is a schematic structural diagram of an electronic device in a fourth embodiment of the present application.
- the first embodiment of the present application relates to a positioning method, which can be applied to a terminal or a cloud.
- the terminal may be a device such as an unmanned vehicle, a blind guide device, or a sweeping robot.
- the cloud communicates with the terminal to provide a map for the terminal to locate or directly provide the terminal with the positioning result.
- This embodiment uses a terminal as an example to describe the execution process of the positioning method. For the process of executing the positioning method in the cloud, reference may be made to the content of the embodiment of the present application. The specific process is shown in Figure 1, and includes the following steps:
- Step 101 Acquire a real-time image for positioning.
- the real-time image in this embodiment may be acquired through a camera or other image sensors, and is not specifically limited herein.
- in one implementation, the positioning method is used for positioning and navigation, and the real-time image may be an environment image, so that positioning is performed according to the environment image.
- for example, the positioning method is applied to a blind-guide device for positioning, and the current position information is determined by obtaining environment information through a camera.
- if the positioning method is applied to a robot, the real-time image is an image obtained through the robot's vision, or a real-time image obtained after the robot's visual processing, so that the current position information can be determined from the real-time image.
- Step 102 Perform positioning according to the acquired real-time image and the first map, and determine position information of the real-time image.
- the first map is a segment of the N segment maps determined in the previous positioning.
- because the first map was already determined in the previous positioning, the first map needs to be determined before the real-time image for positioning is acquired. A specific implementation process is: obtaining an initial image; matching the initial image with each of the N segment maps, where N is a positive integer greater than 1; determining the map in the N segment maps that matches the initial image and releasing the maps in the N segment maps that do not match the initial image; and determining the matched map as the first map.
- releasing the maps that do not match the initial image reduces the amount of storage the maps occupy in the storage unit.
- the above process of determining the first map is only an exemplary description and does not specifically limit this embodiment. It can be understood that the obtained real-time image is positioned according to the first map determined the previous time, which ensures that not all N segment maps need to be loaded for each positioning, reduces the occupancy of storage units in the system, and achieves the purpose of reducing the amount of positioning computation.
- since N segment maps exist when the initial image is positioned, the maps can be numbered in advance. After the first map is determined, the number of the first map and the position information of the initial image in the first map are obtained; for example, the number of the first map is determined during continuous-frame image positioning, and during real-time image positioning the first map is extracted by its map number for positioning.
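- As a minimal Python sketch of this numbering-and-release idea (the MapRegistry name and the loader callback are illustrative assumptions, not terms from the patent), the following keeps at most one numbered segment resident in memory:

```python
from typing import Any, Callable, Dict, Optional

class MapRegistry:
    """Keeps the N map segments on disk, numbered in advance, and holds at
    most one of them in memory at a time."""

    def __init__(self, map_paths: Dict[int, str], loader: Callable[[str], Any]):
        self.map_paths = map_paths        # segment number -> file path
        self.loader = loader              # e.g. a vSLAM map deserializer
        self.current_number: Optional[int] = None
        self.current_map: Any = None

    def get(self, number: int) -> Any:
        """Load the requested segment lazily; the previously resident
        segment is released (left to garbage collection)."""
        if number != self.current_number:
            self.current_map = self.loader(self.map_paths[number])
            self.current_number = number
        return self.current_map
```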
- it should be noted that the initial image is the real-time image acquired when the device to which the positioning method is applied starts positioning or navigation; it is not the first frame of real-time images in a numerical sense.
- the initial image is traversal-matched against the N segment maps to determine the first map.
- the traversal matching here is the process of loading all N segment maps and matching and positioning the initial image against them.
- the first map determined after the matching is complete contains the position information of the initial image; if necessary, the position information of the initial image may be output after the first map is determined.
- one implementation of step 102 is: extracting feature points of the acquired real-time image and feature points in the first map; and determining the position information of the real-time image according to the feature points of the real-time image and the feature points in the first map. If vSLAM technology is used for positioning, the feature points in the first map are prior information of the vSLAM map. Techniques for extracting feature points in the real-time image and in the first map are relatively mature and are not repeated here.
- the first map includes key frames, and the key frames correspond to position information on the first map.
- the key frame corresponding to the real-time image is determined by matching the feature points of the real-time image with the feature points of the first map, and the position information of the real-time image is then determined from the position information corresponding to that key frame.
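- For illustration, a minimal sketch of this extract-match-look-up chain using OpenCV ORB features; the (descriptors, pose) keyframe layout and the thresholds are assumptions, not the patent's:

```python
import cv2

def locate_in_map(frame_gray, keyframes, min_matches=30):
    """Match a live frame against the first map's key frames and return the
    pose (x, y, theta) stored with the best-matching key frame, or None when
    the match fails. `keyframes` is a list of (descriptors, pose) pairs
    prepared offline from the map."""
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, desc = orb.detectAndCompute(frame_gray, None)
    if desc is None:                       # too few features extracted
        return None
    best_pose, best_count = None, 0
    for kf_desc, kf_pose in keyframes:
        matches = matcher.match(desc, kf_desc)
        good = [m for m in matches if m.distance < 40]   # Hamming cutoff
        if len(good) > best_count:
            best_pose, best_count = kf_pose, len(good)
    return best_pose if best_count >= min_matches else None
```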
- after the position information of the real-time image is determined through feature-point matching, the position information of the real-time image is recorded and saved. It should be noted that when the first map is determined according to the initial image, the position information of the initial image may also be recorded and saved.
- for example, the positioning method is applied to a sweeping robot: while the sweeping robot is working, after the first map is determined, the acquired real-time images are positioned, the motion trajectory of the sweeping robot is determined from continuous frames of real-time images, and the working range of the sweeping robot is determined.
- the positioning method may also be applied to other intelligent robots or household robots; this is only an example and is not specifically limiting.
- after acquiring the real-time image for positioning, this embodiment positions the real-time image directly according to the first map and can determine the position information of the real-time image directly and quickly; and because the first map is the segment determined from the N segment maps in the previous positioning, the problem of high storage-unit occupancy when positioning against all N segment maps is avoided, the occupancy of storage units in the system is reduced, and the amount of positioning computation is reduced as well.
- the second embodiment of the present application relates to a positioning method.
- this embodiment is substantially the same as the first embodiment; the main difference is that the second embodiment specifically describes an implementation of determining the position information of a real-time image according to feature points, the flow of which is shown in FIG. 2. It can be understood that the specific implementation of determining the position information of the real-time image is not limited to the manner described below, which is merely an example.
- Step 201 is the same as step 101, and details are not described herein again.
- Step 202 Extract feature points of the acquired real-time image and feature points in the first map.
- Step 203 Match the feature points of the real-time image with the feature points in the first map, and obtain a matching result.
- Step 204 Determine whether the matching result indicates that the real-time image matches the first map; if yes, go to step 205; otherwise, go to step 206.
- Step 205 Determine location information of the real-time image in the first map.
- Step 206 Obtain data information of the auxiliary sensor, determine a second map in the N-segment map according to the data information of the auxiliary sensor, and determine position information of the real-time image in the second map.
- Step 207 Record and save the position information of the real-time image.
- in step 206, the position information of the real-time image is determined according to the data information of the auxiliary sensor. When the number of feature points in the acquired real-time image is small, the feature points may fail to match those in the map; because the position information is recorded and saved after each real-time image is positioned, the position information of the current real-time image then needs to be predicted from the position information of the previous frame of real-time image.
- for a real-time image with too few feature points, or one whose feature points fail to match for other reasons, a specific implementation of determining the relative position information of the failed real-time image in the first map is: inferring the position information of the current real-time image from the position information of the previous frame of real-time image and the data information of the auxiliary sensor.
- when a real-time image has few feature points, the probability of feature-point matching failure is high. Therefore, after the real-time image is obtained, the number of feature points in the real-time image can first be used to decide whether to determine the position information directly from the auxiliary sensor, with the position information of the real-time image otherwise determined through feature-point matching; this is only an example and is not specifically limiting.
- the auxiliary sensors include, but are not limited to, a distance sensor and a direction sensor. This makes it possible to determine the position information of the real-time image according to the data information of the auxiliary sensor when the feature points of the extracted real-time image fail to match or when there are few feature points.
- the position information includes a coordinate position and direction information, specifically expressed as (x, y, θ_v0), where (x, y) corresponds to the coordinate position of the real-time image on the first map and θ_v0 represents the orientation angle, on the first map, of the device to which the positioning method is applied when the real-time image is acquired.
- when determining the position information of the real-time image according to the auxiliary sensor, the position information of the previous frame of real-time image must also be obtained, and the position information of the current real-time image is computed from that known position information. Because the angle value determined by the vision sensor deviates somewhat from the angle value obtained by other angle sensors, the direction information of the real-time image needs to be corrected first. The specific conversion is expressed by Equation 1 and Equation 2:

  θ_d = θ_v0 − θ_i0 (1)

  θ_c = θ_i + θ_d (2)

- where θ_c represents the corrected angle value of the direction information of the current real-time image; θ_d represents the deviation angle of the direction angle in the position information of the previous frame of real-time image; θ_v0 represents the angle value of the vision-sensor device when the previous frame of real-time image was acquired; θ_i0 represents the direction angle of the angle sensor at the previous frame of real-time image; and θ_i represents the angle value of the angle sensor when the current real-time image is acquired.
- the conversion that determines the position information from the data information of the auxiliary sensor is performed through Equation 3 and Equation 4:

  x′ = x + s·d·cos(θ_c) (3)

  y′ = y + s·d·sin(θ_c) (4)

- where (x, y) in Equations 3 and 4 represents the coordinate position in the known position information of the previous frame of real-time image; s represents the scale of the first map, that is, the distance value represented by each pixel in the first map; d represents the step size, that is, the distance value obtained by the distance sensor; and θ_c has the same meaning as above.
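- As a worked illustration with assumed numbers (not taken from the patent): if the previous frame's vision-derived heading was θ_v0 = 30° while the angle sensor then read θ_i0 = 25°, Equation 1 gives the deviation θ_d = 5°; if the angle sensor now reads θ_i = 85°, Equation 2 gives the corrected heading θ_c = 90°. With the previous coordinate (x, y) = (100, 200), scale s = 0.05, and measured step d = 2, Equations 3 and 4 then give x′ = 100 + 0.05 × 2 × cos 90° = 100 and y′ = 200 + 0.05 × 2 × sin 90° = 200.1, so the predicted position is (100, 200.1) with heading 90°.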
- if the feature points of the real-time image cannot be matched with the feature points of the first map, the position information of the real-time image does not exist in the first map. For example, if the first map is a path and the device to which the positioning method is applied finishes travelling the path in the first map and keeps going, the acquired real-time image can no longer be positioned according to the first map, and a second map needs to be determined for positioning the real-time image.
- the first map may be connected to at least one other map. A specific implementation of determining the second map according to the auxiliary-sensor information of the real-time image is: acquiring the direction information in the data information of the auxiliary sensor to determine, among the maps connected to the first map, the second map used for positioning the real-time image, and determining the position information of the real-time image according to the distance information in the data information of the auxiliary sensor.
- the above positioning manner positions real-time images on the basis of multi-segment maps. The multiple segments may arise because map building was interrupted during the mapping process, or because what is actually a single map was divided into multiple segment maps according to a preset rule, so that during the positioning of real-time images the proportion of the storage unit occupied by maps is reduced and the amount of positioning computation is reduced.
- the third embodiment of the present application relates to a positioning device. As shown in FIG. 3, it includes an obtaining module 301 and a matching module 302.
- the obtaining module 301 is configured to obtain a real-time image for positioning.
- the matching module 302 is configured to perform positioning according to the obtained real-time image and the first map, and determine the position information of the real-time image.
- the first map is a segment of the N segment maps determined in the previous positioning.
- this embodiment is a device embodiment corresponding to the first or second embodiment, and this embodiment can be implemented in cooperation with the first or second embodiment.
- the related technical details mentioned in the first or second embodiment are still valid in this embodiment. To reduce repetition, details are not described herein again.
- each module involved in this embodiment is a logic module. In practical applications, a logical unit may be a physical unit, a part of a physical unit, or a combination of multiple physical units.
- in addition, in order to highlight the innovative part of the present invention, this embodiment does not introduce units that are not closely related to solving the technical problem proposed by the present invention, but this does not indicate that there are no other units in this embodiment.
- a fourth embodiment of the present application relates to an electronic device, the specific structure of which is shown in FIG. 4. It includes at least one processor 401 and a memory 402 communicatively connected to the at least one processor 401.
- the memory 402 stores instructions executable by the at least one processor 401, and the instructions are executed by the at least one processor 401, so that the at least one processor 401 can perform the positioning method.
- in this embodiment, the processor 401 is exemplified by a central processing unit (CPU), and the memory 402 is exemplified by a readable-writable random access memory (RAM).
- the processor 401 and the memory 402 may be connected through a bus or in other manners; in FIG. 4, connection through a bus is taken as an example.
- as a non-volatile computer-readable storage medium, the memory 402 can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules; the program implementing the positioning method in the embodiments of the present application is stored in the memory 402.
- the processor 401 executes various functional applications and data processing of the device by running the non-volatile software programs, instructions, and modules stored in the memory 402, that is, the above positioning method is implemented.
- the memory 402 may include a storage program area and a storage data area, where the storage program area may store an operating system and an application program required for at least one function; the storage data area may store a list of options and the like.
- the memory may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
- the memory 402 may optionally include a memory remotely set relative to the processor 401, and these remote memories may be connected to an external device through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
- One or more program modules are stored in the memory 402, and when executed by one or more processors 401, the positioning method in any of the above method embodiments is executed.
- the above product can perform the positioning method provided in the embodiments of the present application and has the corresponding functional modules and beneficial effects of performing the method. For technical details not described exhaustively in this embodiment, reference may be made to the positioning method provided in the embodiments of the present application.
- a fifth embodiment of the present application relates to a computer-readable storage medium. The computer-readable storage medium stores computer instructions that enable a computer to perform the positioning method involved in the first or second method embodiment of the present application.
- the positioning method in the above embodiments is implemented by a program instructing the relevant hardware. The program is stored in a storage medium and includes several instructions to make a device (which may be a single-chip microcomputer, a chip, or the like) or a processor execute all or part of the steps of the methods described in the embodiments of the present application.
- the aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Navigation (AREA)
Abstract
A positioning method and device, an electronic device, and a readable storage medium, relating to the field of computer vision technology. The positioning method is applied to a terminal or a cloud and includes the following steps: acquiring a real-time image for positioning (101); performing positioning according to the acquired real-time image and a first map, and determining position information of the real-time image (102); the first map is a segment of the N segment maps determined in the previous positioning. The method avoids the problem of high storage-unit occupancy when positioning against all N segment maps, reduces the occupancy of storage units in the system, and reduces the amount of computation in the positioning process.
Description
The present application relates to the field of computer vision technology, and in particular to a positioning method and device, an electronic device, and a readable storage medium.

Visual simultaneous localization and mapping (vSLAM) means that an intelligent device such as a robot starts moving from an unknown location in an unknown environment, acquires image information through a camera during the movement, localizes itself based on position estimation and the map, and simultaneously builds the map on the basis of its own localization, thereby achieving autonomous localization and navigation of the robot. vSLAM is therefore considered a key technology for realizing autonomous movement of robots or autonomous driving of driverless vehicles.

When vSLAM is used for robot or pedestrian navigation, an environment view is mainly collected by the camera and processed accordingly, and feature points in the environment view are extracted and matched against known map prior information to obtain position information. The known map prior information mainly refers to the map information established in advance by vSLAM. The process of building map information with vSLAM is susceptible to the surrounding environment: if the feature points and texture information in the environment are sufficiently rich, mapping can proceed continuously and a continuous piece of map data is obtained; if the camera moves violently, the ambient lighting changes greatly, or the feature points are sparse, vSLAM map building is "interrupted", and the finally acquired map prior information includes multiple segments of vSLAM map data. Meanwhile, continuous expansion and updating of the map also causes the positioning system to contain multiple pieces of vSLAM map information.

In the course of studying the prior art, the inventors found a series of problems in positioning with multi-segment vSLAM map data: for example, all of the map data has to be loaded when positioning according to a real-time image, which occupies a large amount of storage resources; meanwhile, relocating every acquired real-time image against all of the map data consumes a large amount of computing resources.

A technical problem to be solved by some embodiments of the present application is to provide a positioning method and device, an electronic device, and a readable storage medium, so as to solve the problem that maps occupy too much memory during multi-segment map positioning.
An embodiment of the present application provides a positioning method, including:

acquiring a real-time image for positioning;

performing positioning according to the acquired real-time image and a first map, and determining position information of the real-time image;

wherein the first map is a segment of the N segment maps determined in the previous positioning.

An embodiment of the present application further provides a positioning device, including an obtaining module and a matching module;

the obtaining module is configured to obtain a real-time image for positioning;

the matching module is configured to perform positioning according to the obtained real-time image and a first map, and determine the position information of the real-time image;

wherein the first map is a segment of the N segment maps determined in the previous positioning.

An embodiment of the present application further provides an electronic device, including: at least one processor; and

a memory communicatively connected to the at least one processor; wherein

the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can perform the positioning method described above.

An embodiment of the present application further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the positioning method described above.

Compared with the prior art, after the real-time image for positioning is acquired, positioning the real-time image directly according to the first map makes it possible to determine the position information of the real-time image directly and quickly; and because the first map is the segment determined from the N segment maps in the previous positioning, the problem of high storage-unit occupancy when positioning against all N segment maps is avoided, the occupancy of storage units in the system is reduced, and the amount of computation in the positioning process is reduced as well.
One or more embodiments are exemplarily illustrated by the figures in the corresponding drawings; these exemplary illustrations do not constitute a limitation on the embodiments.

FIG. 1 is a flowchart of the positioning method in the first embodiment of the present application;

FIG. 2 is a flowchart of the positioning method in the second embodiment of the present application;

FIG. 3 is a schematic structural diagram of the positioning device in the third embodiment of the present application;

FIG. 4 is a schematic structural diagram of the electronic device in the fourth embodiment of the present application.

To make the objectives, technical solutions, and advantages of the present application clearer, some embodiments of the present application are described in further detail below with reference to the drawings and the embodiments. It should be understood that the specific embodiments described here are only intended to explain the present application and are not intended to limit it. Those of ordinary skill in the art can understand that many technical details are presented in the embodiments of the present application to help the reader better understand the application; however, the technical solutions claimed in the present application can be realized even without these technical details and the various changes and modifications based on the following embodiments.
The first embodiment of the present application relates to a positioning method that can be applied to a terminal or a cloud. The terminal may be a device such as a driverless vehicle, a blind-guide device, or a sweeping robot; the cloud is communicatively connected with the terminal and provides the terminal with a map for positioning, or directly provides the terminal with the positioning result. This embodiment uses the terminal as an example to describe the execution of the positioning method; for the process of executing the positioning method in the cloud, reference may be made to the content of the embodiments of the present application. The specific flow is shown in FIG. 1 and includes the following steps:

Step 101: Acquire a real-time image for positioning.

Specifically, the real-time image in this embodiment may be acquired through a camera or another image sensor, which is not specifically limited here.

In a specific implementation, the positioning method is used for positioning and navigation, and the real-time image may be an environment image, so that positioning is performed according to the environment image. For example, the positioning method is applied to a blind-guide device for positioning, and the current position information is determined by obtaining environment information through a camera. If the positioning method is applied to a robot, the real-time image is an image obtained through the robot's vision, or a real-time image obtained after the robot's visual processing, so that the current position information can be determined from the real-time image.

Step 102: Perform positioning according to the acquired real-time image and the first map, and determine position information of the real-time image.

The first map is a segment of the N segment maps determined in the previous positioning.
Specifically, the first map was already determined in the previous positioning; that is, the first map needs to be determined before the real-time image for positioning is acquired. A specific implementation process is: obtaining an initial image; matching the initial image with each of the N segment maps, where N is a positive integer greater than 1; determining the map in the N segment maps that matches the initial image and releasing the maps in the N segment maps that do not match the initial image; and determining the matched map as the first map. Releasing the maps that do not match the initial image reduces the amount of storage the maps occupy in the storage unit.

It is worth mentioning that the above process of determining the first map is only an exemplary description and does not specifically limit this embodiment. It can be understood that the obtained real-time image is positioned according to the first map determined the previous time, which ensures that not all N segment maps need to be loaded for each positioning, reduces the occupancy of storage units in the system, and achieves the purpose of reducing the amount of positioning computation.

In a specific implementation, since N segment maps exist when the initial image is positioned, the maps can be numbered in advance. After the first map is determined, the number of the first map and the position information of the initial image in the first map are obtained; for example, the number of the first map is determined during continuous-frame image positioning, and during real-time image positioning the first map is extracted by its map number for positioning.

It should be noted that the initial image is the real-time image acquired when the device to which the positioning method is applied starts positioning or navigation; it is not the first frame of real-time images in a numerical sense. The initial image is traversal-matched against the N segment maps to determine the first map; the traversal matching here is the process of loading all N segment maps and matching and positioning the initial image against them. The first map determined after the matching is complete contains the position information of the initial image, and if necessary the position information of the initial image may be output after the first map is determined. A sketch of this traversal is given below.
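One way to realize the traversal while keeping memory residency low is to score the segments one at a time and release each candidate before the next is loaded; this is a sketch under assumed helper functions (load_map and match_score are illustrative, not from the patent):

```python
def determine_first_map(initial_descriptors, map_paths, load_map, match_score):
    """Traverse all N segment maps once with the initial image, keep the
    best-matching segment as the 'first map', and release the rest.
    load_map(path) deserializes one segment; match_score(desc, segment)
    counts feature matches between the initial image and that segment."""
    best_path, best_score = None, -1
    for path in map_paths:
        segment = load_map(path)                    # one segment resident at a time
        score = match_score(initial_descriptors, segment)
        if score > best_score:
            best_path, best_score = path, score
        del segment                                 # release the non-kept segment
    return load_map(best_path)                      # the first map used from now on
```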
Specifically, one implementation of step 102 is: extracting feature points of the acquired real-time image and feature points in the first map; and determining the position information of the real-time image according to the feature points of the real-time image and the feature points in the first map. If vSLAM technology is used for positioning, the feature points in the first map are prior information of the vSLAM map. Techniques for extracting feature points in the real-time image and in the first map are relatively mature and are not repeated here.

The first map includes key frames, and the key frames correspond to position information on the first map. The key frame corresponding to the real-time image is determined by matching the feature points of the real-time image with the feature points of the first map, and the position information of the real-time image is then determined from the position information corresponding to that key frame.

Specifically, after the position information of the real-time image is determined through feature-point matching, the position information of the real-time image is recorded and saved. It should be noted that when the first map is determined according to the initial image, the position information of the initial image may also be recorded and saved.

It is worth mentioning that, during positioning of continuous frames of real-time images, recording and saving the position information of the previous frame of real-time image helps with positioning the next frame. For example, the positioning method is applied to a sweeping robot: while the sweeping robot is working, after the first map is determined, the acquired real-time images are positioned, the motion trajectory of the sweeping robot is determined from continuous frames of real-time images, and the working range of the sweeping robot is determined. The positioning method may also be applied to other intelligent robots or household robots; this is only an example and is not specifically limiting. A minimal recorder sketch follows.
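A minimal sketch of the record-and-save step that also exposes the previous frame's pose for the dead-reckoning fallback of the second embodiment (the file format is an assumption for illustration):

```python
class TrajectoryRecorder:
    """Records and saves the pose of each positioned frame so the motion
    trajectory (e.g. a sweeping robot's covered area) can be reconstructed."""

    def __init__(self, path="trajectory.csv"):
        self.path = path
        self.poses = []

    def record(self, x, y, theta):
        self.poses.append((x, y, theta))
        with open(self.path, "a") as f:        # persist each pose immediately
            f.write(f"{x},{y},{theta}\n")

    def last_pose(self):
        """Previous frame's pose, needed to predict the current pose when
        feature matching fails (step 206)."""
        return self.poses[-1] if self.poses else None
```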
Compared with the prior art, after acquiring the real-time image for positioning, this embodiment positions the real-time image directly according to the first map and can determine the position information of the real-time image directly and quickly; and because the first map is the segment determined from the N segment maps in the previous positioning, the problem of high storage-unit occupancy when positioning against all N segment maps is avoided, the occupancy of storage units in the system is reduced, and the amount of positioning computation is reduced as well.

The second embodiment of the present application relates to a positioning method. This embodiment is substantially the same as the first embodiment; the main difference is that the second embodiment specifically describes an implementation of determining the position information of a real-time image according to feature points, the flow of which is shown in FIG. 2. It can be understood that the specific implementation of determining the position information of the real-time image is not limited to the manner described below, which is merely an example.

It should be noted that the positioning method includes the following implementation steps, in which step 201 is the same as step 101 and is not repeated here.
Step 202: Extract feature points of the acquired real-time image and feature points in the first map.

Step 203: Match the feature points of the real-time image with the feature points in the first map, and obtain a matching result.

Step 204: Determine whether the matching result indicates that the real-time image matches the first map; if so, perform step 205; otherwise, perform step 206.

Step 205: Determine the position information of the real-time image in the first map.

Step 206: Obtain data information of the auxiliary sensor, determine a second map among the N segment maps according to the data information of the auxiliary sensor, and determine the position information of the real-time image in the second map.

Step 207: Record and save the position information of the real-time image.

Specifically, for continuous frames of real-time images, the position information is recorded and saved after each real-time image is positioned, and the next frame of real-time image is then acquired and positioned; the steps of FIG. 2 above can be executed in a loop, as sketched below.
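A sketch of that loop, reusing the locate_in_map sketch given earlier; the fallback callback wraps the dead-reckoning computation shown after Equations 1-4 below, and switching to a second map segment is omitted for brevity (all names are illustrative assumptions):

```python
def positioning_loop(get_frame, keyframes, recorder, fallback):
    """One pass per real-time image, following steps 202-207: try feature
    matching against the first map (steps 202-205); on failure, predict the
    pose from the previous frame via the auxiliary sensors (step 206);
    finally record and save the pose (step 207)."""
    while True:
        frame = get_frame()                        # next real-time image, None to stop
        if frame is None:
            break
        pose = locate_in_map(frame, keyframes)     # steps 202-205
        if pose is None:                           # match failed -> step 206
            pose = fallback(recorder.last_pose())
        recorder.record(*pose)                     # step 207
```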
Specifically, in step 206, when the position information of the real-time image is determined according to the data information of the auxiliary sensor: if the number of feature points in the acquired real-time image is small, the feature points may fail to match those in the map. Because the position information is recorded and saved after the position information of each real-time image is determined, the position information of the current real-time image needs to be predicted from the position information of the previous frame of real-time image. For a real-time image with too few feature points, or one whose feature points fail to match for other reasons, a specific implementation of determining the relative position information of the failed real-time image in the first map is: inferring the position information of the current real-time image from the position information of the previous frame of real-time image and the data information of the auxiliary sensor.

It is worth mentioning that when a real-time image has few feature points, the probability of feature-point matching failure is high. Therefore, after the real-time image is obtained, the number of feature points in the real-time image can first be used to decide whether to determine the position information directly from the auxiliary sensor, with the position information of the real-time image otherwise determined through feature-point matching; this is only an example and is not specifically limiting.

The auxiliary sensors include, but are not limited to, a distance sensor and a direction sensor, so that when the feature points of the extracted real-time image fail to match or the feature points are few, the position information of the real-time image can be determined according to the data information of the auxiliary sensor.
It should be noted that the position information includes a coordinate position and direction information, specifically expressed as (x, y, θ_v0), where (x, y) corresponds to the coordinate position of the real-time image on the first map and θ_v0 represents the orientation angle, on the first map, of the device to which the positioning method is applied when the real-time image is acquired.

Specifically, when the position information of the real-time image is determined according to the auxiliary sensor, the position information of the previous frame of real-time image must also be obtained, and the position information of the current real-time image is computed from that known position information. Because the angle value determined by the vision sensor deviates somewhat from the angle value obtained by other angle sensors, the direction information of the real-time image needs to be corrected first. The specific conversion is expressed by Equation 1 and Equation 2 as follows:

θ_d = θ_v0 − θ_i0 (1)

θ_c = θ_i + θ_d (2)

where θ_c represents the corrected angle value of the direction information of the current real-time image; θ_d represents the deviation angle of the direction angle in the position information of the previous frame of real-time image; θ_v0 represents the angle value of the vision-sensor device when the previous frame of real-time image was acquired; θ_i0 represents the direction angle of the angle sensor at the previous frame of real-time image; and θ_i represents the angle value of the angle sensor when the current real-time image is acquired.

The conversion that determines the position information from the data information of the auxiliary sensor is performed through Equation 3 and Equation 4, which are expressed as follows:

x′ = x + s·d·cos(θ_c) (3)

y′ = y + s·d·sin(θ_c) (4)

where (x, y) in Equations 3 and 4 represents the coordinate position in the known position information of the previous frame of real-time image; s represents the scale of the first map, that is, the distance value represented by each pixel in the first map; d represents the step size, that is, the distance value obtained by the distance sensor; and θ_c has the same meaning as above.
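Under these definitions, the computation is a direct transcription of Equations 1-4; the following Python sketch (function name and argument layout are assumptions, with angles in radians) is the fallback used in the loop sketched earlier:

```python
import math

def dead_reckon(prev_xy, theta_v0, theta_i0, theta_i, d, s):
    """Predict the current pose from the previous frame's known pose and the
    auxiliary sensors, per Equations 1-4.
    prev_xy:  (x, y) of the previous frame on the first map
    theta_v0: vision-derived heading of the previous frame
    theta_i0: angle-sensor heading at the previous frame
    theta_i:  angle-sensor heading now
    d:        step size from the distance sensor
    s:        scale of the first map"""
    x, y = prev_xy
    theta_d = theta_v0 - theta_i0          # (1) vision-to-sensor deviation
    theta_c = theta_i + theta_d            # (2) corrected current heading
    x_new = x + s * d * math.cos(theta_c)  # (3)
    y_new = y + s * d * math.sin(theta_c)  # (4)
    return x_new, y_new, theta_c
```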
Specifically, if the feature points of the real-time image cannot be matched with the feature points of the first map, the position information of the real-time image does not exist in the first map. For example, if the first map is a path and the device to which the positioning method is applied finishes travelling the path in the first map and keeps going, the acquired real-time image can no longer be positioned according to the first map, and a second map needs to be determined for positioning the real-time image. The first map may be connected to at least one other map. A specific implementation of determining the second map according to the auxiliary-sensor information of the real-time image is: acquiring the direction information in the data information of the auxiliary sensor to determine, among the maps connected to the first map, the second map used for positioning the real-time image, and determining the position information of the real-time image according to the distance information in the data information of the auxiliary sensor.
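A sketch of one way to pick, by heading, the connected segment to switch to; the junctions layout (neighbouring segment number mapped to the heading of the path entering it) is an assumed illustration, not a structure defined by the patent:

```python
import math

def choose_second_map(theta_c, junctions, tolerance=0.3):
    """Return the number of the segment connected to the first map whose
    entry direction best agrees with the corrected heading theta_c
    (radians), or None if no connected segment is within `tolerance`."""
    best, best_err = None, tolerance
    for segment_number, entry_heading in junctions.items():
        # smallest angular difference, wrapped to [-pi, pi]
        err = abs(math.atan2(math.sin(theta_c - entry_heading),
                             math.cos(theta_c - entry_heading)))
        if err < best_err:
            best, best_err = segment_number, err
    return best
```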
It is worth mentioning that the above positioning manner positions real-time images on the basis of multi-segment maps. The multiple segments may arise because map building was interrupted during the mapping process, or because what is actually a single map was divided into multiple segment maps according to a preset rule, so that during the positioning of real-time images the proportion of the storage unit occupied by maps is reduced and the amount of positioning computation is reduced.

The third embodiment of the present application relates to a positioning device which, as shown in FIG. 3, includes an obtaining module 301 and a matching module 302.

The obtaining module 301 is configured to obtain a real-time image for positioning.

The matching module 302 is configured to perform positioning according to the obtained real-time image and the first map, and determine the position information of the real-time image.

The first map is a segment of the N segment maps determined in the previous positioning.

It is not difficult to find that this embodiment is a device embodiment corresponding to the first or second embodiment, and this embodiment can be implemented in cooperation with the first or second embodiment. The relevant technical details mentioned in the first or second embodiment are still valid in this embodiment and, to reduce repetition, are not repeated here.

It is worth mentioning that each module involved in this embodiment is a logic module. In practical applications, a logical unit may be a physical unit, a part of a physical unit, or a combination of multiple physical units. In addition, in order to highlight the innovative part of the present invention, this embodiment does not introduce units that are not closely related to solving the technical problem proposed by the present invention, but this does not mean that no other units exist in this embodiment.
The fourth embodiment of the present application relates to an electronic device, the specific structure of which is shown in FIG. 4. It includes at least one processor 401 and a memory 402 communicatively connected to the at least one processor 401. The memory 402 stores instructions executable by the at least one processor 401, and the instructions are executed by the at least one processor 401, so that the at least one processor 401 can perform the positioning method.

In this embodiment, the processor 401 is exemplified by a central processing unit (CPU), and the memory 402 is exemplified by a readable-writable random access memory (RAM). The processor 401 and the memory 402 may be connected through a bus or in other manners; in FIG. 4, connection through a bus is taken as an example. As a non-volatile computer-readable storage medium, the memory 402 can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules; the program implementing the method in the embodiments of the present application is stored in the memory 402. By running the non-volatile software programs, instructions, and modules stored in the memory 402, the processor 401 executes various functional applications and data processing of the device, that is, implements the above positioning method.

The memory 402 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application required by at least one function, and the data storage area may store a list of options and the like. In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 402 may optionally include memories remotely located relative to the processor 401, and these remote memories may be connected to the external device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.

One or more program modules are stored in the memory 402 and, when executed by the one or more processors 401, perform the positioning method in any of the above method embodiments.

The above product can perform the positioning method provided in the embodiments of the present application and has the corresponding functional modules and beneficial effects of performing the method; for technical details not described exhaustively in this embodiment, reference may be made to the positioning method provided in the embodiments of the present application.

The fifth embodiment of the present application relates to a computer-readable storage medium. The computer-readable storage medium stores computer instructions that enable a computer to perform the positioning method involved in the first or second method embodiment of the present application.

It should be noted that those skilled in the art can understand that the positioning method in the above embodiments is implemented by a program instructing the relevant hardware; the program is stored in a storage medium and includes several instructions to make a device (which may be a single-chip microcomputer, a chip, or the like) or a processor execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Those of ordinary skill in the art can understand that the above embodiments are specific embodiments for realizing the present application, and that in practical applications various changes may be made to them in form and detail without departing from the spirit and scope of the present application.
Claims (12)
- A positioning method, comprising: acquiring a real-time image for positioning; performing positioning according to the acquired real-time image and a first map, and determining position information of the real-time image; wherein the first map is a segment of the N segment maps determined in the previous positioning.
- The positioning method according to claim 1, wherein performing positioning according to the acquired real-time image and the first map and determining the position information of the real-time image comprises: extracting feature points of the acquired real-time image and feature points in the first map; and determining the position information of the real-time image according to the feature points of the real-time image and the feature points in the first map.
- The positioning method according to claim 1 or 2, wherein before performing positioning according to the acquired real-time image and the first map and determining the position information of the real-time image, the positioning method further comprises: acquiring an initial image; matching the initial image with each of the N segment maps, wherein N is a positive integer greater than 1; determining the map in the N segment maps that matches the initial image, and releasing the maps in the N segment maps that do not match the initial image; and determining the matched map as the first map.
- The positioning method according to claim 2, wherein determining the position information of the real-time image according to the feature points of the real-time image and the feature points in the first map comprises: matching the feature points of the real-time image with the feature points in the first map, and obtaining a matching result; and determining the position information of the real-time image according to the matching result.
- The positioning method according to claim 4, wherein determining the position information of the real-time image according to the matching result comprises: if it is determined that the matching result indicates that the real-time image matches the first map, determining the position information of the real-time image in the first map; and if it is determined that the matching result indicates that the real-time image does not match the first map, obtaining data information of an auxiliary sensor, determining a second map among the N segment maps according to the data information of the auxiliary sensor, and determining the position information of the real-time image in the second map.
- The positioning method according to claim 5, wherein the data information of the auxiliary sensor comprises distance information and direction information.
- The positioning method according to any one of claims 1 to 6, wherein after determining the position information of the real-time image, the positioning method further comprises: recording and saving the position information of the real-time image.
- The positioning method according to any one of claims 1 to 7, wherein the position information comprises a coordinate position and direction information.
- The positioning method according to claim 2, 4, or 5, wherein the feature points in the first map are feature points of key frames in the first map.
- A positioning device, comprising an obtaining module and a matching module; the obtaining module is configured to obtain a real-time image for positioning; the matching module is configured to perform positioning according to the obtained real-time image and a first map, and determine position information of the real-time image; wherein the first map is a segment of the N segment maps determined in the previous positioning.
- An electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can perform the positioning method according to any one of claims 1 to 9.
- A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the positioning method according to any one of claims 1 to 9.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201880001193.9A CN109073390B (zh) | 2018-07-23 | 2018-07-23 | Positioning method and device, electronic device, and readable storage medium |
PCT/CN2018/096663 WO2020019117A1 (zh) | 2018-07-23 | 2018-07-23 | Positioning method and device, electronic device, and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2018/096663 WO2020019117A1 (zh) | 2018-07-23 | 2018-07-23 | Positioning method and device, electronic device, and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020019117A1 (zh) | 2020-01-30 |
Family
ID=64789296
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/096663 WO2020019117A1 (zh) | 2018-07-23 | 2018-07-23 | 一种定位方法及装置、电子设备和可读存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109073390B (zh) |
WO (1) | WO2020019117A1 (zh) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110081862B (zh) * | 2019-05-07 | 2021-12-24 | CloudMinds Technology (Beijing) Co., Ltd. | Object positioning method, positioning apparatus, electronic device, and storable medium |
CN110361005B (zh) * | 2019-06-26 | 2021-03-26 | CloudMinds Robotics Co., Ltd. | Positioning method, positioning apparatus, readable storage medium, and electronic device |
CN111623783A (zh) * | 2020-06-30 | 2020-09-04 | Hangzhou Hikrobot Technology Co., Ltd. | Initial positioning method, visual navigation device, and warehousing system |
CN113010724A (zh) * | 2021-04-29 | 2021-06-22 | Shandong New Generation Information Industry Technology Research Institute Co., Ltd. | Robot map selection method and system based on visual feature point matching |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1569558A (zh) * | 2003-07-22 | 2005-01-26 | Institute of Automation, Chinese Academy of Sciences | Visual navigation method for mobile robots based on image appearance features |
CN101008566A (zh) * | 2007-01-18 | 2007-08-01 | Shanghai Jiao Tong University | Ground-texture-based vision device for intelligent vehicles and global positioning method thereof |
CN104024880A (zh) * | 2011-10-20 | 2014-09-03 | Robert Bosch GmbH | Method and system for precise vehicle localization using radar maps |
CN105318881A (zh) * | 2014-07-07 | 2016-02-10 | Tencent Technology (Shenzhen) Co., Ltd. | Map navigation method, apparatus, and system |
CN105571608A (zh) * | 2015-12-22 | 2016-05-11 | Qisda Optronics (Suzhou) Co., Ltd. | Navigation system, vehicle, and navigation map transmission method |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101566471B (zh) * | 2007-01-18 | 2011-08-31 | Shanghai Jiao Tong University | Ground-texture-based global visual positioning method for intelligent vehicles |
CN101887114B (zh) * | 2009-05-13 | 2012-10-10 | ZTE Corporation | Mobile terminal and method for quickly searching for and locking onto positioning satellites |
CN102914303B (zh) * | 2012-10-11 | 2015-01-21 | Jiangsu University of Science and Technology | Intelligent space system for multiple mobile robots and navigation information acquisition method |
CN103983263A (zh) * | 2014-05-30 | 2014-08-13 | Southeast University | Inertial/visual integrated navigation method using iterated extended Kalman filtering and neural networks |
CN104729485B (zh) * | 2015-03-03 | 2016-11-30 | Beijing Institute of Space Mechanics and Electricity | Visual positioning method based on matching vehicle-mounted panoramic images against street-view maps |
US20170350713A1 (en) * | 2016-06-02 | 2017-12-07 | Delphi Technologies, Inc. | Map update system for automated vehicles |
CN107223275B (zh) * | 2016-11-14 | 2021-05-28 | SZ DJI Technology Co., Ltd. | Method and system for fusing multi-channel sensing data |
CN107223244B (zh) * | 2016-12-02 | 2019-05-03 | Shenzhen Qianhai CloudMinds Cloud Intelligence Technology Co., Ltd. | Positioning method and apparatus |
CN107193279A (zh) * | 2017-05-09 | 2017-09-22 | Fudan University | Robot localization and mapping system based on monocular vision and IMU information |
CN108052887A (zh) * | 2017-12-07 | 2018-05-18 | Southeast University | Automatic identification system and method for suspected illegal land use fusing SLAM/GNSS information |
CN108036793B (zh) * | 2017-12-11 | 2021-07-23 | Beijing Qihoo Technology Co., Ltd. | Point-cloud-based positioning method and apparatus, and electronic device |
CN108280840B (zh) * | 2018-01-11 | 2021-09-03 | Wuhan University of Technology | Real-time road segmentation method based on three-dimensional LiDAR |
- 2018-07-23: CN application CN201880001193.9A, published as patent CN109073390B (zh), status: Active
- 2018-07-23: WO application PCT/CN2018/096663, published as WO2020019117A1 (zh), status: Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN109073390A (zh) | 2018-12-21 |
CN109073390B (zh) | 2022-10-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10408939B1 (en) | Learning method and learning device for integrating image acquired by camera and point-cloud map acquired by radar or LiDAR corresponding to image at each of convolution stages in neural network and testing method and testing device using the same | |
CN107990899B (zh) | SLAM-based positioning method and system | |
WO2020019117A1 (zh) | Positioning method and device, electronic device, and readable storage medium | |
WO2018098811A1 (zh) | Positioning method and apparatus | |
WO2019042426A1 (zh) | Augmented reality scene processing method and device, and computer storage medium | |
US10885666B2 (en) | Hybrid metric-topological camera-based localization | |
CN114001733B (zh) | Map-based consistent and efficient visual-inertial localization algorithm | |
CN111754579A (zh) | Method and apparatus for determining extrinsic parameters of a multi-camera rig | |
WO2023005457A1 (zh) | Pose calculation method and apparatus, electronic device, and readable storage medium | |
WO2024077935A1 (zh) | Visual-SLAM-based vehicle positioning method and apparatus | |
CN110930444B (zh) | Point cloud matching method based on bilateral optimization, with medium, terminal, and apparatus | |
CN114089316A (zh) | LiDAR/inertial-navigation joint calibration system, method, and medium | |
CN116823954B (zh) | Pose estimation method and apparatus for articulated vehicles, vehicle, and storage medium | |
CN110880003B (zh) | Image matching method and apparatus, storage medium, and automobile | |
WO2020014941A1 (zh) | Map building method, positioning method, apparatus, terminal, and storage medium | |
CN117745845A (zh) | Extrinsic-parameter information determination method, apparatus, device, and storage medium | |
CN116105721B (zh) | Loop-closure optimization method, apparatus, device, and storage medium for map building | |
CN117388870A (zh) | Ground-truth generation method, apparatus, and medium for LiDAR perception models | |
US12001218B2 (en) | Mobile robot device for correcting position by fusing image sensor and plurality of geomagnetic sensors, and control method | |
WO2020019116A1 (zh) | Multi-source data mapping method, related apparatus, and computer-readable storage medium | |
CN114415698A (zh) | Robot, robot positioning method and apparatus, and computer device | |
WO2020010521A1 (zh) | Positioning method, positioning apparatus, positioning system, and readable storage medium | |
DE102022111926A1 (de) | Method and device for depth-assisted visual inertial odometry | |
CN109325962B (zh) | Information processing method, apparatus, device, and computer-readable storage medium | |
CN113554711A (zh) | Camera online calibration method and apparatus, computer device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18927833; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC, EPO FORM 1205A DATED 14.05.21. |
122 | Ep: pct application non-entry in european phase | Ref document number: 18927833; Country of ref document: EP; Kind code of ref document: A1 |