WO2022116156A1 - Visual positioning method, robot and storage medium - Google Patents
Visual positioning method, robot and storage medium
- Publication number
- WO2022116156A1 (PCT/CN2020/133919; CN2020133919W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- map
- image
- candidate
- matching
- lighting conditions
- Prior art date
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
Definitions
- the present application relates to the technical field of positioning and navigation, and in particular, to a visual positioning method, a robot and a storage medium.
- current visual SLAM positioning relies on a map built by the system itself, but a single map only captures the lighting conditions at the time it was constructed and does not cover the lighting at other times of day or under other weather conditions; when the map is used at other times or in other weather, the positioning quality therefore degrades.
- the lighting conditions of indoor scenes change frequently. For example, the lighting conditions in the morning, afternoon and evening of the same day are very different.
- on top of this, weather changes such as rain and overcast skies must also be taken into account.
- therefore, current visual SLAM localization technology can only localize when the lighting conditions are similar to those during mapping, and cannot localize correctly when the lighting conditions change.
- a visual positioning method is provided, comprising: obtaining the current lighting conditions; searching a map library, according to the current lighting conditions, for a candidate atlas matching the current lighting conditions, where the map library stores maps built under different lighting conditions; collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas, determining the map that matches the image, and using the matched map as the target map for the current lighting conditions; and performing visual positioning based on the target map.
- a robot is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the above visual positioning method.
- a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, causes the processor to perform the steps of the above visual positioning method.
- the above-mentioned visual positioning method, robot and storage medium search for a matching candidate atlas in the map library according to the current lighting conditions, then select a target map matching the image from the candidate atlas, and then perform visual positioning based on the target map.
- This method can select a matching target map for visual positioning according to the current lighting conditions, so that even when the lighting conditions change, it can accurately perform visual positioning.
- FIG. 1 is a flowchart of a visual positioning method in one embodiment
- Figure 2 is a schematic diagram of a map library in one embodiment
- Fig. 3 is a schematic diagram of screening candidate atlases according to time and weather in one embodiment
- Figure 4 is a flow chart of a method of determining a map that matches an image in one embodiment
- FIG. 5 is a schematic diagram of selecting a map matching the image set from a candidate atlas in one embodiment
- FIG. 6 is a schematic diagram of using multi-threaded parallel computing in one embodiment
- FIG. 7 is a structural block diagram of a visual positioning device in one embodiment
- FIG. 8 is a diagram of the internal structure of the robot in one embodiment.
- as shown in FIG. 1, a visual positioning method is proposed; the method can be applied to a terminal, and in this embodiment its application to a robot is used as an example. The visual positioning method specifically includes the following steps:
- Step 102: obtain the current lighting conditions through a photosensitive element disposed on the robot.
- because visual positioning is strongly affected by illumination, the lighting conditions need to be detected in real time; preferably, the photosensitive element is a camera.
- lighting conditions are affected by time and weather: within a single day, the lighting in the morning, at noon and in the evening differs, and different weather, such as sunny, partly cloudy, overcast or rainy, also produces different lighting. In one embodiment, the current lighting conditions can therefore be represented by the current time and the current weather conditions.
- Step 104: search the map library, according to the current lighting conditions, for a candidate atlas matching the current lighting conditions, where the map library stores maps built under different lighting conditions.
- maps built under different lighting conditions are pre-stored in the map library; in order to position accurately, a map matching the current lighting conditions has to be selected from this library for subsequent positioning.
- the first step is to find a matching candidate atlas according to the current lighting conditions.
- the current lighting conditions include the current time and the current weather conditions, and the candidate atlas is screened out of the map library according to both.
- the maps in the map library are stored according to time and weather conditions, as shown in FIG. 2, a schematic diagram of the map library in one embodiment: the library is first divided into multiple sections by weather condition, and each section is further divided into multiple subsections by time (an illustrative sketch of such a library follows below).
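For illustration only, the following Python sketch shows one way a map library organized by weather sections and time-of-day subsections, as described above, could be represented in memory. The class and field names (MapEntry, MapLibrary, capture_time, and so on) are hypothetical and are not taken from the patent.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class MapEntry:
    map_id: str
    weather: str        # e.g. "sunny", "partly cloudy", "overcast", "rainy"
    capture_time: time  # local time of day at which the map was built
    data: object        # the SLAM map itself (keyframes, landmarks, BoW index, ...)

class MapLibrary:
    """First level: weather section; second level: hour-of-day subsection."""
    def __init__(self, maps):
        self.sections = {}
        for m in maps:
            bucket = self.sections.setdefault(m.weather, {}).setdefault(m.capture_time.hour, [])
            bucket.append(m)

    def all_maps(self):
        return [m for hours in self.sections.values()
                for bucket in hours.values()
                for m in bucket]
```

Keying the first level by weather and the second by hour keeps the later screening steps to simple dictionary walks.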
- Step 106: collect an image under the current lighting conditions, match the image against the maps in the candidate atlas, determine the map that matches the image, and use the matched map as the target map for the current lighting conditions.
- image acquisition is performed under the current lighting conditions.
- the collected image is matched against the maps in the candidate atlas, and the candidate map most similar to the collected image is taken as the matched target map.
- image matching can be implemented by calculating the similarity between images, for example with the DBoW2 method.
- Step 108: perform visual positioning based on the target map.
- after the target map is determined, SLAM positioning technology is used to perform visual positioning on it.
- the visual positioning method solves the problem that changes in indoor ambient light degrade the accuracy and robustness of visual SLAM positioning, so that visual SLAM positioning can be used in indoor environments over long periods without failing due to lighting, weather or similar causes.
- it is applicable to all kinds of indoor working robots that require real-time positioning.
- the above-mentioned visual positioning method searches for a matching candidate atlas in the map library according to the current lighting conditions, then selects a target map that matches the image from the candidate atlas, and then performs visual positioning based on the target map.
- the method can select a matching target map for visual positioning according to the current lighting conditions, so that even when the lighting conditions change, the visual positioning can be performed accurately.
- the current lighting conditions include: the current time and the current weather conditions; searching the map library for a candidate atlas matching the lighting conditions according to the current lighting conditions includes: selecting from the map library, according to the current time, a first atlas whose time difference from the current time is within a preset range; and selecting from the first atlas, according to the current weather conditions, a second atlas that matches the current weather conditions, the second atlas being used as the candidate atlas.
- the current time is compared with the timestamps of all maps in the map library, and all maps whose time difference is within dt (for example, half an hour) are selected as the first atlas.
- the current weather conditions are then acquired, and all maps in the first atlas whose weather is similar to the current weather are selected to obtain the second atlas; the second atlas is the candidate atlas.
- FIG. 3 is a schematic diagram of selecting the candidate atlas according to time and weather in one embodiment (a sketch of this two-stage screening follows below).
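A minimal sketch, under the same hypothetical MapLibrary/MapEntry assumptions as the previous sketch, of the two-stage screening just described: first keep maps whose capture time is within dt of the current time of day, then keep those whose weather matches. The half-hour default for dt is only the example value mentioned in the text.

```python
from datetime import datetime, timedelta

def select_candidate_atlas(library, now: datetime, current_weather: str,
                           dt: timedelta = timedelta(minutes=30)):
    """Two-stage screening: by time-of-day difference, then by weather."""
    def time_of_day_diff(a, b) -> timedelta:
        # compare times of day only, ignoring the calendar date
        sa = a.hour * 3600 + a.minute * 60
        sb = b.hour * 3600 + b.minute * 60
        return timedelta(seconds=abs(sa - sb))

    # first atlas: maps whose capture time is within dt of the current time
    first_atlas = [m for m in library.all_maps()
                   if time_of_day_diff(now, m.capture_time) <= dt]
    # second atlas: maps from the first atlas taken under similar weather
    return [m for m in first_atlas if m.weather == current_weather]
```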
- collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas, determining the map matching the image and using the matched map as the target map for the current lighting conditions includes: calculating the similarity between each candidate map in the candidate atlas and the image; and selecting, according to the similarity, the target map matching the image from the candidate atlas.
- matching between the image and a map is performed by calculating similarity: the similarity between the image and each candidate map in the candidate atlas is computed, and the candidate map with the greatest similarity is used as the target map.
- the similarity can be computed in various ways, for example with the DBoW2, SIFT or ORB algorithms; any existing similarity calculation method can be used, so the details are not repeated here (an illustrative sketch follows below).
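The text leaves the similarity measure open. As one hedged illustration, the sketch below scores two normalized bag-of-visual-words histograms with a common L1-based BoW score and picks the best candidate; it is a simplified stand-in, not the actual DBoW2 implementation, and hist_of is a hypothetical helper that returns a map's BoW histogram.

```python
import numpy as np

def bow_similarity(query_hist: np.ndarray, map_hist: np.ndarray) -> float:
    """Simplified L1-based score between two bag-of-visual-words histograms
    (a stand-in for a DBoW2-style score, not the DBoW2 implementation)."""
    q = query_hist / (np.abs(query_hist).sum() + 1e-12)
    m = map_hist / (np.abs(map_hist).sum() + 1e-12)
    return float(1.0 - 0.5 * np.abs(q - m).sum())  # 1.0 = identical, 0.0 = disjoint

def pick_target_map(image_hist, candidate_atlas, hist_of):
    """Return the candidate map most similar to the collected image;
    hist_of is a hypothetical helper mapping a map entry to its BoW histogram."""
    return max(candidate_atlas, key=lambda m: bow_similarity(image_hist, hist_of(m)))
```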
- as shown in FIG. 4, in one embodiment, collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas, determining the map matching the image and using the matched map as the target map for the current lighting conditions includes:
- Step 106A: collect images over a period of time with a moving camera to obtain an image set.
- the moving camera is arranged on the mobile robot.
- preferably, the moving camera is the same camera used to obtain the current lighting conditions, although the two can also be provided separately.
- image acquisition is performed under the current lighting conditions; in order to match a more accurate map, images are collected over a period of time, and the resulting multiple images form an image set.
- Step 106B: match each image in the image set with the candidate maps in the candidate atlas to obtain the matching degree between each image and each candidate map.
- each image in the image set needs to be matched against the candidate maps in the candidate atlas, which yields the matching degree between each image and each candidate map.
- Step 106C: calculate the matching degree between the image set and each candidate map from the matching degrees between the individual images and that candidate map.
- once the matching degree between each image and a candidate map has been obtained, the matching degree between the image set and that candidate map can be calculated.
- in one embodiment, the average matching degree between the images in the image set and the candidate map may be computed and used as the matching degree between the image set and the candidate map.
- Step 106D: determine the target map matching the image set according to the matching degree between the image set and each candidate map.
- the candidate maps are sorted by their matching degree with the image set, and the candidate map with the largest matching degree is used as the target map; accurately selecting the target map that matches the current lighting conditions improves the accuracy and stability of the subsequent visual positioning.
- in one embodiment, calculating the matching degree between the image set and a candidate map from the matching degrees between the individual images and that candidate map includes: accumulating the matching degrees between the individual images and the candidate map to obtain the matching degree between the image set and the candidate map.
- that is, the matching degrees between every image in the image set and the candidate map are summed, and the sum is used as the matching degree between the image set and the candidate map.
- in one embodiment, this can be expressed by the formula s_j = Σ_i s_ij, where i denotes the i-th image in the image set, j denotes the j-th candidate map, s_j denotes the matching degree between the image set and candidate map j, and s_ij denotes the matching degree between the i-th image and candidate map j.
- in one embodiment, the matching degree is computed with the DBoW2 algorithm and expressed as a BoW score; the higher the BoW score, the higher the matching degree.
- as shown in FIG. 5, the map matching the image set is selected from the candidate atlas according to the computed BoW scores (a sketch of this accumulation follows below).
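A small sketch of the accumulation rule s_j = Σ_i s_ij and the final selection, reusing the hypothetical bow_similarity and hist_of helpers from the earlier sketch; the per-pair score here stands in for the BoW score mentioned in the text.

```python
import numpy as np

def match_image_set(image_hists, candidate_atlas, hist_of):
    """Accumulate per-image scores into per-map scores: s_j = sum_i s_ij."""
    scores = np.zeros(len(candidate_atlas))
    for img_hist in image_hists:
        for j, cand in enumerate(candidate_atlas):
            scores[j] += bow_similarity(img_hist, hist_of(cand))  # s_ij
    best = int(np.argmax(scores))  # candidate map with the largest accumulated score
    return candidate_atlas[best], scores
```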
- calculating the matching degree between each image in the image set and the candidate maps includes: using multiple threads to compute, in parallel, the matching degree between each candidate map and each image in the image set.
- to keep the amount of computation manageable, a multi-threaded mode is adopted when computing the matching degree between the image set and the candidate maps.
- FIG. 6 is a schematic diagram of multi-threaded parallel computing: 8 threads compute in parallel, with thread i calculating the matching scores between the image set and candidate maps i, i+8, i+16, and so on (a sketch of this work split follows below).
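The striping scheme of FIG. 6 (thread i handling candidate maps i, i+8, i+16, ...) could be sketched as below. This illustrates the work split only: in pure Python the speed-up depends on the scoring routine releasing the GIL, and an implementation of the kind the patent contemplates would presumably use native threads. Each thread writes disjoint entries of the score array, so no locking is needed.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def match_image_set_parallel(image_hists, candidate_atlas, hist_of, n_threads: int = 8):
    """Thread t scores candidate maps t, t + n_threads, t + 2*n_threads, ..."""
    scores = np.zeros(len(candidate_atlas))

    def worker(t: int):
        # each thread writes a disjoint stripe of `scores`, so no lock is needed
        for j in range(t, len(candidate_atlas), n_threads):
            scores[j] = sum(bow_similarity(h, hist_of(candidate_atlas[j])) for h in image_hists)

    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        list(pool.map(worker, range(n_threads)))
    best = int(np.argmax(scores))
    return candidate_atlas[best], scores
```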
- the method further includes: monitoring the lighting conditions in real time, and when a change in the current lighting conditions is detected, returning to the step of searching the map library for a candidate atlas matching the current lighting conditions.
- as the robot keeps running, the lighting changes slowly, for example when operation continues from the morning into the afternoon, or when it starts to rain after a period of operation.
- such changes alter the lighting conditions so that the previously selected map is no longer applicable and a map has to be re-selected: after a change in the current lighting conditions is detected, the map library is searched again for a candidate atlas matching the new lighting conditions, and a new target map is then matched (a sketch of this monitoring behaviour follows below).
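Putting the pieces together, a hedged sketch of the monitoring behaviour just described: whenever the sensed lighting conditions change, the candidate atlas is rebuilt and a new target map is matched before tracking continues. The sensors and slam objects and their methods (current_lighting, collect_image_set, bow_histogram, load_map, track_next_frame) are hypothetical stand-ins, not an API defined by the patent.

```python
def positioning_loop(library, sensors, slam):
    """Re-select the target map whenever the sensed lighting conditions change."""
    last_conditions = None
    while True:
        conditions = sensors.current_lighting()        # hypothetical: (current time, weather)
        if conditions != last_conditions:              # lighting changed -> re-select the map
            now, weather = conditions
            atlas = select_candidate_atlas(library, now, weather)
            image_hists = sensors.collect_image_set()  # hypothetical image-set capture
            target_map, _ = match_image_set(image_hists, atlas, hist_of=slam.bow_histogram)
            slam.load_map(target_map)                  # hypothetical map switch
            last_conditions = conditions
        slam.track_next_frame()                        # regular visual positioning step
```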
- after the target map is determined, the open-source ORB-SLAM2 is selected as the positioning algorithm. First, ORB features and descriptors are extracted from each new frame captured by the camera; then, depending on whether a camera velocity estimate is available, tracking is performed either with the velocity model or against the previous keyframe: if a velocity is available the velocity model is used, otherwise the previous keyframe is tracked. Finally, the number of tracked feature points is counted; if it is too small, the relocalization module is started, otherwise the next frame is processed, until visual positioning is complete.
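The per-frame decision flow of that paragraph, written out as a sketch. It mirrors the described logic (ORB extraction, velocity-model versus last-keyframe tracking, relocalization when too few points are tracked), but the method names and the min_tracked_points threshold are hypothetical; this is not ORB-SLAM2's actual code or API.

```python
def track_frame(frame, slam, min_tracked_points: int = 30):
    """Per-frame decision flow described above (illustrative only)."""
    keypoints, descriptors = slam.extract_orb(frame)              # hypothetical ORB extraction
    if slam.has_velocity_estimate():
        pose, tracked = slam.track_with_motion_model(keypoints, descriptors)
    else:
        pose, tracked = slam.track_last_keyframe(keypoints, descriptors)
    if tracked < min_tracked_points:                              # too few points -> relocalize
        pose = slam.relocalize(keypoints, descriptors)
    return pose
```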
- as shown in FIG. 7, in one embodiment a visual positioning device is proposed, including:
- an obtaining module 702, configured to obtain the current lighting conditions through a photosensitive element arranged on the robot;
- a search module 704, configured to search the map library, according to the current lighting conditions, for a candidate atlas matching the current lighting conditions, where the map library stores maps built under different lighting conditions;
- a matching module 706, configured to collect an image under the current lighting conditions, match the image against the maps in the candidate atlas, determine the map that matches the image, and use the matched map as the target map for the current lighting conditions;
- a positioning module 708, configured to perform visual positioning based on the target map.
- the current lighting conditions include: current time and current weather conditions;
- the search module 704 is further configured to select, from the map library according to the current time, a first atlas whose time difference from the current time is within a preset range, and to select, from the first atlas according to the current weather conditions, a second atlas that matches the current weather conditions, the second atlas being used as the candidate atlas.
- the matching module is further configured to calculate the similarity between each candidate map in the candidate atlas and the image, and to select, according to the similarity, the target map matching the image from the candidate atlas.
- the matching module is further configured to collect images over a period of time with a moving camera to obtain an image set; match each image in the image set with the candidate maps in the candidate atlas to obtain the matching degree between each image and each candidate map; calculate the matching degree between the image set and each candidate map from the matching degrees of the individual images; and determine the target map matching the image set according to the matching degree between the image set and each candidate map.
- the matching module is further configured to accumulate the matching degree between each image in the image set and the candidate map to obtain the matching degree between the image set and the candidate map.
- the matching module is further configured to use multiple threads to calculate the matching degree between the candidate map and each image in the image set in parallel, respectively.
- the above-mentioned device further includes: a map replacement module, configured to monitor the lighting conditions in real time and, when a change in the current lighting conditions is detected, notify the search module to search the map library again for a candidate atlas matching the current lighting conditions.
- Figure 8 shows a diagram of the internal structure of the robot in one embodiment.
- the robot may be a terminal or a server.
- the robot includes a processor, memory and network interface connected through a system bus.
- the memory includes a non-volatile storage medium and an internal memory.
- the non-volatile storage medium of the robot stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement the above visual positioning method.
- a computer program can also be stored in the internal memory, and when that computer program is executed by the processor, the processor likewise executes the above visual positioning method.
- those skilled in the art will understand that FIG. 8 is only a block diagram of a partial structure related to the solution of the present application and does not constitute a limitation on the robot to which the solution is applied; a specific robot may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
- a robot comprising a memory and a processor, wherein the memory stores a computer program which, when executed by the processor, causes the processor to perform the following steps: obtaining the current lighting conditions; searching the map library, according to the current lighting conditions, for a candidate atlas matching the current lighting conditions, where the map library stores maps built under different lighting conditions; collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas, determining the map matching the image, and using the matched map as the target map for the current lighting conditions; and performing visual positioning based on the target map.
- the current lighting conditions include: the current time and the current weather conditions; searching the map library for a candidate atlas matching the lighting conditions includes: selecting from the map library, according to the current time, a first atlas whose time difference from the current time is within a preset range; and selecting from the first atlas, according to the current weather conditions, a second atlas matching the current weather conditions, the second atlas being used as the candidate atlas.
- collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas, determining the map matching the image and using the matched map as the target map for the current lighting conditions includes: calculating the similarity between each candidate map in the candidate atlas and the image; and selecting, according to the similarity, the target map matching the image from the candidate atlas.
- collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas and obtaining the map matching the image includes: collecting images over a period of time with a moving camera to obtain an image set; matching each image in the image set with the candidate maps in the candidate atlas to obtain the matching degree between each image and each candidate map; calculating the matching degree between the image set and each candidate map from the matching degrees of the individual images; and determining the target map matching the image set according to the matching degree between the image set and each candidate map.
- calculating the matching degree between the image set and a candidate map from the matching degrees of the individual images includes: accumulating the matching degrees between the individual images and the candidate map to obtain the matching degree between the image set and the candidate map.
- calculating the matching degree between each image in the image set and the candidate maps includes: using multiple threads to compute, in parallel, the matching degree between each candidate map and each image in the image set.
- when executed by the processor, the computer program further causes the processor to perform the following steps: monitoring the lighting conditions in real time, and when a change in the current lighting conditions is detected, returning to the step of searching the map library for a candidate atlas matching the current lighting conditions.
- a computer-readable storage medium which stores a computer program that, when executed by a processor, causes the processor to perform the following steps: obtaining the current lighting conditions; searching the map library, according to the current lighting conditions, for a candidate atlas matching the current lighting conditions, where the map library stores maps built under different lighting conditions; collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas, determining the map matching the image, and using the matched map as the target map for the current lighting conditions; and performing visual positioning based on the target map.
- the current lighting conditions include: the current time and the current weather conditions; searching the map library for a candidate atlas matching the lighting conditions includes: selecting from the map library, according to the current time, a first atlas whose time difference from the current time is within a preset range; and selecting from the first atlas, according to the current weather conditions, a second atlas matching the current weather conditions, the second atlas being used as the candidate atlas.
- collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas, determining the map matching the image and using the matched map as the target map for the current lighting conditions includes: calculating the similarity between each candidate map in the candidate atlas and the image; and selecting, according to the similarity, the target map matching the image from the candidate atlas.
- collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas and obtaining the map matching the image includes: collecting images over a period of time with a moving camera to obtain an image set; matching each image in the image set with the candidate maps in the candidate atlas to obtain the matching degree between each image and each candidate map; calculating the matching degree between the image set and each candidate map from the matching degrees of the individual images; and determining the target map matching the image set according to the matching degree between the image set and each candidate map.
- calculating the matching degree between the image set and a candidate map from the matching degrees of the individual images includes: accumulating the matching degrees between the individual images and the candidate map to obtain the matching degree between the image set and the candidate map.
- calculating the matching degree between each image in the image set and the candidate maps includes: using multiple threads to compute, in parallel, the matching degree between each candidate map and each image in the image set.
- when executed by the processor, the computer program further causes the processor to perform the following steps: monitoring the lighting conditions in real time, and when a change in the current lighting conditions is detected, returning to the step of searching the map library for a candidate atlas matching the current lighting conditions.
- Nonvolatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
- Volatile memory may include random access memory (RAM) or external cache memory.
- RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Abstract
A visual positioning method, comprising: obtaining current lighting conditions through a photosensitive element disposed on a robot (102); searching a map library, according to the current lighting conditions, for a candidate atlas matching the current lighting conditions, the map library storing maps built under different lighting conditions (104); collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas, determining the map that matches the image, and using the matched map as the target map for the current lighting conditions (106); and performing visual positioning based on the target map (108). The visual positioning method enables accurate positioning even when the lighting conditions change. A robot and a storage medium are also provided.
Description
The present application relates to the technical field of positioning and navigation, and in particular to a visual positioning method, a robot and a storage medium.
At present there is a growing body of research on indoor positioning with visual SLAM (simultaneous localization and mapping), because visual information is rich and the cost is low. Practical applications remain rare, however, mainly because lighting changes in indoor environments significantly affect the accuracy and robustness of visual SLAM positioning.
Current visual SLAM positioning relies on a map built by the system itself, but a single map only captures the lighting conditions at the time it was constructed and does not cover the lighting at other times or under other weather conditions; when the map is used at other times or in other weather, the positioning quality degrades. Moreover, the lighting conditions of indoor scenes change frequently: the lighting in the morning, afternoon and evening of the same day differs greatly, and weather changes such as rain or overcast skies add further variation.
Therefore, current visual SLAM positioning can only localize when the lighting conditions are similar to those at mapping time, and cannot localize correctly when the lighting conditions change.
Summary of the Application
In view of the above problems, it is necessary to provide a visual positioning method, device, robot and storage medium that can still position accurately when the lighting conditions change.
A visual positioning method, comprising:
obtaining current lighting conditions;
searching a map library, according to the current lighting conditions, for a candidate atlas matching the current lighting conditions, the map library storing maps built under different lighting conditions;
collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas, determining the map that matches the image, and using the matched map as the target map for the current lighting conditions;
performing visual positioning based on the target map.
A robot, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the following steps:
obtaining current lighting conditions;
searching a map library, according to the current lighting conditions, for a candidate atlas matching the current lighting conditions, the map library storing maps built under different lighting conditions;
collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas, determining the map that matches the image, and using the matched map as the target map for the current lighting conditions;
performing visual positioning based on the target map.
A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the following steps:
obtaining current lighting conditions;
searching a map library, according to the current lighting conditions, for a candidate atlas matching the current lighting conditions, the map library storing maps built under different lighting conditions;
collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas, determining the map that matches the image, and using the matched map as the target map for the current lighting conditions;
performing visual positioning based on the target map.
With the above visual positioning method, robot and storage medium, a candidate atlas matching the current lighting conditions is looked up in the map library, a target map matching the captured image is then selected from the candidate atlas, and visual positioning is performed based on that target map. Because the method selects a matching target map according to the current lighting conditions, visual positioning remains accurate even when the lighting conditions change.
In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; other drawings can be obtained from them by a person of ordinary skill in the art without creative effort.
FIG. 1 is a flowchart of a visual positioning method in one embodiment;
FIG. 2 is a schematic diagram of a map library in one embodiment;
FIG. 3 is a schematic diagram of selecting a candidate atlas according to time and weather in one embodiment;
FIG. 4 is a flowchart of a method of determining a map matching an image in one embodiment;
FIG. 5 is a schematic diagram of selecting a map matching an image set from a candidate atlas in one embodiment;
FIG. 6 is a schematic diagram of multi-threaded parallel computing in one embodiment;
FIG. 7 is a structural block diagram of a visual positioning device in one embodiment;
FIG. 8 is a diagram of the internal structure of a robot in one embodiment.
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present application; all other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.
As shown in FIG. 1, a visual positioning method is proposed. The method can be applied to a terminal; in this embodiment, application to a robot is used as an example. The visual positioning method specifically includes the following steps:
Step 102: obtain the current lighting conditions through a photosensitive element disposed on the robot.
Because visual positioning is strongly affected by illumination, the lighting conditions need to be detected in real time; preferably, the photosensitive element is a camera. The current lighting conditions are affected by time and weather: within a single day the lighting in the morning, at noon and in the evening differs, and different weather, such as sunny, partly cloudy, overcast or rainy, also produces different lighting. In one embodiment, the current lighting conditions can be represented by the current time and the current weather conditions.
Step 104: search the map library, according to the current lighting conditions, for a candidate atlas matching the current lighting conditions, the map library storing maps built under different lighting conditions.
Maps built under different lighting conditions are pre-stored in the map library. In order to position accurately, a map matching the current lighting conditions has to be selected from the library for subsequent positioning. The first step is to find a matching candidate atlas according to the current lighting conditions. The current lighting conditions include the current time and the current weather conditions, and the candidate atlas is screened out of the map library according to both. The maps in the map library are stored according to time and weather conditions, as shown in FIG. 2, a schematic diagram of the map library in one embodiment: the library is first divided into multiple sections by weather condition, and each section is further divided into multiple subsections by time.
Step 106: collect an image under the current lighting conditions, match the image against the maps in the candidate atlas, determine the map that matches the image, and use the matched map as the target map for the current lighting conditions.
Image acquisition is performed under the current lighting conditions. The collected image is matched against the maps in the candidate atlas, and the candidate map most similar to the collected image is taken as the matched target map. Image matching can be implemented by calculating the similarity between images, for example with the DBoW2 method.
Step 108: perform visual positioning based on the target map.
After the target map is determined, SLAM positioning technology is used to perform visual positioning on it. This visual positioning method solves the problem that changes in indoor ambient light degrade the accuracy and robustness of visual SLAM positioning, so that visual SLAM positioning can be used in indoor environments over long periods without failing due to lighting, weather or similar causes, and it is applicable to all kinds of indoor working robots that require real-time positioning.
With the above visual positioning method, a candidate atlas matching the current lighting conditions is looked up in the map library, a target map matching the image is then selected from the candidate atlas, and visual positioning is performed based on that target map, so that positioning remains accurate even when the lighting conditions change.
In one embodiment, the current lighting conditions include the current time and the current weather conditions, and searching the map library for a candidate atlas matching the lighting conditions according to the current lighting conditions includes: selecting from the map library, according to the current time, a first atlas whose time difference from the current time is within a preset range; and selecting from the first atlas, according to the current weather conditions, a second atlas matching the current weather conditions, the second atlas being used as the candidate atlas.
The current time is compared with the timestamps of all maps in the map library, and all maps whose time difference is within dt (for example, half an hour) are selected as the first atlas. The current weather conditions are then acquired, and all maps in the first atlas whose weather is similar to the current weather are selected to obtain the second atlas; the second atlas is the candidate atlas. FIG. 3 is a schematic diagram of selecting the candidate atlas according to time and weather in one embodiment.
In one embodiment, collecting an image under the current lighting conditions, matching it against the maps in the candidate atlas, determining the matching map and using it as the target map for the current lighting conditions includes: calculating the similarity between each candidate map in the candidate atlas and the image; and selecting, according to the similarity, the target map matching the image from the candidate atlas.
Matching between the image and a map is performed by calculating similarity: the similarity between the image and each candidate map in the candidate atlas is computed, and the candidate map with the greatest similarity is used as the target map. The similarity can be computed in various ways, for example with the DBoW2, SIFT or ORB algorithms; any existing similarity calculation method can be used, so the details are not repeated here.
As shown in FIG. 4, in one embodiment, collecting an image under the current lighting conditions, matching it against the maps in the candidate atlas, determining the matching map and using it as the target map for the current lighting conditions includes:
Step 106A: collect images over a period of time with a moving camera to obtain an image set.
The moving camera is arranged on the mobile robot; preferably, it is the same camera used to obtain the current lighting conditions, although the two can also be provided separately. Image acquisition is performed under the current lighting conditions; in order to match a more accurate map, images are collected over a period of time, and the resulting multiple images form an image set.
Step 106B: match each image in the image set with the candidate maps in the candidate atlas to obtain the matching degree between each image and each candidate map.
Each image in the image set needs to be matched against the candidate maps in the candidate atlas, which yields the matching degree between each image and each candidate map.
Step 106C: calculate the matching degree between the image set and each candidate map from the matching degrees between the individual images and that candidate map.
Once the matching degree between each image and a candidate map has been obtained, the matching degree between the image set and that candidate map can be calculated. In one embodiment, the average matching degree between the images in the image set and the candidate map may be computed and used as the matching degree between the image set and the candidate map.
Step 106D: determine the target map matching the image set according to the matching degree between the image set and each candidate map.
The candidate maps are sorted by their matching degree with the image set, and the candidate map with the largest matching degree is used as the target map. Accurately selecting the target map that matches the current lighting conditions improves the accuracy and stability of the subsequent visual positioning based on that map.
In one embodiment, calculating the matching degree between the image set and a candidate map from the matching degrees between the individual images and that candidate map includes: accumulating the matching degrees between the individual images and the candidate map to obtain the matching degree between the image set and the candidate map.
That is, the matching degrees between every image in the image set and the candidate map are summed, and the sum is used as the matching degree between the image set and the candidate map. In one embodiment, this can be expressed by the formula s_j = Σ_i s_ij, where i denotes the i-th image in the image set, j denotes the j-th candidate map, s_j denotes the matching degree between the image set and candidate map j, and s_ij denotes the matching degree between the i-th image and candidate map j.
In one embodiment, the matching degree is computed with the DBoW2 algorithm and expressed as a BoW score; the higher the BoW score, the higher the matching degree. FIG. 5 is a schematic diagram of one embodiment in which the map matching the image set is selected from the candidate atlas according to the computed BoW scores.
In one embodiment, calculating the matching degree between each image in the image set and the candidate maps includes: using multiple threads to compute, in parallel, the matching degree between each candidate map and each image in the image set.
To keep the amount of computation manageable, a multi-threaded mode is adopted when computing the matching degree between the image set and the candidate maps. FIG. 6 is a schematic diagram of multi-threaded parallel computing: 8 threads compute in parallel, with thread i calculating the matching scores between the image set and candidate maps i, i+8, i+16, and so on.
In one embodiment, the method further includes: monitoring the lighting conditions in real time, and when a change in the current lighting conditions is detected, returning to the step of searching the map library for a candidate atlas matching the current lighting conditions.
As the system keeps running, the lighting changes slowly, for example when operation continues from the morning into the afternoon, or when it starts to rain after a period of operation. Such changes alter the lighting conditions so that the previously selected map is no longer applicable and a map has to be re-selected: after a change in the current lighting conditions is detected, the map library is searched again for a candidate atlas matching the new lighting conditions, and a new target map is then matched.
In one embodiment, after the target map is determined, the open-source ORB-SLAM2 is selected as the positioning algorithm. First, ORB features and descriptors are extracted from each new frame captured by the camera; then, depending on whether a camera velocity estimate is available, tracking is performed either with the velocity model or against the previous keyframe: if a velocity is available the velocity model is used, otherwise the previous keyframe is tracked. Finally, the number of tracked feature points is counted; if it is too small, the relocalization module is started, otherwise the next frame is processed, until visual positioning is complete.
As shown in FIG. 7, in one embodiment a visual positioning device is proposed, including:
an obtaining module 702, configured to obtain the current lighting conditions through a photosensitive element disposed on the robot;
a search module 704, configured to search the map library, according to the current lighting conditions, for a candidate atlas matching the current lighting conditions, the map library storing maps built under different lighting conditions;
a matching module 706, configured to collect an image under the current lighting conditions, match the image against the maps in the candidate atlas, determine the map matching the image, and use the matched map as the target map for the current lighting conditions;
a positioning module 708, configured to perform visual positioning based on the target map.
In one embodiment, the current lighting conditions include the current time and the current weather conditions;
the search module 704 is further configured to select, from the map library according to the current time, a first atlas whose time difference from the current time is within a preset range, and to select, from the first atlas according to the current weather conditions, a second atlas matching the current weather conditions, the second atlas being used as the candidate atlas.
In one embodiment, the matching module is further configured to calculate the similarity between each candidate map in the candidate atlas and the image, and to select, according to the similarity, the target map matching the image from the candidate atlas.
In one embodiment, the matching module is further configured to collect images over a period of time with a moving camera to obtain an image set; match each image in the image set with the candidate maps in the candidate atlas to obtain the matching degree between each image and each candidate map; calculate the matching degree between the image set and each candidate map from the matching degrees of the individual images; and determine the target map matching the image set according to the matching degree between the image set and each candidate map.
In one embodiment, the matching module is further configured to accumulate the matching degrees between the individual images in the image set and a candidate map to obtain the matching degree between the image set and that candidate map.
In one embodiment, the matching module is further configured to use multiple threads to compute, in parallel, the matching degree between each candidate map and each image in the image set.
In one embodiment, the above device further includes a map replacement module, configured to monitor the lighting conditions in real time and, when a change in the current lighting conditions is detected, notify the search module to search the map library again for a candidate atlas matching the current lighting conditions.
FIG. 8 shows the internal structure of the robot in one embodiment. The robot may specifically be a terminal or a server. As shown in FIG. 8, the robot includes a processor, a memory and a network interface connected through a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the robot stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement the above visual positioning method. A computer program may also be stored in the internal memory; when executed by the processor, it likewise causes the processor to perform the above visual positioning method. Those skilled in the art will understand that the structure shown in FIG. 8 is only a block diagram of part of the structure related to the solution of the present application and does not limit the robot to which the solution is applied; a specific robot may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, a robot is proposed, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the following steps: obtaining the current lighting conditions; searching the map library, according to the current lighting conditions, for a candidate atlas matching the current lighting conditions, the map library storing maps built under different lighting conditions; collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas, determining the map matching the image, and using the matched map as the target map for the current lighting conditions; and performing visual positioning based on the target map.
In one embodiment, the current lighting conditions include the current time and the current weather conditions, and searching the map library for a candidate atlas matching the lighting conditions includes: selecting from the map library, according to the current time, a first atlas whose time difference from the current time is within a preset range; and selecting from the first atlas, according to the current weather conditions, a second atlas matching the current weather conditions, the second atlas being used as the candidate atlas.
In one embodiment, collecting an image under the current lighting conditions, matching it against the maps in the candidate atlas, determining the matching map and using it as the target map for the current lighting conditions includes: calculating the similarity between each candidate map in the candidate atlas and the image; and selecting, according to the similarity, the target map matching the image from the candidate atlas.
In one embodiment, collecting an image under the current lighting conditions, matching it against the maps in the candidate atlas and obtaining the map matching the image includes: collecting images over a period of time with a moving camera to obtain an image set; matching each image in the image set with the candidate maps in the candidate atlas to obtain the matching degree between each image and each candidate map; calculating the matching degree between the image set and each candidate map from the matching degrees of the individual images; and determining the target map matching the image set according to the matching degree between the image set and each candidate map.
In one embodiment, calculating the matching degree between the image set and a candidate map from the matching degrees of the individual images includes: accumulating the matching degrees between the individual images and the candidate map to obtain the matching degree between the image set and the candidate map.
In one embodiment, calculating the matching degree between each image in the image set and the candidate maps includes: using multiple threads to compute, in parallel, the matching degree between each candidate map and each image in the image set.
In one embodiment, the computer program, when executed by the processor, further causes the processor to perform the following steps: monitoring the lighting conditions in real time, and when a change in the current lighting conditions is detected, returning to the step of searching the map library for a candidate atlas matching the current lighting conditions.
In one embodiment, a computer-readable storage medium is proposed, storing a computer program which, when executed by a processor, causes the processor to perform the following steps: obtaining the current lighting conditions; searching the map library, according to the current lighting conditions, for a candidate atlas matching the current lighting conditions, the map library storing maps built under different lighting conditions; collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas, determining the map matching the image, and using the matched map as the target map for the current lighting conditions; and performing visual positioning based on the target map.
In one embodiment, the current lighting conditions include the current time and the current weather conditions, and searching the map library for a candidate atlas matching the lighting conditions includes: selecting from the map library, according to the current time, a first atlas whose time difference from the current time is within a preset range; and selecting from the first atlas, according to the current weather conditions, a second atlas matching the current weather conditions, the second atlas being used as the candidate atlas.
In one embodiment, collecting an image under the current lighting conditions, matching it against the maps in the candidate atlas, determining the matching map and using it as the target map for the current lighting conditions includes: calculating the similarity between each candidate map in the candidate atlas and the image; and selecting, according to the similarity, the target map matching the image from the candidate atlas.
In one embodiment, collecting an image under the current lighting conditions, matching it against the maps in the candidate atlas and obtaining the map matching the image includes: collecting images over a period of time with a moving camera to obtain an image set; matching each image in the image set with the candidate maps in the candidate atlas to obtain the matching degree between each image and each candidate map; calculating the matching degree between the image set and each candidate map from the matching degrees of the individual images; and determining the target map matching the image set according to the matching degree between the image set and each candidate map.
In one embodiment, calculating the matching degree between the image set and a candidate map from the matching degrees of the individual images includes: accumulating the matching degrees between the individual images and the candidate map to obtain the matching degree between the image set and the candidate map.
In one embodiment, calculating the matching degree between each image in the image set and the candidate maps includes: using multiple threads to compute, in parallel, the matching degree between each candidate map and each image in the image set.
In one embodiment, the computer program, when executed by the processor, further causes the processor to perform the following steps: monitoring the lighting conditions in real time, and when a change in the current lighting conditions is detected, returning to the step of searching the map library for a candidate atlas matching the current lighting conditions.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing the relevant hardware through a computer program; the program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database or another medium used in the embodiments provided in the present application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For the sake of brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to be within the scope of this specification.
The above embodiments only express several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not therefore be understood as limiting the scope of the patent of the present application. It should be pointed out that, for those of ordinary skill in the art, several modifications and improvements can be made without departing from the concept of the present application, and these all fall within the scope of protection of the present application. Therefore, the scope of protection of this patent shall be subject to the appended claims.
Claims (21)
- A visual positioning method, characterized by comprising: obtaining current lighting conditions through a photosensitive element disposed on a robot; searching a map library, according to the current lighting conditions, for a candidate atlas matching the current lighting conditions, the map library storing maps built under different lighting conditions; collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas, determining the map matching the image, and using the matched map as the target map for the current lighting conditions; and performing visual positioning based on the target map.
- The method according to claim 1, characterized in that the current lighting conditions comprise a current time and current weather conditions, and searching the map library for a candidate atlas matching the lighting conditions according to the current lighting conditions comprises: selecting from the map library, according to the current time, a first atlas whose time difference from the current time is within a preset range; and selecting from the first atlas, according to the current weather conditions, a second atlas matching the current weather conditions, the second atlas being used as the candidate atlas.
- The method according to claim 1, characterized in that collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas, determining the map matching the image and using the matched map as the target map for the current lighting conditions comprises: calculating the similarity between each candidate map in the candidate atlas and the image; and selecting, according to the similarity, the target map matching the image from the candidate atlas.
- The method according to claim 1, characterized in that collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas and obtaining the map matching the image comprises: collecting images over a period of time with a moving camera to obtain an image set; matching each image in the image set with the candidate maps in the candidate atlas to obtain the matching degree between each image and each candidate map; calculating the matching degree between the image set and each candidate map from the matching degrees between the individual images and that candidate map; and determining the target map matching the image set according to the matching degree between the image set and each candidate map.
- The method according to claim 4, characterized in that calculating the matching degree between the image set and a candidate map from the matching degrees between the individual images and that candidate map comprises: accumulating the matching degrees between the individual images and the candidate map to obtain the matching degree between the image set and the candidate map.
- The method according to claim 5, characterized in that calculating the matching degree between each image in the image set and the candidate maps comprises: using multiple threads to compute, in parallel, the matching degree between each candidate map and each image in the image set.
- The method according to claim 1, characterized in that the method further comprises: monitoring the lighting conditions in real time, and when a change in the current lighting conditions is detected, returning to the step of searching the map library, according to the current lighting conditions, for a candidate atlas matching the current lighting conditions.
- A robot, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, performs the following steps: obtaining current lighting conditions through a photosensitive element disposed on the robot; searching a map library, according to the current lighting conditions, for a candidate atlas matching the current lighting conditions, the map library storing maps built under different lighting conditions; collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas, determining the map matching the image, and using the matched map as the target map for the current lighting conditions; and performing visual positioning based on the target map.
- The robot according to claim 8, characterized in that the current lighting conditions comprise a current time and current weather conditions, and searching the map library for a candidate atlas matching the lighting conditions according to the current lighting conditions comprises: selecting from the map library, according to the current time, a first atlas whose time difference from the current time is within a preset range; and selecting from the first atlas, according to the current weather conditions, a second atlas matching the current weather conditions, the second atlas being used as the candidate atlas.
- The robot according to claim 8, characterized in that collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas, determining the map matching the image and using the matched map as the target map for the current lighting conditions comprises: calculating the similarity between each candidate map in the candidate atlas and the image; and selecting, according to the similarity, the target map matching the image from the candidate atlas.
- The robot according to claim 8, characterized in that collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas and obtaining the map matching the image comprises: collecting images over a period of time with a moving camera to obtain an image set; matching each image in the image set with the candidate maps in the candidate atlas to obtain the matching degree between each image and each candidate map; calculating the matching degree between the image set and each candidate map from the matching degrees between the individual images and that candidate map; and determining the target map matching the image set according to the matching degree between the image set and each candidate map.
- The robot according to claim 11, characterized in that calculating the matching degree between the image set and a candidate map from the matching degrees between the individual images and that candidate map comprises: accumulating the matching degrees between the individual images and the candidate map to obtain the matching degree between the image set and the candidate map.
- The robot according to claim 12, characterized in that calculating the matching degree between each image in the image set and the candidate maps comprises: using multiple threads to compute, in parallel, the matching degree between each candidate map and each image in the image set.
- The robot according to claim 8, characterized in that the computer program, when executed by the processor, is further used to perform the following steps: monitoring the lighting conditions in real time, and when a change in the current lighting conditions is detected, returning to the step of searching the map library, according to the current lighting conditions, for a candidate atlas matching the current lighting conditions.
- A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the following steps: obtaining current lighting conditions through a photosensitive element disposed on a robot; searching a map library, according to the current lighting conditions, for a candidate atlas matching the current lighting conditions, the map library storing maps built under different lighting conditions; collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas, determining the map matching the image, and using the matched map as the target map for the current lighting conditions; and performing visual positioning based on the target map.
- The storage medium according to claim 15, characterized in that the current lighting conditions comprise a current time and current weather conditions, and searching the map library for a candidate atlas matching the lighting conditions according to the current lighting conditions comprises: selecting from the map library, according to the current time, a first atlas whose time difference from the current time is within a preset range; and selecting from the first atlas, according to the current weather conditions, a second atlas matching the current weather conditions, the second atlas being used as the candidate atlas.
- The storage medium according to claim 15, characterized in that collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas, determining the map matching the image and using the matched map as the target map for the current lighting conditions comprises: calculating the similarity between each candidate map in the candidate atlas and the image; and selecting, according to the similarity, the target map matching the image from the candidate atlas.
- The storage medium according to claim 15, characterized in that collecting an image under the current lighting conditions, matching the image against the maps in the candidate atlas and obtaining the map matching the image comprises: collecting images over a period of time with a moving camera to obtain an image set; matching each image in the image set with the candidate maps in the candidate atlas to obtain the matching degree between each image and each candidate map; calculating the matching degree between the image set and each candidate map from the matching degrees between the individual images and that candidate map; and determining the target map matching the image set according to the matching degree between the image set and each candidate map.
- The storage medium according to claim 18, characterized in that calculating the matching degree between the image set and a candidate map from the matching degrees between the individual images and that candidate map comprises: accumulating the matching degrees between the individual images and the candidate map to obtain the matching degree between the image set and the candidate map.
- The storage medium according to claim 19, characterized in that calculating the matching degree between each image in the image set and the candidate maps comprises: using multiple threads to compute, in parallel, the matching degree between each candidate map and each image in the image set.
- The storage medium according to claim 15, characterized in that the computer program, when executed by the processor, is further used to perform the following steps: monitoring the lighting conditions in real time, and when a change in the current lighting conditions is detected, returning to the step of searching the map library, according to the current lighting conditions, for a candidate atlas matching the current lighting conditions.
Priority Applications (1)
- PCT/CN2020/133919 (WO2022116156A1, zh) | Priority date: 2020-12-04 | Filing date: 2020-12-04 | Title: Visual positioning method, robot and storage medium
Applications Claiming Priority (1)
- PCT/CN2020/133919 (WO2022116156A1, zh) | Priority date: 2020-12-04 | Filing date: 2020-12-04 | Title: Visual positioning method, robot and storage medium
Publications (1)
- WO2022116156A1 | Publication date: 2022-06-09
Family
- ID=81852788
Family Applications (1)
- PCT/CN2020/133919 (WO2022116156A1, zh) | Filed 2020-12-04 | Status: active, Application Filing
Country Status (1)
- WO: WO2022116156A1 (zh)
Legal Events
- 121: Ep: the EPO has been informed by WIPO that EP was designated in this application (ref document number 20964002; country of ref document: EP; kind code of ref document: A1)
- NENP: non-entry into the national phase (ref country code: DE)
- 122: Ep: PCT application non-entry into the European phase (ref document number 20964002; country of ref document: EP; kind code of ref document: A1)