CN112488007B - Visual positioning method, device, robot and storage medium

Visual positioning method, device, robot and storage medium

Info

Publication number
CN112488007B
CN112488007B (application CN202011406851.8A)
Authority
CN
China
Prior art keywords
map
image
candidate
current
matched
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011406851.8A
Other languages
Chinese (zh)
Other versions
CN112488007A (en)
Inventor
刘志超
黄明强
赖有仿
谷雨隆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN202011406851.8A
Publication of CN112488007A
Application granted
Publication of CN112488007B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/60 Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model

Abstract

The application discloses a visual positioning method, which comprises the following steps: acquiring the current illumination condition; searching a map library, which stores maps built under different illumination conditions, for a candidate map set matching the current illumination condition; collecting an image under the current illumination condition, matching the image against the maps in the candidate map set, and taking the best-matching map as the target map for the current illumination condition; and performing visual positioning based on the target map. The method can still position accurately when the illumination condition changes. A visual positioning device, a robot and a storage medium are also provided.

Description

Visual positioning method, device, robot and storage medium
Technical Field
The application relates to the technical field of positioning and navigation, and in particular to a visual positioning method, a visual positioning device, a robot and a storage medium.
Background
Visual SLAM (simultaneous localization and mapping) is increasingly studied for indoor positioning because visual information is rich and cameras are inexpensive. Practical applications remain few, however, mainly because illumination changes in indoor environments significantly degrade the positioning accuracy and robustness of visual SLAM.
Current visual SLAM systems localize against a previously built map, but a single map captures only the illumination condition at mapping time, not the conditions at other times or under other weather, so positioning degrades when the system is used at a different time or in different weather. Indoor illumination also changes frequently: the lighting in the morning, afternoon and evening of the same day differs greatly, and weather changes such as rainy or cloudy days add further variation. As a result, current visual SLAM positioning works only under conditions similar to those at mapping time and cannot localize correctly when the illumination condition changes.
Disclosure of Invention
In view of the above, it is necessary to provide a visual positioning method, a visual positioning device, a robot, and a storage medium that can position accurately even when illumination conditions change.
A visual positioning method, comprising:
acquiring the current illumination condition through a light sensing element arranged on the robot;
searching a candidate map set matched with the current illumination condition in a map library according to the current illumination condition, wherein maps under different illumination conditions are stored in the map library;
collecting an image under the current illumination condition, matching the image with a map in the candidate map set, determining a map matched with the image, and taking the matched map as a target map under the current illumination condition;
and performing visual positioning based on the target map.
A robot comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of:
acquiring the current illumination condition through a light sensing element arranged on the robot;
searching a candidate map set matched with the current illumination condition in a map library according to the current illumination condition, wherein maps under different illumination conditions are stored in the map library;
collecting an image under the current illumination condition, matching the image with a map in the candidate map set, determining a map matched with the image, and taking the matched map as a target map under the current illumination condition;
and performing visual positioning based on the target map.
A computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
acquiring the current illumination condition through a light sensing element arranged on the robot;
searching a candidate map set matched with the current illumination condition in a map library according to the current illumination condition, wherein maps under different illumination conditions are stored in the map library;
collecting an image under the current illumination condition, matching the image with a map in the candidate map set, determining a map matched with the image, and taking the matched map as a target map under the current illumination condition;
and performing visual positioning based on the target map.
According to the visual positioning method, the visual positioning device, the robot and the storage medium, a candidate map set matching the current illumination condition is first searched in the map library, a target map matching the acquired image is then selected from the candidate map set, and visual positioning is performed based on the target map. Because the matched target map is selected according to the current illumination condition, visual positioning remains accurate even when the illumination condition changes.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings required by the embodiments or by the description of the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the application, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
Wherein:
FIG. 1 is a flow diagram of a visual positioning method in one embodiment;
FIG. 2 is a schematic diagram of a map gallery in one embodiment;
FIG. 3 is a schematic diagram of screening out candidate map sets based on time and weather in one embodiment;
FIG. 4 is a flow diagram of a method of determining a map matching an image in one embodiment;
FIG. 5 is a schematic diagram of selecting a map from a candidate set of maps that matches the set of images in one embodiment;
FIG. 6 is a schematic diagram of parallel computing using multithreading in one embodiment;
FIG. 7 is a block diagram of the visual positioning device in one embodiment;
fig. 8 is an internal structural view of the robot in one embodiment.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some embodiments of the application, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
As shown in fig. 1, a visual positioning method is proposed. The method can be applied to an intelligent terminal; this embodiment takes a robot as an example. The visual positioning method specifically comprises the following steps:
Step 102, acquiring the current illumination condition through a light sensing element arranged on the robot.
Visual positioning is strongly affected by illumination, so the illumination condition needs to be detected in real time; the light sensing element is preferably a camera. The current illumination condition refers to the lighting situation at the present moment. It is affected by time and weather: lighting differs in the morning, at noon and in the evening of the same day, and also across weather conditions such as sunny, cloudy and rainy days. In one embodiment, the current illumination condition is represented by the corresponding current time and current weather condition.
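The patent does not fix how the light sensing element quantifies illumination. The following Python sketch, offered only as an illustration, represents the current illumination condition by the current time, a weather label (assumed here to come from an external source such as a weather service), and the mean brightness of a frame from the sensing camera; the names, fields and thresholds are assumptions, not part of the disclosure.

    # Hypothetical sketch: sensing the current illumination condition with a camera.
    from dataclasses import dataclass
    from datetime import datetime

    import cv2
    import numpy as np

    @dataclass
    class LightingCondition:
        time: datetime      # current time of day
        weather: str        # e.g. "sunny", "cloudy", "rainy" (assumed external input)
        brightness: float   # mean gray level of the sensed frame, 0-255

    def sense_lighting(camera: cv2.VideoCapture, weather: str) -> LightingCondition:
        ok, frame = camera.read()
        if not ok:
            raise RuntimeError("failed to read a frame from the light-sensing camera")
        # mean brightness of the grayscale frame as a crude illumination measure
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return LightingCondition(time=datetime.now(), weather=weather,
                                 brightness=float(np.mean(gray)))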
Step 104, searching a candidate map set matched with the current illumination condition in a map library according to the current illumination condition, wherein maps under different illumination conditions are stored in the map library.
In order to position accurately, a map matching the current illumination condition needs to be selected from the map library for subsequent positioning. First, a matching candidate map set is found according to the current illumination condition: since the current illumination condition includes the current time and the current weather condition, the candidate map set is screened from the map library according to these two factors. The maps in the map gallery are stored by time and weather condition, as shown in FIG. 2, a schematic diagram of the map gallery in one embodiment: the gallery is first divided into sections by weather condition, and each section is then divided into sub-sections by time.
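As an illustration of this gallery layout (weather sections holding time-ordered maps), the following sketch uses hypothetical MapEntry and MapGallery types; these names and fields are assumptions, not from the patent.

    # Illustrative map gallery: grouped by weather, sorted by build time within each group.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class MapEntry:
        map_id: str
        weather: str            # weather condition at mapping time
        built_at: datetime      # time the map was built

    @dataclass
    class MapGallery:
        # weather section -> list of maps, kept sorted by build time
        sections: dict[str, list[MapEntry]] = field(default_factory=dict)

        def add(self, entry: MapEntry) -> None:
            bucket = self.sections.setdefault(entry.weather, [])
            bucket.append(entry)
            bucket.sort(key=lambda m: m.built_at)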
Step 106, collecting an image under the current illumination condition, matching the image with a map in the candidate map set, determining a map matched with the image, and taking the matched map as a target map under the current illumination condition.
Image acquisition is performed under the current illumination condition. The acquired image is matched against the maps in the candidate map set, and the candidate map most similar to the acquired image is taken as the matched target map. Matching can be implemented by computing the similarity between images, for example with the DBoW2 method.
Step 108, performing visual positioning based on the target map.
After the target map is determined, visual positioning is performed based on the target map using SLAM positioning technology. The method solves the problem that indoor illumination changes degrade the positioning accuracy and robustness of visual SLAM, so that visual SLAM positioning can run long-term in indoor environments without positioning failures caused by illumination, weather and similar factors, making it suitable for all kinds of indoor robots that need real-time positioning.
According to the visual positioning method, a candidate map set matching the current illumination condition is retrieved from the map library, a target map matching the acquired image is then selected from the candidate map set, and visual positioning is performed based on the target map. Because the target map is selected to match the current illumination condition, positioning remains accurate even when the illumination condition changes.
In one embodiment, the current illumination condition includes the current time and the current weather condition, and searching the map library for the candidate map set matching the current illumination condition includes: selecting, from the map library according to the current time, a first map set whose time difference from the current time is within a preset range; and selecting, from the first map set according to the current weather condition, a second map set matching the current weather condition, the second map set serving as the candidate map set.
Specifically, the current time is compared with the build times of all maps in the map library, and all maps whose time difference is within dt (for example, half an hour) are selected as the first map set. The current weather condition is then acquired, and all maps in the first map set whose weather is similar to the current weather are selected to form the second map set, which is the candidate map set. FIG. 3 is a schematic diagram of screening candidate map sets based on time and weather in one embodiment.
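A minimal sketch of this two-stage screening, reusing the MapEntry and MapGallery types from the sketch above. The first map set keeps maps whose build time of day is within dt of the current time (half an hour here, mirroring the example in the text), and the second keeps those whose weather label matches. Comparing times of day rather than full timestamps is an assumption; the patent only says times are compared.

    from datetime import datetime

    def _seconds_into_day(t: datetime) -> int:
        return t.hour * 3600 + t.minute * 60 + t.second

    def select_candidates(gallery: MapGallery, now: datetime, weather: str,
                          dt_seconds: int = 1800) -> list[MapEntry]:
        # first map set: build time of day within dt of the current time
        first_set = [m for maps in gallery.sections.values() for m in maps
                     if abs(_seconds_into_day(m.built_at) - _seconds_into_day(now))
                     <= dt_seconds]
        # second map set: weather label matches the current weather condition
        return [m for m in first_set if m.weather == weather]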
In one embodiment, the collecting an image under the current lighting condition, matching the image with a map in the candidate map set, determining a map matched with the image, and taking the matched map as a target map under the current lighting condition includes: calculating the similarity between each candidate map in the candidate map set and the image; and selecting a target map matched with the image from the candidate map set according to the similarity.
Matching between the image and a map is performed by computing similarity: the similarity between each candidate map in the candidate map set and the image is calculated, and the candidate map with the largest similarity is taken as the target map. The similarity can be computed in several ways, for example with the DBoW2 algorithm, the SIFT algorithm or the ORB algorithm; existing similarity computations can be used and are not repeated here.
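The patent names DBoW2, SIFT and ORB as options without fixing one. As a stand-in only, the sketch below scores two images by the fraction of ORB descriptor matches that pass Lowe's ratio test using OpenCV; a production system would more likely use DBoW2's bag-of-words score against the map's stored keyframes.

    import cv2

    _orb = cv2.ORB_create(nfeatures=1000)
    _matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

    def image_similarity(img_a, img_b) -> float:
        # extract binary ORB descriptors from both images
        _, desc_a = _orb.detectAndCompute(img_a, None)
        _, desc_b = _orb.detectAndCompute(img_b, None)
        if desc_a is None or desc_b is None:
            return 0.0
        # ratio test: keep matches clearly better than their runner-up
        pairs = _matcher.knnMatch(desc_a, desc_b, k=2)
        good = [p for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        return len(good) / max(len(desc_a), len(desc_b))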
As shown in fig. 4, in one embodiment, the collecting an image under the current illumination condition, matching the image with a map in the candidate map set, determining a map matched with the image, and taking the matched map as a target map under the current illumination condition includes:
and 106A, acquiring images for a period of time by moving the camera to obtain an image set.
The moving camera is mounted on the mobile robot; preferably, it is the same camera used to acquire the current illumination condition, although two separate cameras may of course be provided. Acquiring images over a period of time under the current illumination condition yields multiple images forming an image set, which allows a more accurate map to be matched.
Step 106B, matching each image in the image set with the candidate maps in the candidate map set to obtain the matching degree between each image and each candidate map.
The matching degree between each image and the candidate map can be obtained by matching each image in the image set with the candidate map in the candidate map set.
Step 106C, calculating the matching degree between the image set and the candidate map according to the matching degree between each image in the image set and the candidate map.
After the matching degree between each image and the candidate map is obtained, the matching degree between the image set and the candidate map can be calculated. In one embodiment, an average degree of matching of the images in the image set to the candidate map may be calculated, with the average degree of matching being taken as the degree of matching between the image set and the candidate map.
Step 106D, determining a target map matched with the image set according to the matching degree between the image set and each candidate map.
The candidate maps are ranked by their matching degree with the image set, and the candidate map with the largest matching degree is taken as the target map. Accurately selecting the target map matched to the current illumination condition improves the accuracy and stability of the subsequent visual positioning based on that map.
In one embodiment, the calculating the matching degree between the image set and the candidate map according to the matching degree between each image in the image set and the candidate map includes: and accumulating the matching degree between each image in the image set and the candidate map to obtain the matching degree between the image set and the candidate map.
The matching degrees between each image in the image set and the candidate map are accumulated, and the accumulated value is taken as the matching degree between the image set and the candidate map. In one embodiment, the following formula may be employed: s_j = Σ_i s_ij, where i indexes the i-th image in the image set, j indexes the j-th candidate map, s_j denotes the matching degree between the image set and candidate map j, and s_ij denotes the matching degree between the i-th image of the image set and candidate map j. In one embodiment, the matching degree is calculated with the DBoW2 algorithm and expressed as a BoW score; the higher the BoW score, the higher the matching degree. FIG. 5 is a schematic diagram of selecting the map matching the image set from the candidate map set according to the computed BoW scores in one embodiment.
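A direct transcription of the accumulation rule s_j = Σ_i s_ij: the score of candidate map j is the sum of its matching degrees with every image in the set, and the target map is the candidate with the largest accumulated score. Here score_fn stands for whatever per-image matcher is used (for example, a BoW score); the function names are illustrative, not from the patent.

    from typing import Callable, Sequence

    def atlas_score(images: Sequence, candidate_map, score_fn: Callable) -> float:
        # s_j = sum over images i of s_ij
        return sum(score_fn(img, candidate_map) for img in images)

    def best_map(images: Sequence, candidates: Sequence, score_fn: Callable):
        # target map: the candidate with the largest accumulated matching degree
        return max(candidates, key=lambda m: atlas_score(images, m, score_fn))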
In one embodiment, the calculating the matching degree between each image in the image set and the candidate map includes: and respectively and parallelly calculating the matching degree between the candidate map and each image in the image set by adopting multithreading.
In view of the amount of computation involved, a multithreaded approach is adopted when calculating the matching degrees between the image set and the candidate maps. FIG. 6 is a schematic diagram of multithreaded parallel computation: with 8 threads, thread i computes the matching scores of the image set against candidate maps i, i+8, i+16, and so on.
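A sketch of the striping shown in FIG. 6, reusing atlas_score from the sketch above: thread tid scores candidate maps tid, tid+threads, tid+2*threads, and so on. Note that in CPython this gives true parallelism only if the scoring function releases the GIL (as OpenCV and DBoW2 bindings typically do); the patent does not address this detail.

    import threading
    from typing import Callable, Sequence

    def parallel_scores(images: Sequence, candidates: Sequence,
                        score_fn: Callable, threads: int = 8) -> list[float]:
        scores = [0.0] * len(candidates)

        def worker(tid: int) -> None:
            # thread tid handles candidate maps tid, tid+threads, tid+2*threads, ...
            for j in range(tid, len(candidates), threads):
                scores[j] = atlas_score(images, candidates[j], score_fn)

        pool = [threading.Thread(target=worker, args=(i,)) for i in range(threads)]
        for t in pool:
            t.start()
        for t in pool:
            t.join()
        return scores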
In one embodiment, the method further comprises: monitoring the illumination condition in real time, and when a change in the current illumination condition is detected, returning to the step of searching the map library for a candidate map set matching the current illumination condition.
Illumination changes slowly as running time increases, for example from noon into the afternoon, or when rain begins after a period of operation. Such changes alter the illumination condition so that the previously selected map is no longer suitable and a map must be selected anew. Once a change in the current illumination condition is detected, a candidate map set matching the new condition is searched in the map library according to the current illumination condition, and a target map is then matched.
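A hypothetical change detector over the LightingCondition sketch above: the map is re-selected when the weather label changes or the sensed brightness jumps past a tolerance. The brightness tolerance is an assumed parameter; the patent does not specify the change test.

    def lighting_changed(prev: LightingCondition, cur: LightingCondition,
                         brightness_tol: float = 30.0) -> bool:
        # trigger re-selection on a weather change or a large brightness jump
        return (cur.weather != prev.weather
                or abs(cur.brightness - prev.brightness) > brightness_tol)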
In one embodiment, after the target map is determined, the open-source ORB-SLAM2 is selected as the positioning algorithm. First, ORB features and descriptors are extracted for each new image acquired by the camera. Then, depending on whether a camera velocity is available, either the velocity model is used for tracking (when a velocity is available) or the last keyframe is tracked (when it is not). Finally, the number of tracked feature points is counted: if it is too small, the relocalization module is started; otherwise the next frame is processed, until visual positioning is completed.
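A structural sketch of this tracking loop, modelled on (not copied from) ORB-SLAM2: extract ORB features per frame, track with the velocity model when a camera velocity is available and against the last keyframe otherwise, and fall back to relocalization when too few points are tracked. The tracker callables and the MIN_TRACKED threshold are placeholders, not values from the patent or from ORB-SLAM2.

    from typing import Callable, Optional, Sequence

    import cv2

    MIN_TRACKED = 30  # assumed lower bound on tracked feature points
    _orb = cv2.ORB_create()

    def process_frame(frame,
                      velocity: Optional[Sequence[float]],
                      track_velocity_model: Callable,
                      track_last_keyframe: Callable,
                      relocalize: Callable) -> bool:
        # 1. extract ORB features and descriptors for the new image
        keypoints, descriptors = _orb.detectAndCompute(frame, None)
        # 2. prefer the velocity model when a camera velocity estimate exists
        if velocity is not None:
            tracked = track_velocity_model(keypoints, descriptors)
        else:
            tracked = track_last_keyframe(keypoints, descriptors)
        # 3. too few tracked points: hand over to the relocalization module
        if len(tracked) < MIN_TRACKED:
            relocalize(keypoints, descriptors)
            return False      # tracking lost for this frame
        return True           # pose tracked; continue with the next frame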
As shown in fig. 7, in one embodiment, a visual positioning device is provided, comprising:
an acquisition module 702, configured to acquire a current illumination condition through a light sensing element disposed on the robot;
a searching module 704, configured to search, according to the current lighting condition, a candidate map set that matches the current lighting condition in a map gallery, where maps under different lighting conditions are stored;
the matching module 706 is configured to collect an image under a current lighting condition, match the image with a map in the candidate map set, determine a map matched with the image, and use the matched map as a target map under the current lighting condition;
a positioning module 708 for performing visual positioning based on the target map.
In one embodiment, the current lighting conditions include: current time and current weather conditions;
the searching module 704 is further configured to select, according to the current time, a first atlas with a time difference within a preset range from the map library, select, according to the current weather condition, a second atlas matching the current weather condition from the first candidate atlas, and use the second atlas as the candidate atlas.
In one embodiment, the matching module is further configured to calculate a similarity between each candidate map in the candidate map set and the image; and selecting a target map matched with the image from the candidate map set according to the similarity.
In one embodiment, the matching module is further configured to acquire images over a period of time with the moving camera to obtain an image set; match each image in the image set with the candidate maps in the candidate map set to obtain the matching degree between each image and each candidate map; calculate the matching degree between the image set and each candidate map according to the matching degree between each image in the image set and the candidate map; and determine a target map matched with the image set according to the matching degree between the image set and each candidate map.
In one embodiment, the matching module is further configured to accumulate the matching degree between each image in the image set and the candidate map to obtain the matching degree between the image set and the candidate map.
In one embodiment, the matching module is further configured to calculate, in parallel, a degree of matching between the candidate map and each image in the set of images, respectively, using multithreading.
In one embodiment, the apparatus further comprises: and the map replacement module is used for monitoring the illumination condition in real time, and notifying the searching module to search the candidate map set matched with the current illumination condition in the map library according to the current illumination condition when the current illumination condition is monitored to change.
Fig. 8 shows the internal structure of the robot in one embodiment. As shown in fig. 8, the robot includes a processor, a memory, a light sensing element, and a network interface connected through a system bus. The memory includes a nonvolatile storage medium and an internal memory. The nonvolatile storage medium of the robot stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the visual positioning method described above. The internal memory may also store a computer program which, when executed by the processor, causes the processor to perform the visual positioning method described above. Those skilled in the art will appreciate that the structure shown in fig. 8 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the robots to which the solution is applied; a particular robot may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a robot is presented comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of: acquiring a current illumination condition; searching a candidate map set matched with the current illumination condition in a map library according to the current illumination condition, wherein maps under different illumination conditions are stored in the map library; collecting an image under the current illumination condition, matching the image with a map in the candidate map set, determining a map matched with the image, and taking the matched map as a target map under the current illumination condition; and performing visual positioning based on the target map.
In one embodiment, the current illumination condition includes the current time and the current weather condition; searching the map library for the candidate map set matching the current illumination condition includes: selecting, from the map library according to the current time, a first map set whose time difference from the current time is within a preset range; and selecting, from the first map set according to the current weather condition, a second map set matching the current weather condition, the second map set serving as the candidate map set.
In one embodiment, the collecting an image under the current lighting condition, matching the image with a map in the candidate map set, determining a map matched with the image, and taking the matched map as a target map under the current lighting condition includes: calculating the similarity between each candidate map in the candidate map set and the image; and selecting a target map matched with the image from the candidate map set according to the similarity.
In one embodiment, acquiring the image under the current illumination condition and matching the image with the maps in the candidate map set to obtain a map matched with the image includes: acquiring images over a period of time with the moving camera to obtain an image set; matching each image in the image set with the candidate maps in the candidate map set to obtain the matching degree between each image and each candidate map; calculating the matching degree between the image set and each candidate map according to the matching degree between each image in the image set and the candidate map; and determining a target map matched with the image set according to the matching degree between the image set and each candidate map.
In one embodiment, the calculating the matching degree between the image set and the candidate map according to the matching degree between each image in the image set and the candidate map includes: and accumulating the matching degree between each image in the image set and the candidate map to obtain the matching degree between the image set and the candidate map.
In one embodiment, the calculating the matching degree between each image in the image set and the candidate map includes: and respectively and parallelly calculating the matching degree between the candidate map and each image in the image set by adopting multithreading.
In one embodiment, the computer program, when executed by the processor, is further configured to perform the steps of: monitoring the illumination condition in real time, and when a change in the current illumination condition is detected, returning to the step of searching the map library for a candidate map set matching the current illumination condition.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, causes the processor to perform the steps of: acquiring a current illumination condition; searching a candidate map set matched with the current illumination condition in a map library according to the current illumination condition, wherein maps under different illumination conditions are stored in the map library; collecting an image under the current illumination condition, matching the image with a map in the candidate map set, determining a map matched with the image, and taking the matched map as a target map under the current illumination condition; and performing visual positioning based on the target map.
In one embodiment, the current illumination condition includes the current time and the current weather condition; searching the map library for the candidate map set matching the current illumination condition includes: selecting, from the map library according to the current time, a first map set whose time difference from the current time is within a preset range; and selecting, from the first map set according to the current weather condition, a second map set matching the current weather condition, the second map set serving as the candidate map set.
In one embodiment, the collecting an image under the current lighting condition, matching the image with a map in the candidate map set, determining a map matched with the image, and taking the matched map as a target map under the current lighting condition includes: calculating the similarity between each candidate map in the candidate map set and the image; and selecting a target map matched with the image from the candidate map set according to the similarity.
In one embodiment, acquiring the image under the current illumination condition and matching the image with the maps in the candidate map set to obtain a map matched with the image includes: acquiring images over a period of time with the moving camera to obtain an image set; matching each image in the image set with the candidate maps in the candidate map set to obtain the matching degree between each image and each candidate map; calculating the matching degree between the image set and each candidate map according to the matching degree between each image in the image set and the candidate map; and determining a target map matched with the image set according to the matching degree between the image set and each candidate map.
In one embodiment, the calculating the matching degree between the image set and the candidate map according to the matching degree between each image in the image set and the candidate map includes: and accumulating the matching degree between each image in the image set and the candidate map to obtain the matching degree between the image set and the candidate map.
In one embodiment, the calculating the matching degree between each image in the image set and the candidate map includes: and respectively and parallelly calculating the matching degree between the candidate map and each image in the image set by adopting multithreading.
In one embodiment, the computer program, when executed by the processor, is further configured to perform the steps of: monitoring the illumination condition in real time, and when a change in the current illumination condition is detected, returning to the step of searching the map library for a candidate map set matching the current illumination condition.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware, where the program may be stored in a nonvolatile computer-readable storage medium and, when executed, may include the processes of the method embodiments described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include nonvolatile and/or volatile memory. The nonvolatile memory can include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, any combination of them that involves no contradiction should be considered within the scope of this description.
The foregoing embodiments merely illustrate several implementations of the application; their description is detailed but is not to be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the application, and these all fall within the protection scope of the application. Accordingly, the protection scope of the application shall be subject to the appended claims.

Claims (9)

1. A method of visual localization comprising:
acquiring the current illumination condition through a light sensing element arranged on the robot;
searching a candidate map set matched with the current illumination condition in a map library according to the current illumination condition, wherein maps under different illumination conditions are stored in the map library;
collecting an image under the current illumination condition, matching the image with a map in the candidate map set, determining a map matched with the image, and taking the matched map as a target map under the current illumination condition;
performing visual positioning based on the target map;
the step of collecting the image under the current illumination condition, and matching the image with the map in the candidate map set to obtain a map matched with the image, includes:
acquiring images for a period of time by a mobile camera to obtain an image set;
matching each image in the image set with a candidate map in the candidate map set to obtain the matching degree between each image and the candidate map;
calculating the matching degree between the image set and the candidate map according to the matching degree between each image in the image set and the candidate map;
and determining a target map matched with the image set according to the matching degree between the image set and each candidate map.
2. The method of claim 1, wherein the current lighting conditions comprise: current time and current weather conditions;
the searching the candidate map set matched with the illumination condition in the map library according to the current illumination condition comprises the following steps:
selecting a first map set with the time difference within a preset range from the map library according to the current time;
and selecting a second map set matched with the current weather condition from the first map set according to the current weather condition, and taking the second map set as the candidate map set.
3. The method of claim 1, wherein the acquiring an image under the current lighting condition, matching the image with a map in the candidate map set, determining a map matched with the image, and taking the matched map as a target map under the current lighting condition comprises:
calculating the similarity between each candidate map in the candidate map set and the image;
and selecting a target map matched with the image from the candidate map set according to the similarity.
4. The method of claim 1, wherein the calculating the degree of matching between the image set and the candidate map based on the degree of matching between each image in the image set and the candidate map comprises:
and accumulating the matching degree between each image in the image set and the candidate map to obtain the matching degree between the image set and the candidate map.
5. The method of claim 4, wherein the calculating a degree of matching between each image in the set of images and the candidate map comprises:
and respectively and parallelly calculating the matching degree between the candidate map and each image in the image set by adopting multithreading.
6. The method according to claim 1, wherein the method further comprises:
monitoring illumination conditions in real time, and when the current illumination conditions are monitored to change, entering a step of searching a candidate map set matched with the current illumination conditions in a map library according to the current illumination conditions.
7. A visual positioning device, comprising:
the acquisition module is used for acquiring the current illumination condition through a light sensing element arranged on the robot;
the searching module is used for searching a candidate map set matched with the current illumination condition in a map library according to the current illumination condition, and maps under different illumination conditions are stored in the map library;
the matching module is used for collecting images under the current illumination condition, matching the images with the maps in the candidate map set, determining the map matched with the images, and taking the matched map as a target map under the current illumination condition;
the positioning module is used for performing visual positioning based on the target map;
the matching module is further configured to:
acquiring images for a period of time by a mobile camera to obtain an image set;
matching each image in the image set with a candidate map in the candidate map set to obtain the matching degree between each image and the candidate map;
calculating the matching degree between the image set and the candidate map according to the matching degree between each image in the image set and the candidate map;
and determining a target map matched with the image set according to the matching degree between the image set and each candidate map.
8. A computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the visual positioning method of any one of claims 1 to 6.
9. A robot comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the visual positioning method of any one of claims 1 to 6.
CN202011406851.8A 2020-12-04 2020-12-04 Visual positioning method, device, robot and storage medium Active CN112488007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011406851.8A CN112488007B (en) 2020-12-04 2020-12-04 Visual positioning method, device, robot and storage medium


Publications (2)

Publication Number Publication Date
CN112488007A (en) 2021-03-12
CN112488007B (en) 2023-10-13

Family

ID=74939306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011406851.8A Active CN112488007B (en) 2020-12-04 2020-12-04 Visual positioning method, device, robot and storage medium

Country Status (1)

Country Link
CN (1) CN112488007B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106092104A (en) * 2016-08-26 2016-11-09 深圳微服机器人科技有限公司 The method for relocating of a kind of Indoor Robot and device
CN107223244A (en) * 2016-12-02 2017-09-29 深圳前海达闼云端智能科技有限公司 Localization method and device
CN109074408A (en) * 2018-07-16 2018-12-21 深圳前海达闼云端智能科技有限公司 Map loading method and device, electronic equipment and readable storage medium
CN109269493A (en) * 2018-08-31 2019-01-25 北京三快在线科技有限公司 A kind of localization method and device, mobile device and computer readable storage medium
CN110672102A (en) * 2019-10-18 2020-01-10 劢微机器人科技(深圳)有限公司 Visual auxiliary robot initialization positioning method, robot and readable storage medium
CN111435538A (en) * 2019-01-14 2020-07-21 上海欧菲智能车联科技有限公司 Positioning method, positioning system, and computer-readable storage medium
CN111652929A (en) * 2020-06-03 2020-09-11 全球能源互联网研究院有限公司 Visual feature identification and positioning method and system
CN111652934A (en) * 2020-05-12 2020-09-11 Oppo广东移动通信有限公司 Positioning method, map construction method, device, equipment and storage medium
CN111780763A (en) * 2020-06-30 2020-10-16 杭州海康机器人技术有限公司 Visual positioning method and device based on visual map

Also Published As

Publication number Publication date
CN112488007A (en) 2021-03-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant