CN112348865B - Loop detection method and device, computer readable storage medium and robot - Google Patents

Loop detection method and device, computer readable storage medium and robot

Info

Publication number
CN112348865B
CN112348865B
Authority
CN
China
Prior art keywords
camera
loop
detection
cameras
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011192052.5A
Other languages
Chinese (zh)
Other versions
CN112348865A (en)
Inventor
蒋晨晨
刘志超
郭睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN202011192052.5A priority Critical patent/CN112348865B/en
Publication of CN112348865A publication Critical patent/CN112348865A/en
Application granted granted Critical
Publication of CN112348865B publication Critical patent/CN112348865B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/35: Determination of transform parameters for the alignment of images, i.e. image registration using statistical methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20112: Image segmentation details
    • G06T 2207/20164: Salient point detection; Corner detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The application belongs to the technical field of robots, and particularly relates to a loop detection method and device, a computer readable storage medium and a robot. The method is applied to a robot that includes more than two cameras, each camera synchronously acquiring images in a different detection direction, and comprises the following steps: performing loop detection according to an image acquired by a first camera to obtain an initial detection result, wherein the first camera is the camera with the largest viewing angle among the cameras; and if the initial detection result is a detection failure, performing cross loop detection among the cameras according to the images acquired by the cameras to obtain a cross detection result. With the method and the device, cross detection in multiple detection directions can be performed when the initial loop detection fails, breaking the limitation that loop detection in the prior art is tied to the path direction and avoiding excessive repeated paths in the same direction, thereby greatly improving the mapping efficiency.

Description

Loop detection method and device, computer readable storage medium and robot
Technical Field
The application belongs to the technical field of robots, and particularly relates to a loop detection method and device, a computer readable storage medium and a robot.
Background
In the prior art, a robot requires that the path and direction followed during loop detection be consistent with the path and direction followed while the map was built in order to output an accurate loop detection result. As a result, the mobile device always needs to travel many repeated paths in the same direction while building the map, which leads to low mapping efficiency.
Disclosure of Invention
In view of the above, the embodiments of the present application provide a loop detection method and device, a computer readable storage medium and a robot, so as to solve the problem that the existing loop detection method requires a mobile device to travel many repeated paths in the same direction, resulting in low mapping efficiency.
A first aspect of an embodiment of the present application provides a loop detection method applied to a robot, where the robot includes more than two cameras and each camera synchronously acquires images in a different detection direction. The method may include:
performing loop detection according to an image acquired by a first camera to obtain an initial detection result, wherein the first camera is the camera with the largest viewing angle among the cameras;
and if the initial detection result is a detection failure, performing cross loop detection among the cameras according to the images acquired by the cameras to obtain a cross detection result.
Further, the cross loop detection between the cameras according to the images acquired by the cameras may include:
respectively counting the number of feature points in the images acquired by the cameras;
selecting, from among the cameras that have not been selected, the camera with the largest number of feature points in its image as a target camera;
sequentially performing cross loop detection between the target camera and the other cameras that have not been selected;
and if no loop is detected, returning to the step of selecting, from among the cameras that have not been selected, the camera with the largest number of feature points in its image as the target camera, and the subsequent steps, until a preset termination condition is met.
Further, the termination condition is that a loop is detected, the detection duration is greater than a preset duration threshold, or each camera has been selected through traversal.
Further, the step of sequentially performing cross loop detection between the target camera and the other cameras that have not been selected may include:
calculating target feature information of an image acquired by the target camera;
and performing loop detection in a loop database corresponding to a selected camera according to the target feature information, wherein the selected camera is any one of the other cameras that have not yet been selected.
Further, the calculating the target feature information of the image acquired by the target camera may include:
extracting a corner feature set of an image acquired by the target camera;
calculating a descriptor set corresponding to the corner feature set;
and taking the corner feature set and the descriptor set as the target feature information.
Further, the performing loop detection in the loop database corresponding to the selected camera according to the target feature information may include:
calculating the number of inlier point matches between the target feature information and selected feature information, wherein the selected feature information is the feature information of any historical image stored in the loop database;
and if the number of inlier point matches between the target feature information and the selected feature information is greater than a preset number threshold, determining that a loop is detected.
Further, before loop detection from the image acquired by the first camera, the method may further include:
synchronizing each camera to obtain synchronized cameras;
and carrying out joint initialization processing on each synchronized camera to obtain initialized cameras.
A second aspect of the embodiments of the present application provides a loop detection device, which is applied to a robot, where the robot includes more than two cameras, and each camera performs image acquisition synchronously in different detection directions, and the device may include:
the initial loop detection module is used for performing loop detection according to the image acquired by the first camera to obtain an initial detection result, wherein the first camera is the camera with the largest viewing angle among the cameras;
and the cross loop detection module is used for performing cross loop detection among the cameras according to the images acquired by the cameras if the initial detection result is a detection failure, so as to obtain a cross detection result.
Further, the cross loop detection module may include:
the characteristic point quantity counting sub-module is used for counting the quantity of characteristic points in the images acquired by each camera respectively;
a target camera selecting sub-module, configured to select, from among the cameras that have not been selected, a camera with the largest number of feature points in the image as a target camera;
the cross loop detection sub-module is used for sequentially carrying out cross loop detection on the target camera and other cameras which are not selected; if loop is not detected, continuing to execute the functions of the target camera selecting sub-module and the cross loop detecting sub-module until a preset termination condition is met.
Further, the termination condition is that a loop is detected, the detection duration is greater than a preset duration threshold, or each camera has been selected through traversal.
Further, the cross loop detection sub-module may include:
the target feature information calculation unit is used for calculating target feature information of the image acquired by the target camera;
and the loop detection unit is used for carrying out loop detection in a loop database corresponding to the selected camera according to the target characteristic information, wherein the selected camera is any one camera which is not selected yet.
Further, the target feature information calculating unit may include:
the angular point feature extraction subunit is used for extracting an angular point feature set of the image acquired by the target camera;
a descriptor calculating subunit, configured to calculate a descriptor set corresponding to the corner feature set;
and the target feature information determining subunit is used for taking the corner feature set and the descriptor set as the target feature information.
Further, the loop detection unit may include:
an inlier point match counting subunit, configured to calculate the number of inlier point matches between the target feature information and selected feature information, where the selected feature information is the feature information of any historical image stored in the loop database;
and the loop determining subunit is used for determining that a loop is detected if the number of inlier point matches between the target feature information and the selected feature information is greater than a preset number threshold.
Further, the loop detection device may further include:
the synchronization module is used for carrying out synchronization processing on each camera to obtain synchronous cameras;
and the initialization module is used for carrying out joint initialization processing on each synchronized camera to obtain each initialized camera.
A third aspect of the embodiments of the present application provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the loop detection methods described above.
A fourth aspect of the embodiments of the present application provides a robot, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of any one of the loop detection methods described above when executing the computer program.
A fifth aspect of an embodiment of the application provides a computer program product for, when run on a robot, causing the robot to perform the steps of any of the loop detection methods described above.
Compared with the prior art, the embodiment of the application has the following beneficial effects: in the embodiment of the application, more than two cameras are arranged on the robot, the cameras synchronously acquire images in different detection directions, loop detection is first performed according to the image acquired by the camera with the largest viewing angle to obtain an initial detection result, and if the initial detection result is a detection failure, cross loop detection is performed among the cameras according to the images acquired by the cameras to obtain a cross detection result. With the embodiment of the application, cross detection in multiple detection directions can be performed when the initial loop detection fails, breaking the limitation of loop detection on the path direction in the prior art and avoiding excessive repeated paths in the same direction, thereby greatly improving the mapping efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a layout of cameras on a robot;
FIG. 2 is a schematic diagram of a process of robot mapping and loop detection by each camera;
FIG. 3 is a schematic diagram of a process for joint initialization processing for each camera;
FIG. 4 is a flowchart of an embodiment of a loop detection method according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of cross loop detection between cameras based on images acquired by the cameras;
FIG. 6 is a schematic structural diagram of an embodiment of a loop detection device according to an embodiment of the present application;
fig. 7 is a schematic block diagram of a robot in an embodiment of the application.
Detailed Description
In order to make the objects, features and advantages of the present application more comprehensible, the technical solutions in the embodiments of the present application are described in detail below with reference to the accompanying drawings, and it is apparent that the embodiments described below are only some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
In addition, in the description of the present application, the terms "first," "second," "third," etc. are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
Visual localization technology has developed over more than a decade and has gradually moved from academic results toward commercial application. Currently, although visual positioning technologies based on various types of cameras exist, a single camera, or a single type of camera, still cannot meet the extremely high stability and precision requirements of a robot in actual use. The advantages and disadvantages of each type of camera are as follows:
the monocular camera has a problem of scale uncertainty, and the fusion of the monocular camera with the inertial measurement unit (Inertial Measurement Unit, IMU) can improve its accuracy to some extent, but in some cases, for example when the IMU measurement exceeds the range, its positioning accuracy still deteriorates. But the monocular camera has low cost and low requirements on the structure of the robot, and can be flexibly placed on the robot.
The binocular camera has the advantages that the scale is unchanged, and the depth estimation can be carried out on distant objects because the length of the base line can be adjusted at will. However, the depth estimation of the space point is not as accurate as that of an RGBD camera, the image construction precision is affected, and the requirements on a mechanical structure are high and the size is large.
RGBD cameras can provide more accurate depth information, but have limited depth map ranges, and the effective range is generally within 30cm to 8m, so that the robot cannot accurately position and map in a relatively open hall.
Each camera can be further divided into different imaging models, commonly known as pinhole models or fish eye models. Pinhole model camera distortion is small, but the field of view is narrow; the fisheye camera is distorted a lot, but the field of view is wide, up to 270 °.
In order to fully utilize the advantages of each camera and maximize the sensing range and positioning accuracy of the robot, in the embodiment of the present application more than two cameras of various types can be preset on the robot; the specific number of cameras can be set according to actual conditions and is not specifically limited in the embodiment of the present application. The layout of the cameras on the robot can also be set according to actual conditions, as long as the detection directions of the cameras are kept as different as possible and overlap between the fields of view of the cameras is avoided. FIG. 1 is a schematic layout diagram of the cameras on the robot, in which the cube represents the robot and the solid circles represent the cameras; as shown in the figure, 6 cameras are respectively arranged on the 6 faces of the robot and synchronously acquire images in the 6 detection directions above, below, to the left of, to the right of, in front of and behind the robot. It should be understood that FIG. 1 is only a specific example of a camera layout in the embodiment of the present application; in practical application, other layouts may be adopted according to the specific situation, and the embodiment of the present application is not limited thereto.
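For illustration only, such a layout could be captured in a small configuration structure. The following Python sketch is not part of the patented method; the identifiers, direction names and field-of-view values are hypothetical.

    # Hypothetical configuration for the six-camera layout of FIG. 1.
    # Direction names and field-of-view values are illustrative only.
    CAMERA_LAYOUT = [
        {"id": "cam_front", "direction": "front", "fov_deg": 180},
        {"id": "cam_rear",  "direction": "rear",  "fov_deg": 120},
        {"id": "cam_left",  "direction": "left",  "fov_deg": 90},
        {"id": "cam_right", "direction": "right", "fov_deg": 90},
        {"id": "cam_up",    "direction": "up",    "fov_deg": 120},
        {"id": "cam_down",  "direction": "down",  "fov_deg": 90},
    ]

    # The "first camera" used by the method described later is the camera
    # with the largest viewing angle.
    first_camera = max(CAMERA_LAYOUT, key=lambda c: c["fov_deg"])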
The embodiment of the present application does not specifically limit the type of the cameras; each camera may be any one of, or any combination of, an RGBD camera, a binocular camera and a monocular camera. The imaging model of each camera may be pinhole, fisheye, or the like. Furthermore, an IMU can be fused in to ensure positioning accuracy and speed.
As shown in FIG. 2, when the robot builds a map, it may first perform synchronization processing on the cameras to obtain synchronized cameras, and then perform joint initialization processing on the synchronized cameras to obtain initialized cameras.
In the embodiment of the present application, the software development kit (Software Development Kit, SDK) of each camera can be used to trigger all cameras simultaneously with a serial-port signal, so that images are acquired and the cameras and the IMU are synchronized as closely as possible. After the system is initialized, feature extraction and matching are performed on the image of each camera, and then feature point tracking is carried out: first, the 3D map points are matched with the 2D points of the current frame to generate a reprojection error function; then, the 2D point features of the previous frame and the current frame are matched, the relative pose between the two frames computed from IMU pre-integration is used for triangulation, new 3D map points are calculated, and a new reprojection error function is generated. If the camera is a binocular camera, more accurate triangulation is performed between the two lenses to generate 3D map points; if the camera is an RGBD camera, the feature points are directly back-projected into three-dimensional space, with their depth set to the measured depth value. Finally, over a period of time, the reprojection error functions are minimized, and the local 3D map points and the robot poses within that period are optimized. After the optimization is completed, old image frames are removed, new frames are added, and map points are added and optimized in this cyclic manner.
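As a minimal illustration of the reprojection error mentioned above (not the patented implementation; the pinhole intrinsic matrix K and the pose R, t are assumed inputs), the residual for one 3D map point could be written as follows:

    import numpy as np

    def reprojection_residual(point_3d, obs_2d, K, R, t):
        # point_3d: (3,) map point in world coordinates
        # obs_2d:   (2,) measured pixel position in the current frame
        # K:        (3,3) pinhole intrinsic matrix
        # R, t:     world-to-camera rotation (3,3) and translation (3,)
        p_cam = R @ np.asarray(point_3d) + np.asarray(t)   # into the camera frame
        uv = K @ p_cam
        uv = uv[:2] / uv[2]                                # perspective division
        return uv - np.asarray(obs_2d)                     # residual to be minimized

Minimizing the sum of squared residuals of all such terms over a window of frames corresponds to the local optimization of map points and robot poses described above.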
Since visual SLAM initialization based on a binocular camera or an RGBD camera is relatively simple, while the visual-inertial joint initialization of a monocular camera is comparatively troublesome because the monocular camera provides no scale information, a redundant initialization step can be adopted in the embodiment of the present application in order to be compatible with different types of cameras; that is, joint initialization is required regardless of the camera combination. The initialization targets include: the absolute scale, the gyroscope bias, the accelerometer bias, the gravitational acceleration, and the velocity at each key image frame instant. The difference between the binocular camera and the RGBD camera on the one hand and the monocular camera on the other is that the former can provide a relatively accurate scale (s) for system optimization in advance; therefore, in the multi-camera system, if a binocular camera or an RGBD camera is included, its scale information is computed preferentially and the joint initialization is performed afterwards. The initialization process is shown in FIG. 3. The visual initialization is based on purely visual feature point matching, and the PnP (Perspective-n-Point) method is used to calculate the camera poses of all initial frames in the sliding window and the 3D positions of the feature points. Gyroscope bias calibration computes the difference between the rotation measured by the gyroscope and the rotation observed visually, and minimizes the sum of these differences over a period of time to obtain the gyroscope bias. In the calibration of the velocity (denoted v), the gravitational acceleration (denoted g) and the scale (denoted s), the pose information solved between every two frames through three-dimensional reconstruction based on SfM (Structure from Motion) and the position and velocity pre-integrated by the IMU are mainly used to construct a least-squares problem, from which v, g and s are solved. Gravity optimization takes the gravitational acceleration obtained in the previous step, fixes its magnitude, re-parameterizes it in the tangent space so that it changes from 3 degrees of freedom to 2 degrees of freedom, and finely adjusts its direction.
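The PnP step of the visual initialization can be illustrated, purely as an example and under the assumption that an OpenCV-style solver is available, by the following sketch; the function name and parameters are illustrative and not prescribed by the patent.

    import cv2
    import numpy as np

    def estimate_initial_pose(points_3d, points_2d, K):
        # points_3d: (N,3) triangulated feature positions
        # points_2d: (N,2) corresponding pixel observations
        # K:         (3,3) camera intrinsic matrix
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            points_3d.astype(np.float32),
            points_2d.astype(np.float32),
            K.astype(np.float64),
            distCoeffs=None)
        if not ok:
            return None
        R, _ = cv2.Rodrigues(rvec)    # rotation vector -> rotation matrix
        return R, tvec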
After initialization is completed, the robot can build a map based on the images acquired by the cameras, and loop detection is performed during the mapping process. As shown in FIG. 4, an embodiment of a loop detection method in an embodiment of the present application may include:
and S401, performing loop detection according to the image acquired by the first camera to obtain an initial detection result.
The first camera is the camera with the largest viewing angle among the cameras; that is, in the initial loop detection process, loop detection is performed using only the image acquired by the camera with the largest viewing angle, and no cross detection among the cameras is performed at this stage, so the problem reduces to existing loop detection based on a single camera. For the specific detection process, reference may be made to any loop detection method in the prior art, which is not repeated in the embodiment of the present application.
In the initial loop detection process, if a loop is successfully detected, the subsequent step S402 does not need to be executed; otherwise, if no loop is detected in the initial loop detection process, that is, if the initial detection result is a detection failure, step S402 is executed.
Step S402, performing cross loop detection among the cameras according to the images acquired by the cameras to obtain a cross detection result.
As shown in fig. 5, step S402 may specifically include the following procedures:
step S4021, counting the number of feature points in the images acquired by each camera.
Step S4022, selecting, from among the cameras that have not been selected, the camera with the largest number of feature points in the image as the target camera.
Step S4023, performing cross loop detection on the target camera and other cameras that have not been selected.
First, target feature information of an image acquired by the target camera is calculated.
Specifically, a corner feature set of the image acquired by the target camera is extracted, and a descriptor set corresponding to the corner feature set is calculated. Corner feature extraction and the corresponding descriptor calculation are both prior art; the corner feature in the embodiment of the present application may be any common corner feature, including Harris, ORB, SIFT or SURF, and the specific process is not repeated in the embodiment of the present application. After the calculation is completed, the corner feature set and the descriptor set can be used as the target feature information.
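As one possible concrete instance (ORB is chosen here only as an example of the corner features and descriptors listed above, and OpenCV is an assumed dependency, not something required by the patent), the target feature information could be computed as:

    import cv2

    def extract_target_features(image_gray):
        # Detect corner features and compute their descriptors in one pass.
        orb = cv2.ORB_create(nfeatures=1000)
        keypoints, descriptors = orb.detectAndCompute(image_gray, None)
        # The corner feature set and the descriptor set together form
        # the target feature information.
        return keypoints, descriptors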
And then, carrying out loop detection in a loop database corresponding to the selected camera according to the target characteristic information.
The selected camera is any one of the other cameras that have not been selected; these other cameras may be traversed in descending order of the number of feature points in their acquired images, with each in turn being taken as the selected camera.
Specifically, the number of inlier point matches between the target feature information and selected feature information may be calculated, where the selected feature information is the feature information of any one historical image stored in the loop database. The loop database is used for storing and managing the feature information of all historical images. The construction and retrieval of the loop database may use any common method, including DBoW, a CNN or a hash table, and the specific process is not repeated in the embodiment of the present application.
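A loop database of this kind could, for example, be a plain per-camera hash table of historical feature sets; the sketch below is only one possible realization (DBoW or a CNN-based index would be organized differently), and the class and method names are assumptions for illustration.

    from collections import defaultdict

    class LoopDatabase:
        """Stores the feature information of historical images, keyed by camera."""

        def __init__(self):
            self._frames = defaultdict(list)   # camera_id -> [(keypoints, descriptors), ...]

        def add(self, camera_id, keypoints, descriptors):
            self._frames[camera_id].append((keypoints, descriptors))

        def candidates(self, camera_id):
            # All historical feature sets stored for the given camera.
            return self._frames[camera_id]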
According to the corner feature set and corresponding descriptor set in the target feature information and the corner feature set and corresponding descriptor set in the selected feature information, the number of inlier point matches between the two can be calculated. Any common method, including RANSAC, may be used for computing the inliers, and the specific process is not repeated in the embodiment of the present application.
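Assuming binary descriptors such as ORB and an OpenCV-style RANSAC on the epipolar geometry (again an illustrative choice, not the prescribed one), the inlier-match count between two feature sets could be computed as:

    import cv2
    import numpy as np

    def count_inlier_matches(kp_a, desc_a, kp_b, desc_b):
        # Brute-force Hamming matching followed by RANSAC on the fundamental matrix.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(desc_a, desc_b)
        if len(matches) < 8:                   # too few matches to fit a model
            return 0
        pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
        pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
        _, mask = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC, 3.0, 0.99)
        return 0 if mask is None else int(mask.sum())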
If the number of inlier point matches between the target feature information and the selected feature information is greater than a preset number threshold, it is determined that a loop is detected. If all the historical images in the loop database have been traversed, and all the other cameras that had not been selected have been traversed, and in no case is the number of inlier point matches greater than the number threshold, it is determined that no loop is detected.
Step S4024, determining whether a preset termination condition is satisfied.
The termination condition is that a loop is detected, that the detection duration is greater than a preset duration threshold, or that all cameras have been selected through traversal. The duration threshold may be set according to the number of computing units of the robot and the number of cameras; specifically, it is negatively correlated with the number of computing units and positively correlated with the number of cameras, that is, the more computing units and the fewer cameras there are, the shorter the duration threshold, and conversely, the fewer computing units and the more cameras there are, the longer the duration threshold.
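That relationship could be expressed, for instance, by a heuristic such as the following; the base value and the exact form of the formula are assumptions for illustration, not part of the patent.

    def duration_threshold(base_seconds, num_compute_units, num_cameras):
        # Shorter with more computing units, longer with more cameras.
        return base_seconds * num_cameras / num_compute_units

Under this hypothetical rule, duration_threshold(1.0, 4, 6) would give a threshold of 1.5 seconds.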
If the termination condition is not satisfied, the process returns to step S4022 and the subsequent steps, that is, the next target camera continues to be selected for cross detection, until the termination condition is satisfied; if the termination condition is satisfied, the loop detection process ends.
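Putting steps S4021 to S4024 together, a sketch of the cross loop detection procedure might look as follows; the helper functions and threshold values are the illustrative ones introduced above and are assumptions rather than the definitive implementation.

    import time

    def cross_loop_detection(images_by_camera, database, extract, count_inliers,
                             inlier_threshold=30, max_duration_s=2.0):
        # images_by_camera: camera_id -> current image
        # database:         object offering candidates(camera_id), e.g. LoopDatabase
        # extract:          image -> (keypoints, descriptors)
        # count_inliers:    (kp_a, desc_a, kp_b, desc_b) -> int
        features = {cid: extract(img) for cid, img in images_by_camera.items()}
        unselected = set(features)
        start = time.monotonic()

        while unselected:                                       # S4024: all cameras traversed?
            # S4022: pick the unselected camera with the most feature points
            target = max(unselected, key=lambda cid: len(features[cid][0]))
            unselected.remove(target)
            kp_t, desc_t = features[target]

            # S4023: cross detection against the loop databases of the other
            # unselected cameras, in descending order of feature-point count
            others = sorted(unselected, key=lambda cid: len(features[cid][0]), reverse=True)
            for other in others:
                for kp_h, desc_h in database.candidates(other):
                    if count_inliers(kp_t, desc_t, kp_h, desc_h) > inlier_threshold:
                        return True                             # loop detected

            if time.monotonic() - start > max_duration_s:       # S4024: time limit reached
                break

        return False                                            # no loop detected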
In summary, in the embodiment of the application, more than two cameras are arranged on the robot and synchronously acquire images in different detection directions; loop detection is first performed according to the image acquired by the camera with the largest viewing angle to obtain an initial detection result, and if the initial detection result is a detection failure, cross loop detection is performed among the cameras according to the images acquired by the cameras to obtain a cross detection result. With the embodiment of the application, cross detection in multiple detection directions can be performed when the initial loop detection fails, breaking the limitation of loop detection on the path direction in the prior art and avoiding excessive repeated paths in the same direction, thereby greatly improving the mapping efficiency.
It should be understood that the sequence numbers of the steps in the foregoing embodiment do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiment of the present application.
Fig. 6 shows a block diagram of an embodiment of a loop detection device according to an embodiment of the present application, which corresponds to a loop detection method described in the foregoing embodiment, and the loop detection device may be applied to the foregoing robot. As shown, the loop detection device may include:
the initial loop detection module 601 is configured to perform loop detection according to an image acquired by a first camera to obtain an initial detection result, where the first camera is the camera with the largest viewing angle among the cameras;
and the cross loop detection module 602 is configured to perform cross loop detection among the cameras according to the images acquired by the cameras if the initial detection result is a detection failure, so as to obtain a cross detection result.
Further, the cross loop detection module may include:
the characteristic point quantity counting sub-module is used for counting the quantity of characteristic points in the images acquired by each camera respectively;
a target camera selecting sub-module, configured to select, from among the cameras that have not been selected, a camera with the largest number of feature points in the image as a target camera;
the cross loop detection sub-module is used for sequentially carrying out cross loop detection on the target camera and other cameras which are not selected; if loop is not detected, continuing to execute the functions of the target camera selecting sub-module and the cross loop detecting sub-module until a preset termination condition is met.
Further, the termination condition is that a loop is detected, the detection duration is greater than a preset duration threshold, or each camera has been selected through traversal.
Further, the cross loop detection sub-module may include:
the target feature information calculation unit is used for calculating target feature information of the image acquired by the target camera;
and the loop detection unit is used for carrying out loop detection in a loop database corresponding to the selected camera according to the target characteristic information, wherein the selected camera is any one camera which is not selected yet.
Further, the target feature information calculating unit may include:
the angular point feature extraction subunit is used for extracting an angular point feature set of the image acquired by the target camera;
a descriptor calculating subunit, configured to calculate a descriptor set corresponding to the corner feature set;
and the target feature information determining subunit is used for taking the corner feature set and the descriptor set as the target feature information.
Further, the loop detection unit may include:
an inlier point match counting subunit, configured to calculate the number of inlier point matches between the target feature information and selected feature information, where the selected feature information is the feature information of any historical image stored in the loop database;
and the loop determining subunit is used for determining that a loop is detected if the number of inlier point matches between the target feature information and the selected feature information is greater than a preset number threshold.
Further, the loop detection device may further include:
the synchronization module is used for carrying out synchronization processing on each camera to obtain synchronous cameras;
and the initialization module is used for carrying out joint initialization processing on each synchronized camera to obtain each initialized camera.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described apparatus, modules and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for a part that is not described or detailed in a certain embodiment, reference may be made to the related descriptions of other embodiments.
Fig. 7 shows a schematic block diagram of a robot provided in an embodiment of the present application, and for convenience of explanation, only parts related to the embodiment of the present application are shown.
As shown in fig. 7, the robot 7 of this embodiment includes: a processor 70, a memory 71, and a computer program 72 stored in the memory 71 and executable on the processor 70. The steps of the embodiments of the loop detection method described above, such as steps S401 to S402 shown in fig. 4, are implemented when the processor 70 executes the computer program 72. Alternatively, the processor 70 may perform the functions of the modules/units of the apparatus embodiments described above, such as the functions of the modules 601-602 shown in fig. 6, when executing the computer program 72.
By way of example, the computer program 72 may be partitioned into one or more modules/units that are stored in the memory 71 and executed by the processor 70 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing a specific function for describing the execution of the computer program 72 in the robot 7.
It will be appreciated by those skilled in the art that fig. 7 is merely an example of a robot 7 and is not limiting of the robot 7, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., the robot 7 may also include input and output devices, network access devices, buses, etc.
The processor 70 may be a central processing unit (Central Processing Unit, CPU) or may be another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a Field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 71 may be an internal storage unit of the robot 7, such as a hard disk or a memory of the robot 7. The memory 71 may be an external storage device of the robot 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the robot 7. Further, the memory 71 may also include both an internal memory unit and an external memory device of the robot 7. The memory 71 is used for storing the computer program as well as other programs and data required by the robot 7. The memory 71 may also be used for temporarily storing data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for a part that is not described or detailed in a certain embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/robot and method may be implemented in other ways. For example, the apparatus/robot embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor, may implement the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content contained in the computer readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable storage medium does not include electrical carrier signals and telecommunications signals.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (9)

1. A loop detection method applied to a robot, characterized in that the robot comprises more than two cameras, each camera synchronously acquiring images in a different detection direction, and the method comprises the following steps:
performing loop detection according to an image acquired by a first camera to obtain an initial detection result, wherein the first camera is the camera with the largest viewing angle among the cameras;
if the initial detection result is a detection failure, respectively counting the number of feature points in the images acquired by the cameras;
selecting, from among the cameras that have not been selected, the camera with the largest number of feature points in its image as a target camera;
sequentially performing cross loop detection between the target camera and the other cameras that have not been selected;
and if no loop is detected, returning to the step of selecting, from among the cameras that have not been selected, the camera with the largest number of feature points in its image as the target camera, and the subsequent steps, until a preset termination condition is met.
2. The loop detection method according to claim 1, wherein the termination condition is that a loop is detected, a detection duration is greater than a preset duration threshold, or each camera has been selected by traversal.
3. The loop detection method according to claim 1, wherein the step of sequentially performing cross loop detection between the target camera and the other cameras that have not been selected comprises:
calculating target characteristic information of an image acquired by the target camera;
and carrying out loop detection in a loop database corresponding to the selected camera according to the target characteristic information, wherein the selected camera is any camera which is not selected yet.
4. The loop detection method according to claim 3, wherein the calculating the target feature information of the image acquired by the target camera comprises:
extracting a corner feature set of an image acquired by the target camera;
calculating a descriptor set corresponding to the corner feature set;
and taking the corner feature set and the descriptor set as the target feature information.
5. The loop detection method according to claim 3, wherein the loop detection in the loop database corresponding to the selected camera according to the target feature information includes:
calculating the number of inlier point matches between the target feature information and selected feature information, wherein the selected feature information is the feature information of any historical image stored in the loop database;
and if the number of inlier point matches between the target feature information and the selected feature information is greater than a preset number threshold, determining that a loop is detected.
6. The loop detection method according to any one of claims 1 to 5, characterized by further comprising, before loop detection from the image acquired by the first camera:
synchronizing each camera to obtain synchronized cameras;
and carrying out joint initialization processing on each synchronized camera to obtain initialized cameras.
7. A loop detection device applied to a robot, characterized in that the robot comprises more than two cameras, each camera synchronously acquiring images in a different detection direction, and the device comprises:
the initial loop detection module is used for performing loop detection according to the image acquired by the first camera to obtain an initial detection result, wherein the first camera is the camera with the largest viewing angle among the cameras;
and the cross loop detection module is used for respectively counting the number of feature points in the images acquired by the cameras if the initial detection result is a detection failure; selecting, from among the cameras that have not been selected, the camera with the largest number of feature points in its image as a target camera; sequentially performing cross loop detection between the target camera and the other cameras that have not been selected; and if no loop is detected, returning to the step of selecting, from among the cameras that have not been selected, the camera with the largest number of feature points in its image as the target camera, and the subsequent steps, until a preset termination condition is met.
8. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the loop detection method according to any one of claims 1 to 6.
9. A robot comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the loop detection method according to any one of claims 1 to 6 when executing the computer program.
CN202011192052.5A 2020-10-30 2020-10-30 Loop detection method and device, computer readable storage medium and robot Active CN112348865B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011192052.5A CN112348865B (en) 2020-10-30 2020-10-30 Loop detection method and device, computer readable storage medium and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011192052.5A CN112348865B (en) 2020-10-30 2020-10-30 Loop detection method and device, computer readable storage medium and robot

Publications (2)

Publication Number Publication Date
CN112348865A CN112348865A (en) 2021-02-09
CN112348865B (en) 2023-12-01

Family

ID=74356767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011192052.5A Active CN112348865B (en) 2020-10-30 2020-10-30 Loop detection method and device, computer readable storage medium and robot

Country Status (1)

Country Link
CN (1) CN112348865B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109711365A (en) * 2018-12-29 2019-05-03 佛山科学技术学院 A kind of vision SLAM winding detection method and device merging semantic information
CN109785387A (en) * 2018-12-17 2019-05-21 中国科学院深圳先进技术研究院 Winding detection method, device and the robot of robot
CN111832484A (en) * 2020-07-14 2020-10-27 星际(重庆)智能装备技术研究院有限公司 Loop detection method based on convolution perception hash algorithm

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109785387A (en) * 2018-12-17 2019-05-21 中国科学院深圳先进技术研究院 Winding detection method, device and the robot of robot
CN109711365A (en) * 2018-12-29 2019-05-03 佛山科学技术学院 A kind of vision SLAM winding detection method and device merging semantic information
CN111832484A (en) * 2020-07-14 2020-10-27 星际(重庆)智能装备技术研究院有限公司 Loop detection method based on convolution perception hash algorithm

Also Published As

Publication number Publication date
CN112348865A (en) 2021-02-09

Similar Documents

Publication Publication Date Title
US10984554B2 (en) Monocular vision tracking method, apparatus and non-volatile computer-readable storage medium
CN110246147B (en) Visual inertial odometer method, visual inertial odometer device and mobile equipment
CN111354042B (en) Feature extraction method and device of robot visual image, robot and medium
CN112797897B (en) Method and device for measuring geometric parameters of object and terminal
KR102367361B1 (en) Location measurement and simultaneous mapping method and device
CN109752003B (en) Robot vision inertia point-line characteristic positioning method and device
WO2017077925A1 (en) Method and system for estimating three-dimensional pose of sensor
WO2015134795A2 (en) Method and system for 3d capture based on structure from motion with pose detection tool
US20210183100A1 (en) Data processing method and apparatus
CN113888639B (en) Visual odometer positioning method and system based on event camera and depth camera
CN111127524A (en) Method, system and device for tracking trajectory and reconstructing three-dimensional image
KR20210084622A (en) Time synchronization processing methods, electronic devices and storage media
CN108154533A (en) A kind of position and attitude determines method, apparatus and electronic equipment
CN112880687A (en) Indoor positioning method, device, equipment and computer readable storage medium
WO2024045632A1 (en) Binocular vision and imu-based underwater scene three-dimensional reconstruction method, and device
CN111609868A (en) Visual inertial odometer method based on improved optical flow method
CN113012224B (en) Positioning initialization method and related device, equipment and storage medium
CN112598706B (en) Multi-camera moving target three-dimensional track reconstruction method without accurate time-space synchronization
CN111882655B (en) Method, device, system, computer equipment and storage medium for three-dimensional reconstruction
CN110825079A (en) Map construction method and device
CN113450334B (en) Overwater target detection method, electronic equipment and storage medium
CN112348865B (en) Loop detection method and device, computer readable storage medium and robot
CN114092564B (en) External parameter calibration method, system, terminal and medium for non-overlapping vision multi-camera system
Kutulakos Affine surface reconstruction by purposive viewpoint control
CN114234959B (en) Robot, VSLAM initialization method, device and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant