CN111665826A - Depth map acquisition method based on laser radar and monocular camera and sweeping robot - Google Patents


Publication number
CN111665826A
Authority
CN
China
Prior art keywords
sweeping robot
acquired
key frame
determining
current position
Prior art date
Legal status
Pending
Application number
CN201910168751.7A
Other languages
Chinese (zh)
Inventor
潘俊威
谢晓佳
栾成志
刘坤
Current Assignee
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd filed Critical Beijing Qihoo Technology Co Ltd
Priority to CN201910168751.7A
Publication of CN111665826A

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 Interpretation of pictures
    • G01C11/30 Interpretation of pictures by triangulation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G01S17/42 Simultaneous measurement of distance and other co-ordinates

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a depth map acquisition method based on a laser radar and a monocular camera, and a sweeping robot, belonging to the technical field of robots. The method determines the depth information of the environment space where the sweeping robot is located based on a laser radar and a monocular camera, avoiding the use of a structured light camera, which is strongly affected by illumination conditions, easily interfered with, and expensive. The scheme is therefore only weakly affected by illumination conditions, improves the accuracy with which the sweeping robot perceives its environment, and at the same time reduces the cost of the sweeping robot.

Description

Depth map acquisition method based on laser radar and monocular camera and sweeping robot
Technical Field
The application relates to the technical field of robots, in particular to a depth map acquisition method based on a laser radar and a monocular camera and a sweeping robot.
Background
As an intelligent appliance that can automatically clean an area to be cleaned, the floor sweeping robot can sweep the floor in place of a person, reducing the burden of housework, and has therefore become more and more widely accepted. In order to improve the autonomous operation of the sweeping robot, how to give the sweeping robot a certain environmental perception capability has become an important problem.
At present, one scheme for realizing environmental perception of a sweeping robot is to configure the robot with a structured light camera, through which the depth information of obstacles in the environment space where the sweeping robot is located can be acquired directly, thereby realizing the sweeping robot's perception of the environment space. However, a structured light camera performs poorly in strong light environments, is easily interfered with, and is expensive. Therefore, how to provide a technical scheme that is only weakly affected by illumination conditions and is low in cost, so as to realize the sweeping robot's perception of its environment, has become an urgent problem to be solved.
Disclosure of Invention
The application provides a depth map acquisition method based on a laser radar and a monocular camera, and a sweeping robot, which are used to improve the accuracy of the sweeping robot's environmental perception and to reduce its cost. The technical scheme adopted by the application is as follows:
in a first aspect, the present application provides a depth map obtaining method based on a laser radar and a monocular camera, including:
performing key frame extraction on multiple frames of images acquired by a monocular camera to obtain two key frame images, wherein the two key frame images comprise a current position image of the sweeping robot at the current position acquired by the monocular camera;
determining pose information respectively corresponding to the sweeping robot when two frames of key frame images are acquired through a monocular camera based on laser point cloud data acquired through a laser radar;
determining depth values corresponding to all pixel points in the current position image based on the determined pose information respectively corresponding to the sweeping robot when the two frames of key frame images are obtained;
and determining a depth map of the sweeping robot at the current position based on the determined depth values corresponding to the pixel points in the current position image.
Optionally, the determining, based on the determined pose information respectively corresponding to the sweeping robot when the two frames of key frame images are obtained, a depth value corresponding to each pixel point in the current position image includes:
and determining depth values corresponding to all pixel points in the current position image by a triangulation method based on the determined pose information respectively corresponding to the sweeping robot when the two frames of key frame images are shot.
Optionally, the extracting key frames based on the multi-frame image acquired by the monocular camera to obtain two key frame images includes:
determining that the obtained current position image of the sweeping robot at the current position is one key frame image of the two key frame images;
and determining that a certain candidate image, among the multiple frames of images acquired by the monocular camera, which is in a predetermined relationship with the current position image is the other key frame image of the two key frame images, wherein the predetermined relationship comprises that the rotation angle and/or position change of the sweeping robot when the current position image is acquired, compared with when the certain candidate image is acquired, meets a predetermined threshold condition.
Optionally, the determining, based on the laser point cloud data obtained by the laser radar, pose information respectively corresponding to the sweeping robot when the two frames of key frame images are obtained by the monocular camera includes:
determining pose information of the sweeping robot at each position through a corresponding point cloud matching algorithm based on laser point cloud data acquired through a laser radar;
and determining pose information respectively corresponding to the sweeping robot when the two frames of key frame images are acquired through the monocular camera based on the determined pose information of the sweeping robot at each position through a time mapping relation.
Optionally, the method further comprises:
determining travel information of the sweeping robot based on the obstacle distance information determined through the depth map, wherein the travel information comprises direction information and/or speed information for controlling the sweeping robot to travel.
Optionally, the method further comprises:
and constructing a two-dimensional map of the sweeping robot in an environment space based on the laser point cloud data acquired by the laser radar.
Optionally, the method further comprises:
planning a working path of the sweeping robot based on the constructed two-dimensional map of the sweeping robot in the environment space, wherein the working path comprises a route of the sweeping robot reaching a cleaning target area and/or a route of the sweeping robot cleaning the cleaning target area.
In a second aspect, a sweeping robot is provided, which comprises a laser radar, a monocular camera, and a determination device;
the laser radar is used for acquiring laser point cloud data of the sweeping robot at a corresponding position in an environment space;
the monocular camera is used for acquiring images of the corresponding position of the sweeping robot in the environment space;
the determination device comprises:
the device comprises an extraction module, a control module and a display module, wherein the extraction module is used for extracting key frames based on a multi-frame image acquired by a monocular camera to obtain two key frame images, and the two key frame images comprise a current position image of the sweeping robot at a current position acquired by the monocular camera;
the first determining module is used for determining pose information respectively corresponding to the sweeping robot when the two frames of key frame images are obtained through the extraction module and acquired through a monocular camera based on laser point cloud data acquired through a laser radar;
a second determining module, configured to determine, based on the pose information determined by the first determining module and corresponding to the sweeping robot when the two frames of key frame images are acquired, a depth value corresponding to each pixel point in the current position image;
and the third determining module is used for determining a depth map of the sweeping robot at the current position based on the depth values corresponding to the pixel points in the current position image determined by the second determining module.
Optionally, the second determining module is specifically configured to determine, based on the determined pose information respectively corresponding to the sweeping robot when the two frames of keyframe images are captured, a depth value corresponding to each pixel point in the current position image by using a triangulation method.
Optionally, the extraction module comprises:
the first determining unit is used for determining that the acquired current position image of the sweeping robot at the current position is one key frame image of the two key frame images;
and the second determining unit is used for determining that a certain candidate image which accords with a preset relationship with the current position image in the multi-frame images acquired by the monocular camera is another key frame image in the two key frame images, wherein the preset relationship comprises that the rotation angle and/or the position change of the sweeping robot accords with a preset threshold condition when the current position image is acquired and compared with when the certain candidate image is acquired.
Optionally, the first determining module includes:
the third determining unit is used for determining pose information of the sweeping robot at each position through a corresponding point cloud matching algorithm based on laser point cloud data acquired through a laser radar;
and the fourth determining unit is used for determining, through a time mapping relation, the pose information respectively corresponding to the sweeping robot when the two key frame images are acquired through the monocular camera, based on the pose information of the sweeping robot at each position determined by the third determining unit.
Further, the determining device further includes:
and the fourth determination module is used for determining the traveling information of the sweeping robot based on the obstacle distance information determined through the depth map, wherein the traveling information comprises direction information and/or speed information for controlling the sweeping robot to travel.
Further, the determining device further includes:
and the building module is used for building a two-dimensional map of the sweeping robot in an environment space based on the laser point cloud data acquired by the laser radar.
Further, the determining device further includes:
and the planning module is used for planning a working path of the sweeping robot based on the constructed two-dimensional map of the sweeping robot in the environment space, wherein the working path comprises a route of the sweeping robot reaching the cleaning target area and/or a route of the sweeping robot cleaning the cleaning target area.
In a third aspect, the present application provides an electronic device comprising: a processor and a memory;
a memory for storing operating instructions;
a processor, configured to execute, by calling the operating instructions, the depth map acquisition method based on a laser radar and a monocular camera shown in any implementation manner of the first aspect of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the lidar and monocular camera-based depth map acquisition method shown in any of the embodiments of the first aspect of the present application.
The application provides a depth map acquisition method based on a laser radar and a monocular camera, and a sweeping robot. Compared with acquiring depth information through a configured structured light camera in the prior art, the method extracts two key frame images from multiple frames of images acquired through the monocular camera, the two key frame images including a current position image of the sweeping robot at the current position acquired through the monocular camera; determines, based on laser point cloud data acquired through the laser radar, the pose information respectively corresponding to the sweeping robot when the two key frame images were acquired through the monocular camera; determines the depth values corresponding to the pixel points in the current position image based on that pose information; and determines a depth map of the sweeping robot at the current position based on those depth values. By determining the depth information of the environment space based on the laser radar and the monocular camera, the application avoids the use of a structured light camera, which is strongly affected by illumination conditions, easily interfered with, and expensive; the scheme is thus only weakly affected by illumination conditions, improves the accuracy of the sweeping robot's environmental perception, and at the same time reduces the cost of the sweeping robot. In addition, determining the pose information of the sweeping robot from the laser point cloud data acquired by the laser radar requires relatively little computation compared with determining it through feature matching on the images acquired by the monocular camera, so the pose information can be determined with a small amount of computation, which in turn reduces the computation needed to determine the depth map of the environment space.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic flowchart of a depth map acquisition method based on a laser radar and a monocular camera according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a sweeping robot provided in the embodiment of the present application;
fig. 3 is a schematic structural view of another sweeping robot provided in the embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, where the same or similar reference numerals refer throughout to the same or similar elements or to elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present application, and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
An embodiment of the present application provides a depth map acquisition method based on a laser radar and a monocular camera, as shown in fig. 1, the method includes:
step S101, extracting key frames based on a multi-frame image acquired by a monocular camera to obtain two key frame images, wherein the two key frame images comprise a current position image of the sweeping robot at the current position acquired by the monocular camera;
specifically, the sweeping robot is provided with a corresponding monocular camera, wherein the monocular camera can be a common camera, a plurality of frames of images of the sweeping robot in an environmental space can be acquired through the monocular camera, and two frames of the images are extracted from the plurality of frames of images through a corresponding key frame extraction method to serve as key frame images; the two frames of key frame images comprise a current position image of the sweeping robot at the current position, which is acquired by a monocular camera; the two frames of key frame images can also be obtained by acquiring corresponding videos of the sweeping robot in an environmental space through a monocular camera and performing video frame extraction on the videos through a corresponding key frame extraction method.
Step S102, determining pose information respectively corresponding to the sweeping robot when the two frames of key frame images are acquired through a monocular camera based on laser point cloud data acquired through a laser radar;
specifically, the sweeping robot is provided with a corresponding laser radar, laser point cloud data of the sweeping robot in an environment room can be obtained through the laser radar, the obtained laser point cloud data can be processed through a corresponding data processing method, and pose information respectively corresponding to the sweeping robot when the two frames of key frame images are obtained through a monocular camera is determined, wherein the pose information comprises position information and pose information of the sweeping robot; the laser radar can be a mechanical laser radar (such as a single-line laser radar and a multi-line laser radar) or a solid-state laser radar, wherein the mechanical laser radar is structurally characterized by having a mechanical rotating mechanism so as to rotate, and the solid-state laser radar is structurally characterized by having no rotating part so as to occupy relatively small space; the implementation manner of the solid-state laser radar can be any one of the following: based on a phased array approach; based on a Flash mode; based on a micro-electro-mechanical system approach.
Step S103, determining depth values corresponding to all pixel points in the current position image based on the determined pose information respectively corresponding to the sweeping robot when the two frames of key frame images are obtained;
for the embodiment of the application, the depth information of the scene restored from the two-dimensional image is one of the core problems in the field of computer vision, the image acquired by the monocular camera configured by the sweeping robot is the two-dimensional image, the depth information of the environment space where the sweeping robot is located is lost, specifically, the depth values corresponding to the pixel points in the current position image can be determined based on the pose information respectively corresponding to the sweeping robot when the two frames of key frame images are acquired, and the depth values can be the distances from the sweeping robot to the obstacle.
And step S104, determining a depth map of the sweeping robot at the current position based on the determined depth values corresponding to the pixel points in the current position image.
Specifically, the determined depth values corresponding to the pixel points in the current position image are processed accordingly to determine a depth map of the sweeping robot at the current position. The depth map may take the form of point cloud data, so that the depth information of the scene is recovered from the two-dimensional image.
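To make the relationship between per-pixel depth values and point cloud data concrete, the following is a minimal sketch of how a depth image could be back-projected into a point cloud under a pinhole camera model; the function name and the intrinsic parameters fx, fy, cx, cy are illustrative assumptions, not values given in the application.

```python
import numpy as np

def depth_map_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert a per-pixel depth image (meters) into an Nx3 point cloud.

    Assumes a pinhole camera model; fx, fy, cx, cy are camera intrinsics
    that would come from calibrating the monocular camera.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx            # back-project along the image x axis
    y = (v - cy) * z / fy            # back-project along the image y axis
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no valid depth

# Usage: a toy 4x4 depth image with a constant depth of 2 m
depth = np.full((4, 4), 2.0)
cloud = depth_map_to_point_cloud(depth, fx=525.0, fy=525.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (16, 3)
```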
The embodiment of the application provides a depth map acquisition method based on a laser radar and a monocular camera. Compared with acquiring depth information through a configured structured light camera in the prior art, the method extracts two key frame images from multiple frames of images acquired through the monocular camera, the two key frame images including the current position image of the sweeping robot at the current position; determines, based on laser point cloud data acquired through the laser radar, the pose information respectively corresponding to the sweeping robot when the two key frame images were acquired through the monocular camera; determines the depth values corresponding to the pixel points in the current position image based on that pose information; and determines the depth map of the sweeping robot at the current position based on those depth values. Determining the depth information of the environment space based on the laser radar and the monocular camera avoids the use of a structured light camera, which is strongly affected by illumination conditions, easily interfered with, and expensive, so the scheme is only weakly affected by illumination conditions, improves the accuracy of the sweeping robot's environmental perception, and reduces its cost. In addition, determining the pose information of the sweeping robot from the laser point cloud data requires relatively little computation compared with determining it through feature matching on the images acquired by the monocular camera, which in turn reduces the computation needed to determine the depth map of the environment space.
The embodiment of the present application provides a possible implementation manner, and specifically, step S103 includes:
and step S1031 (not shown in the figure), determining depth values corresponding to the pixel points in the current position image by a triangulation method based on the determined pose information respectively corresponding to the sweeping robot when the two frames of key frame images are shot.
In particular, triangulation simply means observing the same three-dimensional point P(x, y, z) from different positions: knowing the two-dimensional projections X1(x1, y1) and X2(x2, y2) of the three-dimensional point as observed at the different positions, the depth information z of the three-dimensional point is recovered by utilizing the triangular relation.
Specifically, image features can be extracted from the two key frame images through corresponding feature extraction methods, and a number of identical image features can then be determined through corresponding image feature matching methods. Based on the determined pose information respectively corresponding to the sweeping robot when the two key frame images were captured, the positions of each identical image feature in the two key frame images are determined, and the depth information corresponding to each identical image feature is then determined through the corresponding triangular relation; the image features may be corner detection features. Specifically, according to the depth information of the identical image features in the two key frame images, the depth values corresponding to the pixel points in the current position image can be determined through a matching algorithm such as mean absolute differences (MAD), sum of squared differences (SSD), sum of absolute differences (SAD), normalized cross-correlation (NCC), or sum of absolute transformed differences (SATD).
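As an illustration of the triangulation step, the following is a minimal sketch of linear (DLT) two-view triangulation; the intrinsic matrix K, the poses, and the function name are hypothetical, with the poses standing in for the lidar-derived robot poses at the two key frames.

```python
import numpy as np

def triangulate_point(K, pose1, pose2, px1, px2):
    """Recover a 3D point from its pixel observations in two key frames.

    K is the 3x3 camera intrinsic matrix; pose1/pose2 are 3x4 [R|t]
    matrices mapping world coordinates into each camera frame. Uses the
    standard linear (DLT) triangulation method.
    """
    P1, P2 = K @ pose1, K @ pose2
    A = np.vstack([
        px1[0] * P1[2] - P1[0],
        px1[1] * P1[2] - P1[1],
        px2[0] * P2[2] - P2[0],
        px2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)      # least-squares solution of A X = 0
    X = vt[-1]
    return X[:3] / X[3]              # homogeneous -> Euclidean; z is the depth

# Usage: project a known point from two poses, then recover it
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
pose1 = np.hstack([np.eye(3), np.zeros((3, 1))])                 # first key frame
pose2 = np.hstack([np.eye(3), np.array([[-0.1], [0.], [0.]])])   # moved 10 cm
P = np.append(np.array([0.5, 0.2, 2.0]), 1.0)                    # ground truth
px1 = (K @ pose1 @ P)[:2] / (K @ pose1 @ P)[2]
px2 = (K @ pose2 @ P)[:2] / (K @ pose2 @ P)[2]
print(triangulate_point(K, pose1, pose2, px1, px2))  # ~ [0.5 0.2 2.0]
```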
For the embodiment of the application, based on the pose information respectively corresponding to the sweeping robot when two frames of key frame images are shot, the depth value corresponding to each pixel point in the current position image is determined through a triangulation method, and the problem of determining the depth value corresponding to each pixel point in the current position is solved.
The embodiment of the present application provides a possible implementation manner, and step S101 includes:
in step S1011 (not shown), it is determined that the acquired current position image of the sweeping robot at the current position is one of the two key frame images.
Step S1012 (not shown in the figure), determining that a candidate image in a plurality of images acquired by a monocular camera, which matches a predetermined relationship with the current position image, is another key frame image in the two key frame images, where the predetermined relationship includes that a rotation angle and/or a position change of the sweeping robot matches a predetermined threshold condition when the current position image is acquired compared with when the candidate image is acquired.
Specifically, the current position image acquired after the sweeping robot moves to the current position is determined to be one key frame image of the two key frame images; then, based on a predetermined key frame selection condition, a certain candidate image is determined from the multiple frames of images acquired by the monocular camera to be the other key frame image of the two. The predetermined relationship may be that, when the current position image was acquired, the rotation angle of the sweeping robot has changed by a certain threshold compared with when the candidate image was acquired (for example, the rotation angle has changed by 5 degrees), or that the position change of the sweeping robot meets a predetermined threshold condition, for example, the sweeping robot has moved 10 cm.
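A minimal sketch of this key frame selection logic follows, assuming the robot pose (x, y, theta) recorded when each image was acquired is available; the 5-degree and 10 cm thresholds are taken from the examples above, while the function and variable names are illustrative.

```python
import math

# Thresholds from the examples in the text: 5 degrees of rotation
# or 10 cm of displacement relative to the current position image.
ROT_THRESHOLD_DEG = 5.0
DIST_THRESHOLD_M = 0.10

def meets_predetermined_relation(current_pose, candidate_pose):
    """Check the rotation-angle and/or position-change condition between
    the current position image and a candidate image, using the robot
    poses (x, y, theta) recorded when each image was acquired."""
    dx = current_pose[0] - candidate_pose[0]
    dy = current_pose[1] - candidate_pose[1]
    dist = math.hypot(dx, dy)
    dtheta = abs(math.degrees(current_pose[2] - candidate_pose[2]))
    return dtheta >= ROT_THRESHOLD_DEG or dist >= DIST_THRESHOLD_M

def pick_key_frames(current_frame, candidates):
    """current_frame and candidates are (image, pose) pairs; returns the
    current position image plus the first candidate meeting the relation,
    preferring the most recent one."""
    for cand in reversed(candidates):
        if meets_predetermined_relation(current_frame[1], cand[1]):
            return current_frame, cand
    return None  # no suitable second key frame yet
```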
For the embodiment of the application, the problem of determining the two frames of key frame images is solved, and a foundation is provided for subsequently determining the depth map of the current position.
The embodiment of the present application provides a possible implementation manner, and specifically, step S102 includes:
step S1021 (not shown in the figure), determining pose information of the sweeping robot at each position by a corresponding point cloud matching algorithm based on laser point cloud data acquired by the laser radar;
the point cloud matching is a process of obtaining perfect coordinate transformation through calculation, and uniformly integrating point cloud data under different visual angles to a specified coordinate system through rigid transformation such as rotation and translation. In other words, two point clouds subjected to registration can be completely overlapped with each other through position transformation such as rotation and translation, so that the two point clouds belong to rigid transformation, namely the shape and the size are completely the same, and only the coordinate positions are different, and point cloud registration is to find the coordinate position transformation relation between the two point clouds.
Specifically, the acquired laser point cloud data can be matched through a corresponding point cloud matching algorithm to determine the pose information of the sweeping robot at each position. The corresponding point cloud matching algorithm may be an iterative closest point algorithm or a probability-model-based correlation matching algorithm. Specifically, the process of determining the pose of the sweeping robot at the current position based on the Iterative Closest Point (ICP) algorithm may be: 1. extract features from the two acquired adjacent frames of laser point cloud data respectively; 2. pair the associated feature points of the two adjacent frames of laser point cloud data; 3. iteratively solve for the overall matching parameters of the two adjacent frames of laser point cloud data, namely a rotation matrix R and a translation matrix T; 4. calculate the motion increment of the sweeping robot over the adjacent sampling periods and determine the pose of the sweeping robot at the current position. A matching threshold may be set to filter out invalid associated features so that the transformation parameters (R, T) are found accurately.
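The sketch below implements the four steps above in a simplified 2D form (brute-force nearest-neighbor association instead of feature pairing, and an SVD-based closed-form solve for R and t in each iteration); it is illustrative only, and the threshold value is an assumption.

```python
import numpy as np

def icp_2d(src, dst, iters=20, match_thresh=0.5):
    """Simplified 2D iterative closest point between two adjacent scans.

    src, dst: Nx2 point arrays. Returns the rotation matrix R and the
    translation t aligning src to dst, i.e. the robot's motion increment
    between the two sampling periods.
    """
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        moved = src @ R.T + t
        # steps 1-2: associate each point with its nearest neighbor
        # (brute force here; a real implementation would use a k-d tree)
        d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2)
        nn = d.argmin(axis=1)
        keep = d[np.arange(len(src)), nn] < match_thresh  # drop bad pairs
        if keep.sum() < 3:
            break
        p, q = moved[keep], dst[nn[keep]]
        # step 3: closed-form solve for the incremental R, t via SVD
        pc, qc = p - p.mean(0), q - q.mean(0)
        U, _, Vt = np.linalg.svd(pc.T @ qc)
        if np.linalg.det((U @ Vt).T) < 0:    # guard against reflections
            Vt[-1] *= -1
        dR = (U @ Vt).T
        dt = q.mean(0) - p.mean(0) @ dR.T
        R, t = dR @ R, dR @ t + dt           # step 4: accumulate increment
    return R, t

# Usage: two perpendicular "walls" shifted by (0.02, 0.01)
wall_x = np.stack([np.linspace(0.0, 1.0, 15), np.zeros(15)], axis=1)
wall_y = np.stack([np.zeros(15), np.linspace(0.0, 1.0, 15)], axis=1)
src = np.vstack([wall_x, wall_y])
R, t = icp_2d(src, src + np.array([0.02, 0.01]))
print(np.round(t, 3))  # -> [0.02 0.01]
```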
Step S1022 (not shown in the figure), based on the determined pose information of the sweeping robot at each position through a time mapping relationship, determining pose information respectively corresponding to the sweeping robot when the two frames of keyframe images are acquired by the monocular camera.
Specifically, the sweeping robot records corresponding time information when acquiring laser point cloud data through the laser radar, so that the pose information of the sweeping robot at each corresponding moment can be determined; corresponding time information is also recorded when the sweeping robot acquires images through the monocular camera. The pose information respectively corresponding to the sweeping robot when the two key frame images were acquired can then be determined according to the corresponding time mapping relation.
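A minimal sketch of such a time mapping follows, assuming each lidar-derived pose is stored with its timestamp; the nearest-timestamp lookup shown here is one simple choice, and all names are illustrative.

```python
import bisect

def pose_at(image_time, pose_times, poses):
    """Look up the lidar-derived pose whose timestamp is closest to the
    timestamp recorded when a key frame image was acquired.

    pose_times must be sorted ascending; poses[i] is the pose estimated
    from the scan taken at pose_times[i].
    """
    i = bisect.bisect_left(pose_times, image_time)
    if i == 0:
        return poses[0]
    if i == len(pose_times):
        return poses[-1]
    before, after = pose_times[i - 1], pose_times[i]
    # pick whichever neighboring pose is closer in time
    return poses[i] if after - image_time < image_time - before else poses[i - 1]

# Usage: poses logged at t = 0.0, 0.1, 0.2 s; image captured at t = 0.13 s
print(pose_at(0.13, [0.0, 0.1, 0.2], ["pose@0.0", "pose@0.1", "pose@0.2"]))
# -> "pose@0.1"
```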
For the embodiment of the application, the pose information of the sweeping robot at each position is determined based on the laser point cloud data acquired by the laser radar, and the pose information corresponding to the sweeping robot when two frames of key frame images are acquired is determined based on the determined pose information at each position, so that the problem of determining the pose information corresponding to the sweeping robot when two frames of key frame images are acquired is solved.
The embodiment of the present application provides a possible implementation manner, and further, the method further includes:
step S105 (not shown in the figure), determining traveling information of the sweeping robot based on the obstacle distance information determined by the depth map, where the traveling information includes direction information and/or speed information for controlling the sweeping robot to travel.
Specifically, each pixel value of the depth map is the distance information from an obstacle to the sweeping robot, and the travel information of the sweeping robot can be determined based on this distance information. The travel information may be travel speed information of the sweeping robot: for example, when the distance between the sweeping robot and the obstacle is greater than a first threshold, the sweeping robot is controlled to move at a first travel speed; when the distance is less than the first threshold and greater than a second threshold, it is controlled to move at a second travel speed; and when the distance is less than the second threshold, it is controlled to move at a third travel speed, where the first travel speed is greater than the second travel speed and the second travel speed is greater than the third travel speed. The travel information may also be travel direction information: for example, when the distance between the sweeping robot and the obstacle is greater than a fourth threshold, the sweeping robot keeps traveling in the current direction, and when the distance is less than the fourth threshold, the sweeping robot is controlled to change its travel direction, for example, to turn along a certain arc.
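The following sketch expresses this tiered scheme in code; all numeric thresholds and speeds are illustrative assumptions, since the application leaves the concrete values open.

```python
# Illustrative values only; the application does not fix these numbers.
FIRST_THRESHOLD_M = 1.0    # beyond this: full speed
SECOND_THRESHOLD_M = 0.3   # below this: slowest speed
FOURTH_THRESHOLD_M = 0.15  # below this: change travel direction

SPEEDS_M_S = (0.4, 0.2, 0.05)  # first > second > third travel speed

def travel_info(obstacle_dist):
    """Map the nearest-obstacle distance (taken from the depth map) to
    speed and direction commands, mirroring the tiered scheme above."""
    if obstacle_dist > FIRST_THRESHOLD_M:
        speed = SPEEDS_M_S[0]
    elif obstacle_dist > SECOND_THRESHOLD_M:
        speed = SPEEDS_M_S[1]
    else:
        speed = SPEEDS_M_S[2]
    direction = "keep_heading" if obstacle_dist > FOURTH_THRESHOLD_M else "turn"
    return speed, direction

print(travel_info(0.5))   # (0.2, 'keep_heading')
print(travel_info(0.1))   # (0.05, 'turn')
```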
For the embodiment of the application, the traveling information of the sweeping robot is determined based on the obstacle distance information determined through the depth map, the determination problem of the traveling information of the sweeping robot is solved, and meanwhile the determined obstacle distance information also provides a basis for avoiding obstacles for the sweeping robot.
The embodiment of the present application provides a possible implementation manner, and further, the method further includes:
step S106 (not shown in the figure), a two-dimensional map of the sweeping robot in an environmental space is constructed based on the laser point cloud data acquired by the laser radar.
Specifically, the Simultaneous Localization and Mapping (SLAM) problem can be described as follows: placing a robot at an unknown position in an unknown environment, is there a way for the robot to incrementally build a consistent map of the environment while moving, and simultaneously determine its own position within that map? Specifically, based on the laser point cloud data acquired through the laser radar, a two-dimensional map of the environment space where the sweeping robot is located can be constructed through a SLAM algorithm.
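A full SLAM system is beyond a short example, but the sketch below shows the mapping half in miniature: updating a 2D occupancy grid from one laser scan taken at an already-known pose (the localization half is omitted). The grid representation and all names are illustrative assumptions.

```python
import math

def update_grid(grid, res, origin, pose, scan):
    """Mark lidar hits in a 2D occupancy grid, assuming the robot pose is
    already known.

    grid: dict mapping (ix, iy) cells to hit counts; res: cell size in m;
    origin: world coords of cell (0, 0); pose: (x, y, theta) of the robot;
    scan: list of (range_m, bearing_rad) lidar returns.
    """
    x, y, theta = pose
    for r, b in scan:
        # transform the polar return into world coordinates
        wx = x + r * math.cos(theta + b)
        wy = y + r * math.sin(theta + b)
        cell = (int((wx - origin[0]) / res), int((wy - origin[1]) / res))
        grid[cell] = grid.get(cell, 0) + 1
    return grid

# Usage: one return 1 m straight ahead of a robot at the map origin
grid = update_grid({}, res=0.25, origin=(0.0, 0.0),
                   pose=(0.0, 0.0, 0.0), scan=[(1.0, 0.0)])
print(grid)  # {(4, 0): 1}
```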
According to the embodiment of the application, the two-dimensional map of the sweeping robot in the environment space is constructed based on the laser point cloud data acquired through the laser radar, so that the construction problem of the map of the environment space is solved, and a foundation is provided for navigation of the sweeping robot.
The embodiment of the present application provides a possible implementation manner, and further, the method further includes:
step S107 (not shown in the figure), based on the constructed two-dimensional map of the sweeping robot in the environmental space, planning a working path of the sweeping robot, where the working path includes a route of the sweeping robot to the cleaning target area and/or a route of the sweeping robot to clean the cleaning target area.
Specifically, according to a received cleaning instruction, the working path of the sweeping robot can be planned on the constructed two-dimensional map of the environment space, where the working path may include the route by which the sweeping robot reaches the cleaning target area and/or the route along which the sweeping robot cleans the cleaning target area.
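As one illustrative way to plan the route to the cleaning target area on such a two-dimensional grid map, the sketch below uses breadth-first search; a real planner might use A* or a coverage planner instead, and all names here are assumptions.

```python
from collections import deque

def plan_route(grid, start, goal):
    """Breadth-first search for a shortest 4-connected route on a 2D grid
    map (0 = free, 1 = obstacle); returns the list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:
            path = []
            while cur is not None:        # walk back to the start
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cur
                q.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(plan_route(grid, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```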
According to the embodiment of the application, the working path of the sweeping robot is planned based on the constructed two-dimensional map, which solves the navigation problem of the sweeping robot's travel.
The embodiment of the present application further provides a sweeping robot, as shown in fig. 2, the sweeping robot 20 may include: a laser radar 201, a monocular camera 202, and a determination device 203;
the laser radar 201 is used for acquiring laser point cloud data of the sweeping robot at a corresponding position in an environment space;
the monocular camera 202 is used for acquiring images of the corresponding position of the sweeping robot in the environment space;
the determining means 203 comprises:
an extracting module 2031, configured to perform key frame extraction based on the multi-frame image acquired by the monocular camera 202 to obtain two key frame images, where the two key frame images include a current position image of the sweeping robot at a current position acquired by the monocular camera;
a first determining module 2032, configured to determine, based on the laser point cloud data acquired by the laser radar 201, pose information respectively corresponding to the sweeping robot when the two frames of keyframe images acquired by the monocular camera 202 are extracted by the extracting module 2031;
a second determining module 2033, configured to determine, based on the pose information determined by the first determining module 2032 and corresponding to the sweeping robot when the two frames of key frame images are acquired, depth values corresponding to each pixel point in the current position image;
a third determining module 2034, configured to determine a depth map of the sweeping robot at the current location based on the depth values corresponding to the pixel points in the current location image determined by the second determining module 2033.
Compared with acquiring depth information through a configured structured light camera in the prior art, the sweeping robot of the embodiment of the application determines its depth map in the same manner as the method described above: two key frame images, including the current position image of the sweeping robot at the current position, are extracted from the multiple frames of images acquired by the monocular camera; the pose information respectively corresponding to the sweeping robot when the two key frame images were acquired is determined based on the laser point cloud data acquired through the laser radar; the depth values corresponding to the pixel points in the current position image are determined from that pose information; and the depth map at the current position is determined from those depth values. The beneficial effects are the same as those of the method embodiment: the scheme is only weakly affected by illumination conditions, improves the accuracy of environmental perception, reduces cost, and determines the pose information with a small amount of computation compared with image feature matching.
The sweeping robot of this embodiment can execute the depth map obtaining method based on the laser radar and the monocular camera provided in the above embodiments of this application, and the implementation principles thereof are similar, and are not described herein again.
The embodiment of the present application provides another robot for sweeping floor, as shown in fig. 3, a robot for sweeping floor 30 of the present embodiment includes: laser radar 301, monocular camera 302, and determination device 303;
the laser radar 301 is configured to acquire laser point cloud data of the corresponding position of the sweeping robot in an environmental space;
therein, lidar 301 in fig. 3 functions the same as or similar to lidar 201 in fig. 2.
The monocular camera 302 is used for acquiring images of the corresponding position of the sweeping robot in the environmental space;
therein, the monocular camera 302 in fig. 3 functions the same as or similar to the monocular camera 202 in fig. 2.
The determining means 303 comprises:
an extracting module 3031, configured to perform key frame extraction based on the multi-frame image acquired by the monocular camera 302 to obtain two key frame images, where the two key frame images include a current position image of the sweeping robot at a current position acquired by the monocular camera;
the extracting module 3031 in fig. 3 has the same or similar function as the extracting module 2031 in fig. 2.
A first determining module 3032, configured to determine, based on the laser point cloud data obtained by the laser radar 301, pose information respectively corresponding to the sweeping robot when the two frames of keyframe images are obtained by the monocular camera 302 and extracted by the extracting module 3031;
the first determining module 3032 in fig. 3 has the same or similar function as the first determining module 2032 in fig. 2.
A second determining module 3033, configured to determine, based on the pose information determined by the first determining module 3032 and corresponding to the sweeping robot when the two frames of key frame images are acquired, depth values corresponding to each pixel point in the current position image;
the second determining module 3033 in fig. 3 has the same or similar function as the second determining module 2033 in fig. 2.
A third determining module 3034, configured to determine a depth map of the sweeping robot at the current position based on the depth values corresponding to the respective pixel points in the current position image determined by the second determining module 3033.
The third determining module 3034 in fig. 3 has the same or similar function as the third determining module 2034 in fig. 2.
The embodiment of the application provides a possible implementation manner, and specifically, the second determining module is specifically configured to determine, based on pose information respectively corresponding to the sweeping robot when the two frames of key frame images are shot, depth values corresponding to each pixel point in the current position image by a triangulation method.
For the embodiment of the application, based on the pose information respectively corresponding to the sweeping robot when two frames of key frame images are shot, the depth value corresponding to each pixel point in the current position image is determined through a triangulation method, and the problem of determining the depth value corresponding to each pixel point in the current position is solved.
The embodiment of the present application provides a possible implementation manner, and specifically, the extracting module 3031 includes:
a first determining unit 30311, configured to determine that the obtained current position image of the sweeping robot at the current position is one of the two key frame images;
a second determining unit 30312, configured to determine that a candidate image in the multi-frame images acquired by the monocular camera, which matches the predetermined relationship with the current position image, is another key frame image in the two key frame images, where the predetermined relationship includes that a rotation angle and/or a position change of the sweeping robot matches a predetermined threshold condition when the current position image is acquired and when the candidate image is acquired.
For the embodiment of the application, the problem of determining the two frames of key frame images is solved, and a foundation is provided for subsequently determining the depth map of the current position.
The embodiment of the present application provides a possible implementation manner, and specifically, the first determining module 3032 includes:
a third determining unit 30321, configured to determine pose information of the sweeping robot at each position through a corresponding point cloud matching algorithm based on laser point cloud data obtained by the laser radar;
a fourth determining unit 30322, configured to determine, through a time mapping relation, the pose information respectively corresponding to the sweeping robot when the two key frame images were acquired by the monocular camera, based on the pose information of the sweeping robot at each position determined by the third determining unit 30321.
For the embodiment of the application, the pose information of the sweeping robot at each position is determined based on the laser point cloud data acquired by the laser radar, and the pose information corresponding to the sweeping robot when two frames of key frame images are acquired is determined based on the determined pose information at each position, so that the problem of determining the pose information corresponding to the sweeping robot when two frames of key frame images are acquired is solved.
The embodiment of the present application provides a possible implementation manner, and further, the determining device 303 further includes:
a fourth determining module 3035, configured to determine, based on the obstacle distance information determined by the depth map, traveling information of the sweeping robot, where the traveling information includes direction information and/or speed information for controlling the sweeping robot to travel.
For the embodiment of the application, the traveling information of the sweeping robot is determined based on the obstacle distance information determined through the depth map, the determination problem of the traveling information of the sweeping robot is solved, and meanwhile the determined obstacle distance information also provides a basis for avoiding obstacles for the sweeping robot.
The embodiment of the present application provides a possible implementation manner, and further, the determining device 303 further includes:
a building module 3036, configured to build a two-dimensional map of the sweeping robot in an environmental space based on the laser point cloud data obtained by the laser radar.
According to the embodiment of the application, the two-dimensional map of the sweeping robot in the environment space is constructed based on the laser point cloud data acquired through the laser radar, so that the construction problem of the map of the environment space is solved, and a foundation is provided for navigation of the sweeping robot.
The embodiment of the present application provides a possible implementation manner, and further, the determining device 303 further includes:
a planning module 3037, configured to plan a working path of the sweeping robot based on the constructed two-dimensional map of the sweeping robot in the environment space, where the working path includes the route of the sweeping robot reaching the cleaning target area and/or the route of the sweeping robot cleaning the cleaning target area.
According to the embodiment of the application, the working path of the sweeping robot is planned based on the constructed two-dimensional map, which solves the navigation problem of the sweeping robot's travel.
Compared with acquiring depth information through a configured structured light camera in the prior art, the sweeping robot of this embodiment likewise extracts two key frame images from the multiple frames of images acquired by the monocular camera, determines the pose information respectively corresponding to the sweeping robot when the two key frame images were acquired from the laser point cloud data acquired through the laser radar, determines the depth values corresponding to the pixel points in the current position image from that pose information, and thereby determines the depth map of the sweeping robot at the current position. Its beneficial effects are the same as those described above: the scheme is only weakly affected by illumination conditions, improves the accuracy of environmental perception, reduces cost, and determines the pose information with a small amount of computation.
The sweeping robot provided by the embodiment of the application is suitable for the embodiment of the method, and is not described in detail herein.
An embodiment of the present application provides an electronic device. As shown in fig. 4, the electronic device 40 includes a processor 4001 and a memory 4003. The processor 4001 is coupled to the memory 4003, for example via a bus 4002. Optionally, the electronic device 40 may also include a transceiver 4004. Note that in practical applications the transceiver 4004 is not limited to one, and the structure of the electronic device 40 does not constitute a limitation on the embodiment of the present application.
The processor 4001 is applied in the embodiment of the present application to implement the functions of the lidar, the monocular camera, and the determination device shown in fig. 2 or fig. 3. The transceiver 4004 includes a receiver and a transmitter.
Processor 4001 may be a CPU, general purpose processor, DSP, ASIC, FPGA or other programmable logic device, transistor logic device, hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. The processor 4001 may also be a combination that performs a computational function, including, for example, a combination of one or more microprocessors, a combination of a DSP and a microprocessor, or the like.
Bus 4002 may include a path that carries information between the aforementioned components. Bus 4002 may be a PCI bus, EISA bus, or the like. The bus 4002 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 4, but this does not indicate only one bus or one type of bus.
Memory 4003 may be, but is not limited to, a ROM or other type of static storage device that can store static information and instructions, a RAM or other type of dynamic storage device that can store information and instructions, an EEPROM, a CD-ROM or other optical disk storage, an optical disk storage (including compact disk, laser disk, optical disk, digital versatile disk, blu-ray disk, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The memory 4003 is used for storing application codes for executing the scheme of the present application, and the execution is controlled by the processor 4001. The processor 4001 is configured to execute the application code stored in the memory 4003 to implement the functions of the sweeping robot provided by the embodiments shown in fig. 2 or fig. 3.
The embodiment of the application thus provides an electronic device applicable to the foregoing method embodiments, which are not described in detail again here.
Compared with the prior art, in which depth information is acquired by configuring a structured light camera, the electronic device provided by the embodiment of the present application performs key frame extraction on multiple frames of images acquired by the monocular camera to obtain two key frame images, where the two key frame images include a current position image acquired by the monocular camera while the sweeping robot is at its current position. Based on the laser point cloud data acquired by the laser radar, pose information corresponding to the sweeping robot at the moments when the two key frame images were acquired by the monocular camera is then determined. Next, a depth value corresponding to each pixel point in the current position image is determined from the pose information so determined, and a depth map of the sweeping robot at the current position is determined from the depth values corresponding to the pixel points in the current position image. In this way, the depth information of the environment space of the sweeping robot is determined based on the laser radar and the monocular camera, avoiding the use of a structured light camera, which is strongly affected by illumination conditions, easily interfered with, and expensive. Because the solution is only weakly affected by illumination conditions, the accuracy with which the sweeping robot perceives its environment can be improved while the cost of the sweeping robot is reduced. In addition, the pose information of the sweeping robot is determined from the laser point cloud data acquired by the laser radar; compared with determining the pose information by feature matching on the images acquired by the monocular camera, the required amount of computation is relatively small, so the pose information can be determined with little computation, which in turn reduces the amount of computation needed to determine the depth map of the environment space.
The present application provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the method shown in the foregoing embodiments is implemented.
Compared with the prior art, in which depth information is acquired by configuring a structured light camera, the embodiment of the present application performs key frame extraction on multiple frames of images acquired by the monocular camera to obtain two key frame images, where the two key frame images include a current position image acquired by the monocular camera while the sweeping robot is at its current position. Based on the laser point cloud data acquired by the laser radar, pose information corresponding to the sweeping robot at the moments when the two key frame images were acquired by the monocular camera is then determined. Next, a depth value corresponding to each pixel point in the current position image is determined from the pose information so determined, and a depth map of the sweeping robot at the current position is determined from the depth values corresponding to the pixel points in the current position image. In this way, the depth information of the environment space of the sweeping robot is determined based on the laser radar and the monocular camera, avoiding the use of a structured light camera, which is strongly affected by illumination conditions, easily interfered with, and expensive. Because the solution is only weakly affected by illumination conditions, the accuracy with which the sweeping robot perceives its environment can be improved while the cost of the sweeping robot is reduced. In addition, the pose information of the sweeping robot is determined from the laser point cloud data acquired by the laser radar; compared with determining the pose information by feature matching on the images acquired by the monocular camera, the required amount of computation is relatively small, so the pose information can be determined with little computation, which in turn reduces the amount of computation needed to determine the depth map of the environment space.
The embodiment of the application thus provides a computer-readable storage medium applicable to the foregoing method embodiments, which are not described in detail again here.
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
The foregoing describes only some of the embodiments of the present application. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also fall within the protection scope of the present application.

Claims (10)

1. A depth map acquisition method based on a laser radar and a monocular camera is characterized by comprising the following steps:
performing key frame extraction based on a multi-frame image acquired by a monocular camera to obtain two key frame images, wherein the two key frame images comprise a current position image of the sweeping robot at the current position acquired by the monocular camera;
determining pose information respectively corresponding to the sweeping robot when the two frames of key frame images are acquired through a monocular camera based on laser point cloud data acquired through a laser radar;
determining depth values corresponding to all pixel points in the current position image based on the determined pose information respectively corresponding to the sweeping robot when the two frames of key frame images are obtained;
and determining a depth map of the sweeping robot at the current position based on the determined depth values corresponding to the pixel points in the current position image.
2. The method according to claim 1, wherein the determining depth values corresponding to the pixel points in the current position image based on the determined pose information respectively corresponding to the sweeping robot when the two frames of key frame images are acquired comprises:
and determining depth values corresponding to all pixel points in the current position image by a triangulation method based on the determined pose information respectively corresponding to the sweeping robot when the two frames of key frame images are acquired.
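Claim 2 names triangulation but does not fix a formula; the linear (DLT) two-view triangulation below is one common realization, shown as an illustrative sketch with synthetic projection matrices and a synthetic 3D point so the result can be checked. All numeric values are assumptions, not values from the application.

```python
import numpy as np

def triangulate(P_a, P_b, uv_a, uv_b):
    """Linear (DLT) triangulation of one pixel correspondence.

    P_a, P_b: 3x4 projection matrices of the two key frames
    uv_a, uv_b: matched pixel coordinates (u, v) in each image
    Returns the 3D point in world coordinates.
    """
    A = np.stack([
        uv_a[0] * P_a[2] - P_a[0],
        uv_a[1] * P_a[2] - P_a[1],
        uv_b[0] * P_b[2] - P_b[0],
        uv_b[1] * P_b[2] - P_b[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Synthetic check: two cameras 0.2 m apart along x, point 2 m ahead.
K = np.array([[525.0, 0, 320.0], [0, 525.0, 240.0], [0, 0, 1.0]])
P_a = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_b = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
X_true = np.array([0.3, 0.1, 2.0, 1.0])
uv_a = (P_a @ X_true)[:2] / (P_a @ X_true)[2]
uv_b = (P_b @ X_true)[:2] / (P_b @ X_true)[2]
print(triangulate(P_a, P_b, uv_a, uv_b))   # ~[0.3, 0.1, 2.0]
```

In this convention the z component of the recovered point, expressed in the current key frame's camera coordinates, is the depth value assigned to that pixel.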
3. The method according to claim 1, wherein the extracting key frames based on the multi-frame images acquired by the monocular camera to obtain two key frame images comprises:
determining that the obtained current position image of the sweeping robot at the current position is one key frame image of the two key frame images;
and determining that a certain candidate image which is in a preset relationship with the current position image in the multi-frame images acquired by the monocular camera is the other key frame image in the two key frame images, wherein the preset relationship comprises that the rotation angle and/or the position change of the sweeping robot meets a preset threshold condition when the current position image is acquired and compared with when the certain candidate image is acquired.
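Claim 3 only requires that the rotation angle and/or position change satisfy a preset threshold condition; the predicate below sketches one way to test that for planar (x, y, yaw) poses. The threshold values are illustrative assumptions, not values taken from the application.

```python
import numpy as np

def is_keyframe_pair(pose_cur, pose_cand,
                     min_trans=0.15, min_rot=np.deg2rad(10.0)):
    """Check whether a candidate image forms a usable key-frame pair with
    the current-position image. Thresholds are illustrative only."""
    dx = pose_cur[0] - pose_cand[0]
    dy = pose_cur[1] - pose_cand[1]
    trans = np.hypot(dx, dy)
    # Wrap the yaw difference to (-pi, pi] before comparing.
    rot = abs((pose_cur[2] - pose_cand[2] + np.pi) % (2 * np.pi) - np.pi)
    # Enough baseline for triangulation, from translation and/or rotation.
    return trans >= min_trans or rot >= min_rot

print(is_keyframe_pair((0.2, 0.0, 0.1), (0.0, 0.0, 0.0)))  # True
```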
4. The method of claim 1, wherein the determining pose information respectively corresponding to the sweeping robot when the two frames of keyframe images are acquired by a monocular camera based on the laser point cloud data acquired by the lidar comprises:
determining pose information of the sweeping robot at each position through a corresponding point cloud matching algorithm based on laser point cloud data acquired through a laser radar;
and determining pose information respectively corresponding to the sweeping robot when the two frames of key frame images are acquired through the monocular camera based on the determined pose information of the sweeping robot at each position through a time mapping relation.
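One plausible reading of the "time mapping relation" in claim 4 is interpolating the timestamped scan-matching poses to each key frame's capture time, as in the sketch below; linear interpolation of position with shortest-arc interpolation of yaw is an assumption, not a formula disclosed here.

```python
import bisect
import numpy as np

def pose_at_time(t_query, stamps, poses):
    """Interpolate planar poses (x, y, yaw) between sorted laser timestamps."""
    i = bisect.bisect_left(stamps, t_query)
    if i == 0:
        return poses[0]
    if i == len(stamps):
        return poses[-1]
    t0, t1 = stamps[i - 1], stamps[i]
    a = (t_query - t0) / (t1 - t0)
    (x0, y0, th0), (x1, y1, th1) = poses[i - 1], poses[i]
    # Shortest-arc interpolation for the yaw angle.
    dth = (th1 - th0 + np.pi) % (2 * np.pi) - np.pi
    return (x0 + a * (x1 - x0), y0 + a * (y1 - y0), th0 + a * dth)

stamps = [0.0, 0.1, 0.2]
poses = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.05), (0.2, 0.0, 0.10)]
print(pose_at_time(0.15, stamps, poses))   # pose at the camera timestamp
```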
5. The method of claim 1, further comprising:
determining travel information of the sweeping robot based on the obstacle distance information determined through the depth map, wherein the travel information comprises direction information and/or speed information for controlling the sweeping robot to travel.
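As a hypothetical example of deriving travel information from obstacle distances in the depth map, the snippet below slows or turns the robot according to the nearest depth in a central window; the window and the distance thresholds are invented for illustration.

```python
import numpy as np

def travel_command(depth_map, stop_dist=0.30, slow_dist=0.80):
    """Derive direction/speed from the nearest obstacle in the depth map.
    The window and distance thresholds are illustrative assumptions."""
    h, w = depth_map.shape
    ahead = depth_map[h // 3: 2 * h // 3, w // 3: 2 * w // 3]  # central window
    nearest = float(np.nanmin(ahead))
    if nearest < stop_dist:
        return {"direction": "turn", "speed": 0.0}     # avoid the obstacle
    if nearest < slow_dist:
        return {"direction": "forward", "speed": 0.1}  # approach slowly
    return {"direction": "forward", "speed": 0.3}

demo = np.full((120, 160), 2.0)
demo[50:70, 70:90] = 0.25          # a close obstacle straight ahead
print(travel_command(demo))        # -> turn, speed 0.0
```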
6. The method of claim 1, further comprising:
and constructing a two-dimensional map of the sweeping robot in an environment space based on the laser point cloud data acquired by the laser radar.
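A two-dimensional map built from laser point cloud data is commonly realized as an occupancy grid; the sketch below simply marks the cell hit by each laser return. Grid size, resolution, and origin are illustrative assumptions.

```python
import numpy as np

def update_grid(grid, points_xy, resolution=0.05, origin=(-5.0, -5.0)):
    """Mark laser returns (world coordinates, metres) as occupied cells."""
    for x, y in points_xy:
        i = int((y - origin[1]) / resolution)    # row index
        j = int((x - origin[0]) / resolution)    # column index
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            grid[i, j] = 1                       # occupied
    return grid

grid = np.zeros((200, 200), dtype=np.uint8)      # 10 m x 10 m at 5 cm per cell
scan = [(1.0, 0.0), (1.0, 0.5), (-2.0, 1.0)]     # example returns, world frame
update_grid(grid, scan)
print(int(grid.sum()))                           # 3 occupied cells
```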
7. The method of claim 6, further comprising:
planning a working path of the sweeping robot based on the constructed two-dimensional map of the sweeping robot in the environment space, wherein the working path comprises a route of the sweeping robot to the sweeping target area and/or a route of the sweeping robot to sweep the sweeping target area.
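Working-path planning over such a two-dimensional map can use any standard graph search; purely as an illustration, the following breadth-first search finds a shortest 4-connected route from the robot's cell to a target cell on the occupancy grid.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free, 1 = occupied)."""
    h, w = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:
            path = []
            while cur is not None:   # walk back to the start cell
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] == 0 \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = cur
                q.append((nr, nc))
    return None  # target area unreachable

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
print(bfs_path(grid, (0, 0), (2, 0)))  # route around the occupied row
```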
8. A sweeping robot, characterized in that the sweeping robot comprises: a laser radar, a monocular camera, and a determination device;
the laser radar is used for acquiring laser point cloud data of the sweeping robot at a corresponding position in an environment space;
the monocular camera is used for acquiring images of the corresponding position of the sweeping robot in the environment space;
the determination device comprises:
the device comprises an extraction module, a control module and a display module, wherein the extraction module is used for extracting key frames based on a multi-frame image acquired by a monocular camera to obtain two key frame images, and the two key frame images comprise a current position image of the sweeping robot at a current position acquired by the monocular camera;
a first determining module, configured to determine, based on laser point cloud data acquired through the laser radar, pose information respectively corresponding to the sweeping robot when the two frames of key frame images obtained by the extraction module were acquired through the monocular camera;
a second determining module, configured to determine, based on the pose information determined by the first determining module and corresponding to the sweeping robot when the two frames of key frame images are acquired, a depth value corresponding to each pixel point in the current position image;
and the third determining module is used for determining a depth map of the sweeping robot at the current position based on the depth values corresponding to the pixel points in the current position image determined by the second determining module.
9. An electronic device, comprising a processor and a memory;
the memory is used for storing operation instructions;
the processor is configured to execute the method for acquiring the depth map based on the lidar and the monocular camera according to any one of claims 1 to 7 by calling the operation instruction.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the lidar and monocular camera-based depth map acquisition method according to any one of claims 1 to 7.
CN201910168751.7A 2019-03-06 2019-03-06 Depth map acquisition method based on laser radar and monocular camera and sweeping robot Pending CN111665826A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910168751.7A CN111665826A (en) 2019-03-06 2019-03-06 Depth map acquisition method based on laser radar and monocular camera and sweeping robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910168751.7A CN111665826A (en) 2019-03-06 2019-03-06 Depth map acquisition method based on laser radar and monocular camera and sweeping robot

Publications (1)

Publication Number Publication Date
CN111665826A true CN111665826A (en) 2020-09-15

Family

ID=72381306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910168751.7A Pending CN111665826A (en) 2019-03-06 2019-03-06 Depth map acquisition method based on laser radar and monocular camera and sweeping robot

Country Status (1)

Country Link
CN (1) CN111665826A (en)



Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101582171A (en) * 2009-06-10 2009-11-18 清华大学 Method and device for creating depth maps
KR20140009737A (en) * 2012-07-12 2014-01-23 한국과학기술원 Hybrid map based localization method of robot
EP3078935A1 (en) * 2015-04-10 2016-10-12 The European Atomic Energy Community (EURATOM), represented by the European Commission Method and device for real-time mapping and localization
CN106913289A (en) * 2015-12-25 2017-07-04 北京奇虎科技有限公司 Cleaning processing method and apparatus for a sweeping robot
CN105843223A (en) * 2016-03-23 2016-08-10 东南大学 Mobile robot three-dimensional mapping and obstacle avoidance method based on a spatial bag-of-words model
CN108885792A (en) * 2016-04-27 2018-11-23 克朗设备公司 Pallet detection using units of physical length measure
CN106092104A (en) * 2016-08-26 2016-11-09 深圳微服机器人科技有限公司 Relocation method and device for an indoor robot
CN108073167A (en) * 2016-11-10 2018-05-25 深圳灵喵机器人技术有限公司 Positioning and navigation method based on depth camera and laser radar
US20180211399A1 (en) * 2017-01-26 2018-07-26 Samsung Electronics Co., Ltd. Modeling method and apparatus using three-dimensional (3D) point cloud
CN107357297A (en) * 2017-08-21 2017-11-17 深圳市镭神智能系统有限公司 Sweeping robot navigation system and navigation method therefor
CN107796397A (en) * 2017-09-14 2018-03-13 杭州迦智科技有限公司 Robot binocular vision localization method, device and storage medium
CN108780577A (en) * 2017-11-30 2018-11-09 深圳市大疆创新科技有限公司 Image processing method and device
CN108981693A (en) * 2018-03-22 2018-12-11 东南大学 Fast joint initialization method for VIO based on a monocular camera
CN108594825A (en) * 2018-05-31 2018-09-28 四川斐讯信息技术有限公司 Sweeping robot control method and system based on depth camera
CN108827306A (en) * 2018-05-31 2018-11-16 北京林业大学 UAV SLAM navigation method and system based on multi-sensor fusion
CN109100731A (en) * 2018-07-17 2018-12-28 重庆大学 Mobile robot positioning method based on laser radar scan matching algorithm
CN109300155A (en) * 2018-12-27 2019-02-01 常州节卡智能装备有限公司 Obstacle-avoidance route planning method, device, equipment and medium

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
GEORGES YOUNES等: "Keyframe-based monocular SLAM: design, survey, and future directions", 《ROBOTICS AND AUTONOMOUS SYSTEMS》, vol. 98, 28 September 2017 (2017-09-28), pages 67 - 88, XP055497793, DOI: 10.1016/j.robot.2017.09.010 *
ZHIPENG XIAO等: "Accurate extrinsic calibration between monocular camera and sparse 3D Lidar points without markers", 《2017 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV)》, 31 July 2017 (2017-07-31), pages 424 - 429 *
HE LIXIN; KONG BIN; YANG JING: "Automatic measurement method for the depth of static objects in monocular images", Geomatics and Information Science of Wuhan University, no. 05, 5 May 2016 (2016-05-05), pages 635 - 641 *
LIU KUN: "Target tracking and path planning for an indoor mobile robot based on visual information", China Master's Theses Full-text Database, Information Science and Technology, no. 01, 15 January 2019 (2019-01-15), pages 140 - 1926 *
AN SHUAI: "Research and design of SLAM based on a monocular camera and an RGB-D camera", China Master's Theses Full-text Database, Information Science and Technology, no. 12, 15 December 2018 (2018-12-15), pages 138 - 1665 *
ZHANG BIAO; CAO QIXIN; HE MINGCHAO: "Three-dimensional map building for a mobile robot based on multi-sensor fusion", China Sciencepaper, no. 08, 15 August 2013 (2013-08-15), pages 756 - 759 *
LI SHUAIXIN: "Research on 3D SLAM technology combining lidar and camera", China Master's Theses Full-text Database, Information Science and Technology, no. 12, 15 December 2018 (2018-12-15), pages 136 - 642 *
DU LICHAN; QIN TUANFA; LI XIANGCHENG: "Depth estimation method based on monocular bifocal imaging and SIFT feature matching", Video Engineering, no. 09, 2 May 2013 (2013-05-02) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112462758A (en) * 2020-11-06 2021-03-09 深圳市优必选科技股份有限公司 Map building method and device, computer-readable storage medium and robot
CN112462758B (en) * 2020-11-06 2022-05-06 深圳市优必选科技股份有限公司 Map building method and device, computer-readable storage medium and robot
CN112799095A (en) * 2020-12-31 2021-05-14 深圳市普渡科技有限公司 Static map generation method and device, computer equipment and storage medium
CN112799095B (en) * 2020-12-31 2023-03-14 深圳市普渡科技有限公司 Static map generation method and device, computer equipment and storage medium
CN113281770A (en) * 2021-05-28 2021-08-20 东软睿驰汽车技术(沈阳)有限公司 Coordinate system relationship obtaining method and device
CN114429432A (en) * 2022-04-07 2022-05-03 科大天工智能装备技术(天津)有限公司 Multi-source information layered fusion method and device and storage medium
CN114429432B (en) * 2022-04-07 2022-06-21 科大天工智能装备技术(天津)有限公司 Multi-source information layered fusion method and device and storage medium
CN114935341A (en) * 2022-07-25 2022-08-23 深圳市景创科技电子股份有限公司 Novel SLAM navigation computation video identification method and device
CN114935341B (en) * 2022-07-25 2022-11-29 深圳市景创科技电子股份有限公司 Novel SLAM navigation computation video identification method and device
CN117784797A (en) * 2024-02-23 2024-03-29 广东电网有限责任公司阳江供电局 Underwater intelligent robot navigation obstacle avoidance method based on visual images and laser radar
CN117784797B (en) * 2024-02-23 2024-05-24 广东电网有限责任公司阳江供电局 Underwater intelligent robot navigation obstacle avoidance method based on visual images and laser radar

Similar Documents

Publication Publication Date Title
US10796151B2 (en) Mapping a space using a multi-directional camera
CN111665826A (en) Depth map acquisition method based on laser radar and monocular camera and sweeping robot
CN108986161B (en) Three-dimensional space coordinate estimation method, device, terminal and storage medium
KR101725060B1 (en) Apparatus for recognizing location mobile robot using key point based on gradient and method thereof
US9420265B2 (en) Tracking poses of 3D camera using points and planes
KR101776622B1 (en) Apparatus for recognizing location mobile robot using edge based refinement and method thereof
CN110801180B (en) Operation method and device of cleaning robot
WO2020113423A1 (en) Target scene three-dimensional reconstruction method and system, and unmanned aerial vehicle
US8644557B2 (en) Method and apparatus for estimating position of moving vehicle such as mobile robot
CN111609852A (en) Semantic map construction method, sweeping robot and electronic equipment
KR101618030B1 (en) Method for Recognizing Position and Controlling Movement of a Mobile Robot, and the Mobile Robot Using the same
Meilland et al. A spherical robot-centered representation for urban navigation
CN110176032B (en) Three-dimensional reconstruction method and device
CN111679661A (en) Semantic map construction method based on depth camera and sweeping robot
CN108171715B (en) Image segmentation method and device
WO2017194962A1 (en) Real-time height mapping
GB2580691A (en) Depth estimation
CN111220148A (en) Mobile robot positioning method, system and device and mobile robot
CN111679664A (en) Three-dimensional map construction method based on depth camera and sweeping robot
Fiala et al. Robot navigation using panoramic tracking
CN113696180A (en) Robot automatic recharging method and device, storage medium and robot system
CN111609853A (en) Three-dimensional map construction method, sweeping robot and electronic equipment
CN111609854A (en) Three-dimensional map construction method based on multiple depth cameras and sweeping robot
CN111598927B (en) Positioning reconstruction method and device
WO2014203743A1 (en) Method for registering data using set of primitives

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination