CN116499453A - Electronic map generation method and device, mobile robot and storage medium

Info

Publication number: CN116499453A
Application number: CN202310452684.8A
Authority: CN (China)
Prior art keywords: depth image, roof, door frame, position information
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: 邓志 (Deng Zhi)
Current assignee: Hangzhou Ezviz Software Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Hangzhou Ezviz Software Co Ltd
Application filed by: Hangzhou Ezviz Software Co Ltd
Priority application: CN202310452684.8A

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G01C21/38 - Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 - Creation or updating of map data
    • G01C21/3807 - Creation or updating of map data characterised by the type of data
    • G01C21/3811 - Point data, e.g. Point of Interest [POI]
    • G01C21/3833 - Creation or updating of map data characterised by the source of data
    • G01C21/3837 - Data obtained from a single source


Abstract

The embodiment of the application provides an electronic map generation method and device, a mobile robot and a storage medium. The method comprises: acquiring a plurality of depth images of an indoor scene collected by a binocular camera; establishing a two-dimensional grid map according to the depth images and determining the position information of the roof and the door frames in the indoor scene; and determining the passable area boundary and the closed area boundary of the two-dimensional grid map according to the position information of the roof and the door frames, to obtain the electronic map of the indoor scene. By applying the technical scheme provided by the embodiment of the application, the problem that the closed boundaries and passable areas in the grid map cannot be detected when the field of view of the binocular vision sensor faces upward is avoided, and the accuracy of the grid map is improved.

Description

Electronic map generation method and device, mobile robot and storage medium
Technical Field
The present disclosure relates to the field of robot vision, and in particular, to a method and apparatus for generating an electronic map, a mobile robot, and a storage medium.
Background
In recent years, as artificial intelligence develops, indoor mobile robots are receiving more and more attention, and mapping is a major problem to be solved in unmanned navigation of indoor mobile robots. Two-dimensional grid maps are the most common type of map in unmanned navigation, and how to quickly construct accurate two-dimensional grid maps is a very important issue.
In the related art, when constructing a two-dimensional grid map of an indoor scene with a visual sensor, a depth image of the indoor scene is first acquired by the sensor, and the two-dimensional grid map is then constructed from the depth information of the obstacles lying in the robot's plane of motion. However, a two-dimensional grid map constructed in this way suffers from low boundary precision.
Disclosure of Invention
An object of an embodiment of the present application is to provide an electronic map generating method, an electronic map generating device, a mobile robot and a storage medium, which are used for improving the accuracy of a two-dimensional grid map boundary. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a method for generating an electronic map, where the method includes:
acquiring a plurality of depth images of an indoor scene acquired by a binocular camera;
according to each depth image, a two-dimensional grid map is established, and position information of a roof and a door frame in the indoor scene is determined;
and determining the passable area boundary and the closed area boundary of the two-dimensional grid map according to the position information of the roof and the door frame to obtain the electronic map of the indoor scene.
In one possible implementation manner, the building a two-dimensional grid map and determining the position information of the roof and the door frame in the indoor scene according to each depth image includes:
determining, for each depth image, a device pose of a device under the depth image, an obstacle position of each obstacle, and first position information of a roof and a door frame, wherein the device is the binocular camera or a mobile robot carrying the binocular camera;
establishing a two-dimensional grid map under the depth image according to the barrier position of each barrier under the depth image;
the determining a passable area boundary and a closed area boundary of the two-dimensional grid map according to the position information of the door frame and the roof to obtain an electronic map of the indoor scene includes:
determining a passable area boundary and/or a closed area boundary of a two-dimensional grid map under the depth image by utilizing first position information of a door frame and/or a roof under the depth image aiming at each depth image to obtain a boundary two-dimensional grid map of the depth image;
and according to the equipment pose under each depth image, fusing and correcting the boundary two-dimensional grid map of each depth image to obtain the electronic map of the indoor scene.
In one possible implementation manner, the building a two-dimensional grid map and determining the position information of the roof and the door frame in the indoor scene according to each depth image includes:
determining, for each depth image, a device pose of a device under the depth image, an obstacle position of each obstacle, and first position information of a roof and a door frame, wherein the device is the binocular camera or a mobile robot carrying the binocular camera;
establishing a two-dimensional grid map of the indoor scene according to the equipment pose under each depth image and the obstacle position of each obstacle;
determining second position information of the roof and the door frame in the indoor scene according to the equipment pose under each depth image and the first position information of the roof and the door frame;
the determining a passable area boundary and a closed area boundary of the two-dimensional grid map according to the position information of the roof and the door frame to obtain an electronic map of the indoor scene includes:
and determining a passable area boundary and a closed area boundary of a two-dimensional grid map of the indoor scene according to second position information of the roof and the door frame in the indoor scene, so as to obtain an electronic map of the indoor scene.
In one possible implementation manner, the determining, for each depth image, the device pose of the device under the depth image, the obstacle position of each obstacle, and the first position information of the roof and the door frame includes:
for each depth image, acquiring a robot pose of the mobile robot and an obstacle position of each obstacle under the depth image, wherein the robot pose is a pose of the mobile robot under a world coordinate system of the indoor scene, and the obstacle position of each obstacle is a position of each obstacle in a camera coordinate system of the binocular camera;
detecting a single rectangle feature region and a coordinate-system-like right-angle feature region in the depth image; when a single rectangle feature region is detected, acquiring the position information of that region in the camera coordinate system to obtain the first position information of a door frame under the depth image; and when a coordinate-system-like right-angle feature region is detected, acquiring the position information of that region in the camera coordinate system to obtain the first position information of the roof under the depth image.
In one possible implementation manner, the determining, for each depth image, a passable area boundary and/or a closed area boundary of a two-dimensional grid map under the depth image by using first position information of a door frame and/or a roof under the depth image, to obtain a boundary two-dimensional grid map of the depth image includes:
for each depth image, converting the first position information of the door frame and/or the roof under the depth image into a robot coordinate system from a camera coordinate system to obtain third position information of the door frame and/or the roof under the depth image;
and determining the passable area boundary and/or the roof closed area boundary of the two-dimensional grid map under the depth image according to the third position information of the door frame and/or the roof under the depth image, wherein the two-dimensional grid map under the depth image is the map under the robot coordinate system.
In a possible implementation manner, the determining the passable area boundary and/or the roof closed area boundary of the two-dimensional grid map under the depth image according to the third position information of the door frame and/or the roof under the depth image includes:
according to the third position information of the door frame and/or the roof under the depth image, extending the corner points representing the door frame and/or the corner points representing the roof outward by a preset length along the direction leading from the feature's centre point through each corner, to obtain the endpoint combination of the door frame and/or the roof;
projecting the endpoint combination of the door frame and/or the roof onto the two-dimensional grid map under the depth image to obtain a projected line segment of the door frame and/or a projected right angle of the roof;
when two projected right angles with collinear right-angle sides exist in the two-dimensional grid map of the depth image, setting the grid state between the collinear right-angle sides to the occupied state, obtaining the roof closed area boundary of the two-dimensional grid map under the depth image;
and when two collinear projected line segments exist in the two-dimensional grid map of the depth image, setting the grid state between the two collinear projected line segments as a passable area, obtaining the passable area boundary of the two-dimensional grid map under the depth image.
In one possible embodiment, the method further comprises:
calculating a first minimum Euclidean distance between the door frame and the mobile robot under the depth image according to the third position information of the door frame under the depth image; when the first minimum Euclidean distance is smaller than a preset threshold value, judging the door frame in the depth image to be a false detection; and/or,
calculating a second minimum Euclidean distance between the roof and the mobile robot under the depth image according to third position information of the roof under the depth image; and when the second minimum Euclidean distance is smaller than a preset threshold value, judging the roof in the depth image as false detection.
In a second aspect, an embodiment of the present application provides an electronic map generating apparatus, where the apparatus includes:
the image acquisition module is used for acquiring a plurality of depth images of the indoor scene acquired by the binocular camera;
the map building module is used for building a two-dimensional grid map and determining the position information of the roof and the door frame in the indoor scene according to each depth image;
and the boundary determining module is used for determining the passable area boundary and the closed area boundary of the two-dimensional grid map according to the position information of the roof and the door frame to obtain the electronic map of the indoor scene.
In one possible implementation manner, the map building module includes:
a position information confirming sub-module, configured to determine, for each depth image, a device pose of a device under the depth image, an obstacle position of each obstacle, and first position information of a roof and a door frame, where the device is the binocular camera or a mobile robot on which the binocular camera is mounted;
the first grid map generation sub-module is used for establishing a two-dimensional grid map under the depth image according to the obstacle positions of the obstacles under the depth image;
The boundary determination module includes:
the first boundary determining submodule is used for determining a passable area boundary and/or a closed area boundary of the two-dimensional grid map under the depth image by utilizing first position information of a door frame and/or a roof under the depth image for each depth image to obtain a boundary two-dimensional grid map of the depth image;
and the fusion correction sub-module is used for fusing and correcting the boundary two-dimensional grid map of each depth image according to the equipment pose under each depth image to obtain the electronic map of the indoor scene.
In one possible implementation manner, the map building module includes:
a position information confirming sub-module, configured to determine, for each depth image, a device pose of a device under the depth image, an obstacle position of each obstacle, and first position information of a roof and a door frame, where the device is the binocular camera or a mobile robot on which the binocular camera is mounted;
the second grid map generation sub-module is used for establishing a two-dimensional grid map of the indoor scene according to the equipment pose under each depth image and the obstacle position of each obstacle;
The second position information confirming sub-module is used for determining second position information of the roof and the door frame in the indoor scene according to the equipment pose under each depth image and the first position information of the roof and the door frame;
the boundary determination module includes:
and the second boundary determining submodule is used for determining the passable area boundary and the closed area boundary of the two-dimensional grid map of the indoor scene according to the second position information of the roof and the door frame in the indoor scene to obtain the electronic map of the indoor scene.
In a possible implementation manner, the location information confirming sub-module is specifically configured to:
for each depth image, acquiring a robot pose of the mobile robot and an obstacle position of each obstacle under the depth image, wherein the robot pose is a pose of the mobile robot under a world coordinate system of the indoor scene, and the obstacle position of each obstacle is a position of each obstacle in a camera coordinate system of the binocular camera;
detecting a single rectangle feature region and a coordinate-system-like right-angle feature region in the depth image; when a single rectangle feature region is detected, acquiring the position information of that region in the camera coordinate system to obtain the first position information of a door frame under the depth image; and when a coordinate-system-like right-angle feature region is detected, acquiring the position information of that region in the camera coordinate system to obtain the first position information of the roof under the depth image.
In one possible embodiment, the first boundary determination submodule includes:
the coordinate conversion unit is used for converting the first position information of the door frame and/or the roof under the depth image into a robot coordinate system from a camera coordinate system aiming at each depth image to obtain the third position information of the door frame and/or the roof under the depth image;
the boundary determining unit is used for determining the passable area boundary and/or the roof closed area boundary of the two-dimensional grid map under the depth image according to the third position information of the door frame and/or the roof under the depth image, wherein the two-dimensional grid map under the depth image is the map under the robot coordinate system.
In a possible embodiment, the boundary determining unit is specifically configured to:
according to the third position information of the door frame and/or the roof under the depth image, extending the corner points representing the door frame and/or the corner points representing the roof outward by a preset length along the direction leading from the feature's centre point through each corner, to obtain the endpoint combination of the door frame and/or the roof;
projecting the endpoint combination of the door frame and/or the roof onto the two-dimensional grid map under the depth image to obtain a projected line segment of the door frame and/or a projected right angle of the roof;
when two projected right angles with collinear right-angle sides exist in the two-dimensional grid map of the depth image, setting the grid state between the collinear right-angle sides to the occupied state, obtaining the roof closed area boundary of the two-dimensional grid map under the depth image;
and when two collinear projected line segments exist in the two-dimensional grid map of the depth image, setting the grid state between the two collinear projected line segments as a passable area, obtaining the passable area boundary of the two-dimensional grid map under the depth image.
In one possible embodiment, the apparatus further comprises:
the Euclidean distance calculating module is used for calculating a first minimum Euclidean distance between the door frame and the mobile robot under the depth image according to the third position information of the door frame under the depth image; when the first minimum Euclidean distance is smaller than a preset threshold value, judging the door frame in the depth image to be a false detection; and/or,
calculating a second minimum Euclidean distance between the roof and the mobile robot under the depth image according to third position information of the roof under the depth image; and when the second minimum Euclidean distance is smaller than a preset threshold value, judging the roof in the depth image as false detection.
In a third aspect, embodiments of the present application provide a mobile robot, including a binocular camera, a memory and a processor, wherein: the binocular camera is used for acquiring left-eye and right-eye images of an indoor scene and generating a depth image of the indoor scene based on the left-eye and right-eye images;
the memory is used for storing a computer program;
the processor is configured to implement any one of the electronic map generating methods described in the present application when executing the program stored in the memory.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having a computer program stored therein, which when executed by a processor, implements the electronic map generating method of any of the present application.
According to the electronic map generation method, the device and the mobile robot, a two-dimensional grid map is built according to the depth image of an indoor scene collected by a binocular camera, position information of a roof and a door frame in the indoor scene is determined, and then a passable area and a closed area boundary of the two-dimensional grid map are determined according to the position information of the roof and the door frame, so that the electronic map of the indoor scene is obtained. Therefore, by the method, the passable area and the closed area boundary of the two-dimensional grid map are determined by utilizing the position information of the roof and the door frame while the two-dimensional grid map is constructed, so that the accuracy of the two-dimensional grid map boundary can be improved.
Of course, not all of the above-described advantages need be achieved simultaneously in practicing any one of the products or methods of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required by the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; other drawings may also be obtained from them by those skilled in the art.
Fig. 1 is a first schematic diagram of an electronic map generating method according to an embodiment of the present application;
fig. 2 is an exemplary diagram of a mobile robot carrying a binocular camera according to an embodiment of the present application;
fig. 3 is a second schematic diagram of the electronic map generating method provided in the embodiment of the present application;
fig. 4 is an exemplary diagram of an electronic map generating method provided in an embodiment of the present application;
fig. 5 is a third schematic diagram of an electronic map generating method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of one possible implementation of step S121 in the present application;
FIG. 7 is a schematic diagram of one possible implementation of step S131 in the present application;
FIG. 8 is a schematic diagram of one possible implementation of step S1312;
FIG. 9 is a flowchart of a method for generating an electronic map according to an embodiment of the present application;
fig. 10 is a schematic diagram of an electronic map generating apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein fall within the scope of this disclosure.
First, terms of art in the embodiments of the present application will be explained:
SLAM: english is named as Simultaneous Localization and Mapping, SLAM for short, and instant positioning and map construction are performed;
line characteristics: a representation unit taking the whole line segment as a constraint;
key frame: image frames capable of characterizing key information at a moment in the course of motion or in a scene change;
grid map: characterizing whether the map is a traffic state map in terms of whether a single grid is occupied;
Depth map hole: when the partial area has no texture information or reflects light, depth values are lost on the depth map, so that a cavity is formed;
and (3) path finding: the map is searched for a passable path to the next target point.
In order to improve the accuracy of the two-dimensional grid map boundary, the first aspect of the embodiments of the present application provides an electronic map generating method, which can be applied to an electronic device. In a specific application, the electronic device may have a computing function, for example, a server or a mobile robot, which are all within the protection scope of the present application.
Referring to fig. 1, an embodiment of the present application provides a method for generating an electronic map, where the method includes:
step S11, acquiring a plurality of depth images of an indoor scene acquired by a binocular camera;
wherein, binocular camera can be carried on mobile robot. As shown in fig. 2, the ground mobile robot with the logic operation unit is provided with a binocular camera, an included angle α between the binocular camera and a horizontal plane is set to be greater than or equal to 45 °, a vertical field angle β of the binocular camera, and the size of the included angle α and the vertical field angle β between the binocular camera sensor and the horizontal plane is not particularly limited in the embodiment of the present application.
Wherein the depth image is acquired in real time by a binocular camera. Specifically, the left eye and the right eye of the binocular camera respectively acquire a left eye image and a right eye image, pixel point matching is carried out on the left eye image and the right eye image, and the depth of each pixel is calculated according to a matching result, so that a depth image is obtained; specific algorithms for obtaining depth images based on left-eye images and right-eye images may be found in the prior art, and are not specifically limited in this application. In one example, the depth image may be a depth image of a key frame created after synchronization of the left and right images, and the key frame is continuously created, so as to obtain multiple depth images.
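A minimal sketch of this stereo triangulation step, assuming a rectified image pair (the names focal_px and baseline_m are hypothetical calibration values, not from the patent):

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Rectified-stereo relation: depth = focal length * baseline / disparity.

    disparity: HxW array of pixel disparities from left/right matching.
    Pixels with no match (disparity <= 0) become depth-map holes (0).
    """
    depth = np.zeros_like(disparity, dtype=np.float64)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth
```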
In an exemplary moving process, the mobile robot continuously collects left-eye images and right-eye images of an indoor scene through a binocular camera carried by the mobile robot, analyzes the left-eye images and the right-eye images collected at the same moment, and obtains depth images at the moment, so that depth images of the robot at a plurality of positions in the indoor scene are obtained.
Step S12, building a two-dimensional grid map and determining the position information of the roof and the door frame in the indoor scene according to each depth image.
The two-dimensional grid map in the embodiment of the application can be obtained by drawing the depth image acquired by the binocular camera; in one example, a three-dimensional point cloud of an indoor scene can be established through a depth image, the three-dimensional point cloud is projected on a two-dimensional map to form a two-dimensional discrete obstacle map, and then the two-dimensional discrete obstacle map is scanned to obtain a two-dimensional grid map. The specific manner of establishing the three-dimensional point cloud or the two-dimensional grid map of the indoor scene by using the depth image can be referred to as related SLAM algorithm, and is not specifically limited in this application.
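A minimal sketch of projecting such a point cloud onto a two-dimensional grid, assuming the points are already expressed in a ground-parallel frame; the grid parameters are illustrative:

```python
import numpy as np

def project_to_grid(points_xyz, resolution=0.05, size=400):
    """Project a 3D obstacle point cloud onto a 2D occupancy grid.

    points_xyz: (N, 3) obstacle points, z up; resolution: metres per
    cell; the grid is size x size cells centred on the origin.
    """
    grid = np.zeros((size, size), dtype=np.uint8)  # 0 = free/unknown
    half = size // 2
    ix = np.floor(points_xyz[:, 0] / resolution).astype(int) + half
    iy = np.floor(points_xyz[:, 1] / resolution).astype(int) + half
    inside = (ix >= 0) & (ix < size) & (iy >= 0) & (iy < size)
    grid[iy[inside], ix[inside]] = 1  # 1 = occupied
    return grid
```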
The positions of the roof and the door frames in the indoor scene can be obtained by detecting roof features and door frame features in the depth image, for example by detecting whether point clouds in a coordinate-system-like right-angle form or in a single rectangle form appear in the depth image, determining the point clouds in the coordinate-system-like right-angle form as the roof and the point clouds in the single rectangle form as a door frame, and thereby determining the position information of the roof and the door frames. Alternatively, the roof area and the door frame area in the left-eye and/or right-eye image can be detected by computer vision techniques, and the positions of the roof and the door frames in the depth image then obtained by coordinate system conversion.
And S13, determining the passable area boundary and the closed area boundary of the two-dimensional grid map according to the position information of the roof and the door frame, and obtaining the electronic map of the indoor scene.
The passable area in the embodiments of the present application refers to a passable door in the indoor scene (a "door" here does not only mean a structural door that can be opened and closed; it may also be a passage without a structural door), and the closed area boundary refers to an impassable boundary of the indoor scene beneath the roof (for example, a wall structure).
In the embodiment of the application, the determining the passable area and the closed area boundary of the two-dimensional grid map according to the position information of the roof and the door frame may be determining the closed area boundary of the two-dimensional grid map based on the position information of the roof after the two-dimensional grid map is obtained, and determining the passable area in the two-dimensional grid map based on the position information of the door frame.
Therefore, by the method, the passable area and the closed area boundary of the two-dimensional grid map are determined by utilizing the position information of the roof and the door frame while the two-dimensional grid map is constructed, so that the accuracy of the two-dimensional grid map boundary can be improved.
Fig. 3 is a second schematic diagram of an electronic map generating method provided in the embodiment of the present application, as shown in fig. 3, in a specific implementation manner, the step S12 may specifically include the following steps:
step S121, for each depth image, determining first position information of a pose of the device, a position of an obstacle, a roof and a door frame of the device under the depth image, wherein the device is a binocular camera or a mobile robot carrying the binocular camera.
The binocular camera is carried on the mobile robot, so the relative pose between the binocular camera and the mobile robot can be considered fixed. The conversion relationship between the robot coordinate system and the camera coordinate system may be:

O_r(w) = T_rc * O_c

where O_r(w) denotes coordinates in the robot coordinate system, O_c denotes the same coordinates in the camera coordinate system, and T_rc denotes the transformation parameters between the binocular camera coordinate system and the robot coordinate system; in one example, the transformation parameters may be represented by a rotation matrix and a translation vector. The transformation parameters are obtained by pre-calibration; the specific calibration process is not limited in this application.
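A minimal sketch of applying this transformation, assuming T_rc is split into a rotation matrix R_rc and a translation vector t_rc (array names illustrative):

```python
import numpy as np

def camera_to_robot(points_cam, R_rc, t_rc):
    """Apply O_r = T_rc * O_c with T_rc split into a 3x3 rotation R_rc
    and a 3-vector translation t_rc from offline calibration.
    points_cam: (N, 3) points in the camera coordinate system."""
    return points_cam @ R_rc.T + t_rc
```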
The mobile robot obtains the depth images at different positions, so the pose of the device (the binocular camera or the mobile robot) differs from one depth image to the next. In this embodiment, the device pose may be the pose of the device in the indoor scene coordinate system (also called the world coordinate system in this application), and changes of the device pose may include rotation and translation. In one example, the world coordinate system may be established with the origin of the robot coordinate system of the mobile robot at its initial position as the origin of the world coordinate system: the mobile robot coordinate system O_r is established with the mobile robot centre O as its origin, the position of the mobile robot at the moment the binocular camera is successfully initialized is taken as the initial position, and the origin O of the robot coordinate system at that moment is taken as the origin of the world coordinate system.
In this embodiment of the present application, the position of each obstacle under the depth image may be obtained by detecting obstacles in the depth image. The position information of an obstacle represents the point cloud area of the obstacle in the depth image and may be expressed as the coordinates of that point cloud area in the camera coordinate system. The number of obstacles contained in a depth image may be one or more, depending on the actual obstacle distribution of the indoor scene.
Step S122, a two-dimensional grid map under the depth image is established according to the obstacle positions of the obstacles under the depth image.
For each depth image, a two-dimensional grid map under the depth image may be created based on the depth image, which may be a two-dimensional grid map under a robot coordinate system or a camera coordinate system.
The step S13 may specifically include the following steps:
step S131, for each depth image, determining the passable area boundary and/or the closed area boundary of the two-dimensional grid map under the depth image by utilizing the first position information of the door frame and/or the roof under the depth image, and obtaining the boundary two-dimensional grid map of the depth image.
The first position information of the door frame and/or the roof under the depth image may be the position information of the door frame and/or the roof in the robot coordinate system or the camera coordinate system. The closed area boundary of the two-dimensional grid map under the depth image can be determined from the first position information of the roof under the depth image, and the passable area boundary can be determined from the first position information of the door frame under the depth image. For example, a projected right angle obtained by projecting a coordinate-system-like right angle may be determined as a roof, and a projected line segment obtained by projecting a single rectangle may be determined as a door frame; whether the projected right-angle sides are collinear is then verified, and the grid state between collinear right-angle sides is determined as a closed area boundary; likewise, whether the projected line segments are collinear is verified, the occupied state of the grid between two collinear door frames is cleared, and that area is determined as a passable area. Illustratively, as shown in FIG. 4, the grid state between collinear right-angle sides is set as a closed area boundary, and the area between collinear line segments is set as a passable area.
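A minimal sketch of the collinearity test used here, with an assumed perpendicular-distance tolerance:

```python
import numpy as np

def roughly_collinear(seg_a, seg_b, tol=0.05):
    """Check whether two projected segments lie on (nearly) the same line.

    seg_a, seg_b: (x1, y1, x2, y2) segments in the grid-map plane.
    tol is an illustrative perpendicular-distance tolerance in metres.
    """
    a = np.asarray(seg_a, dtype=float)
    b = np.asarray(seg_b, dtype=float)
    p, q = a[:2], a[2:]
    d = q - p
    d = d / np.linalg.norm(d)  # unit direction of segment A's line

    def off_line(pt):
        w = pt - p
        return abs(w[0] * d[1] - w[1] * d[0])  # 2D cross product magnitude

    return max(off_line(b[:2]), off_line(b[2:])) < tol
```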
In general, the image coordinate system of the depth image is the camera coordinate system. In other embodiments, the image coordinate system of the depth image and the camera coordinate system differ; in that case the conversion relationship between the image coordinate system of each depth image and the camera coordinate system needs to be obtained. For the depth image corresponding to the i-th key frame, i.e. the i-th depth image, with n depth images in total, the j-th point in the i-th depth image may be written as P_ij = (x_ij, y_ij, z_ij), i = 0, 1, 2, …, n, j = 1, 2, 3, …, m, where m is the number of points in the i-th depth image.
The point cloud sets of the door frame and roof features in the i-th depth image are converted from the camera coordinate system into the robot coordinate system by applying the transformation parameters T_rc, giving T_rm, the point cloud set of the door frame in the i-th depth image under the robot coordinate system, and T_rd, the point cloud set of the roof in the i-th depth image under the robot coordinate system.
From T_rm and T_rd, the passable area boundary and the closed area boundary of the two-dimensional grid map under the i-th depth image can be obtained.
And step S132, according to the equipment pose under each depth image, fusing and correcting the boundary two-dimensional grid map of each depth image to obtain the electronic map of the indoor scene.
In one example, according to the device pose under each depth image, the boundary two-dimensional grid map of each depth image can be converted into the world coordinate system, and the boundary two-dimensional grid maps under the world coordinate system are then fused and corrected to obtain the electronic map of the indoor scene. For example, for the same boundary across the boundary two-dimensional grid maps in the world coordinate system, the final boundary line can be obtained directly by averaging; alternatively, boundary lines with obviously large deviations can be removed first, and the final boundary line then obtained by averaging the rest.
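A minimal sketch of such outlier-rejecting averaging, with an illustrative segment encoding and deviation threshold:

```python
import numpy as np

def fuse_boundary_estimates(lines, max_dev=0.1):
    """Fuse repeated estimates of the same boundary line.

    lines: (K, 4) array, each row a segment (x1, y1, x2, y2) in the
    world frame. Estimates deviating strongly from the median are
    dropped first, then the rest are averaged. max_dev is in metres.
    """
    lines = np.asarray(lines, dtype=np.float64)
    med = np.median(lines, axis=0)
    keep = np.all(np.abs(lines - med) < max_dev, axis=1)
    return lines[keep].mean(axis=0)  # final boundary line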
By adopting the embodiment of the application, the passable area and the closed area boundary of the two-dimensional grid map are determined by utilizing the position information of the roof and the door frame when the two-dimensional grid map is constructed, so that the problem that the closed boundary and the passable area in the grid map cannot be detected when the visual field of the binocular vision sensor is upward is avoided, and the accuracy of the grid map is improved.
Fig. 5 is a third schematic diagram of the electronic map generating method provided in the embodiment of the present application, as shown in fig. 5, in a specific implementation manner, the step S12 may specifically include the following steps:
Step S121, for each depth image, determining the device pose of the device under the depth image, the obstacle position of each obstacle, and the first position information of the roof and the door frame, wherein the device is the binocular camera or a mobile robot carrying the binocular camera.
Step S201, a two-dimensional grid map of the indoor scene is established according to the equipment pose under each depth image and the obstacle position of each obstacle.
In this embodiment, unlike step S122, after all the depth images are acquired, a two-dimensional grid map of an indoor scene may be established according to the obstacle information under each depth image, where the two-dimensional grid map may be a map of the indoor scene under the world coordinate system.
Step S202, determining second position information of the roof and the door frame in the indoor scene according to the equipment pose and the first position information of the roof and the door frame under each depth image.
In one example, according to the pose of the device (pose in world coordinate system) when the depth image is acquired, the first position information of the roof and the door frame in the depth image can be converted into the world coordinate system, so as to obtain the second position information of the roof and the door frame in the world coordinate system.
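A minimal sketch of this chaining of the per-image device pose with the calibrated camera-to-robot transform; all rotation/translation names are illustrative:

```python
import numpy as np

def camera_to_world(points_cam, R_wr, t_wr, R_rc, t_rc):
    """Chain camera->robot calibration with the per-image robot->world
    device pose to obtain second position information in the world frame."""
    points_robot = points_cam @ R_rc.T + t_rc  # camera -> robot
    return points_robot @ R_wr.T + t_wr        # robot -> world
```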
The step S13 may specifically include the following steps:
step S203, determining the passable area boundary and the closed area boundary of the two-dimensional grid map of the indoor scene according to the second position information of the roof and the door frame in the indoor scene, and obtaining the electronic map of the indoor scene.
In this embodiment of the present application, after all the depth images are acquired, second position information of the roof and the door frame in each depth image may be determined, where the second position information of the roof and the door frame may be a pose of the roof and the door frame in a world coordinate system.
By adopting the embodiment of the application, the passable area and the closed area boundary of the two-dimensional grid map are determined by utilizing the position information of the roof and the door frame after the two-dimensional grid map is constructed, so that the problem that the closed boundary and the passable area in the grid map cannot be detected when the visual field of the binocular vision sensor is upward is avoided, and the accuracy of the grid map is improved.
Fig. 6 is a schematic diagram of one possible implementation of step S121 in the present application; as shown in fig. 6, in a specific implementation, the step S121 may include the following steps:
step S1211, for each depth image, acquiring a robot pose of the mobile robot and an obstacle position of each obstacle under the depth image, wherein the robot pose is a pose of the mobile robot under a world coordinate system of an indoor scene, and the obstacle position of the obstacle is a position of the obstacle in a camera coordinate system of a binocular camera;
In one example, the position of the obstacle in the camera coordinate system of the binocular camera may be converted to the position of the obstacle in the robot coordinate system, and then to the position of the obstacle in the world coordinate system. The conversion relationship between the camera coordinate system and the robot coordinate system is the same as that described in step S121, and will not be described here again.
Step S1212, detecting a single rectangle feature region and a coordinate-system-like right-angle feature region in the depth image; when a single rectangle feature region is detected, acquiring the position information of that region in the camera coordinate system to obtain the first position information of the door frame under the depth image; and when a coordinate-system-like right-angle feature region is detected, acquiring the position information of that region in the camera coordinate system to obtain the first position information of the roof under the depth image.
By adopting this embodiment of the application, the single rectangle feature region and the coordinate-system-like right-angle feature region are detected in the depth image to obtain the position information of the door frames and the roof, which ensures that all door frames and roofs in the indoor scene are detected; the two-dimensional grid map can then be optimized according to the door frames and roofs, improving the precision of the grid map.
Optionally, in a specific implementation manner, the step S201 includes the following steps:
step S200, acquiring the pose of the device in the initialized state.
In the embodiment of the application, images acquired by the left and right eyes of the binocular camera at the same moment are aligned in time when the equipment is initialized, and the pose of the binocular camera and the pose of the robot at the moment are acquired, wherein the pose of the binocular camera can be the pose of the binocular camera under a camera coordinate system.
The process of building the two-dimensional grid map of the indoor scene may refer to a related SLAM algorithm, and in an optional specific implementation manner, the step S201 includes the following steps:
and step 1, calculating barrier information of each barrier based on the pose of the equipment in the initialized state to obtain a three-dimensional point cloud map.
And 2, projecting and scanning the three-dimensional point cloud map to obtain a two-dimensional grid map of the indoor scene.
Optionally, in a specific implementation manner, the step S202 includes the following steps:
step 11, according to the first position information of the door frame and/or the roof under each depth image, extending the corner points representing the door frame and/or the corner points representing the roof in each depth image outward by a preset length along the direction leading from the feature's centre point through each corner, to obtain the endpoint combination of each door frame and/or roof;
Step 12, combining and projecting the end points of the door frame and/or the roof onto a two-dimensional grid map under each depth image to obtain projection line segments of the door frame and/or projection right angles of the roof under each depth image;
step 13, fusing projection line segments of the door frame under each depth image and/or projection right angles of the roof to obtain a third map;
step 14, under the condition that two projection right angles with collinear right angle sides exist in the third map, setting a grid state between the collinear right angle sides as an occupied state, and obtaining a roof closed area boundary of the third map;
and 15, under the condition that two collinear projection line segments exist in the third map, setting a grid state between the two collinear projection line segments as a passable area, and obtaining a passable area boundary of the third map.
Optionally, in a specific implementation manner, the step S203 includes the following steps:
and 22, fusing the third map with the two-dimensional grid map of the indoor scene, and determining the passable area boundary and the closed area boundary of the two-dimensional grid map of the indoor scene to obtain the first electronic map of the indoor scene.
Optionally, in a specific implementation manner, after the step 22, the method further includes the following steps:
Step 23, calculating a first minimum Euclidean distance between the door frame and the mobile robot under each depth image according to the first position information of the door frame under each depth image; when the first minimum Euclidean distance is smaller than a preset threshold value, judging the door frame in the depth image to be a false detection; and/or,
step 24, calculating a second minimum Euclidean distance between the roof and the mobile robot under each depth image according to third position information of the roof under each depth image; and when the second minimum Euclidean distance is smaller than a preset threshold value, judging the roof in the depth image as false detection.
In this embodiment of the present application, the minimum Euclidean distance between a door frame or roof and the mobile robot may be the minimum Euclidean distance between the pose of the door frame or roof in the depth image under the robot coordinate system and all poses of the robot, or the minimum Euclidean distance between the position of the door frame or roof in the depth image under the world coordinate system and all positions of the robot; the preset threshold may be the radius of the robot, or may be set according to the actual situation. For example, in the world coordinate system, let the coordinate point set of a certain door frame or roof be Pij, let the set of all coordinate points visited by the robot after covering the indoor scene be Qxy, and let the robot radius be 10 cm; the Euclidean distance between each point in Qxy and each point in Pij is calculated, and when the minimum Euclidean distance is smaller than 10 cm the detection is considered a false detection.
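A minimal sketch of this false-detection check, using the robot radius as the assumed threshold:

```python
import numpy as np

def is_false_detection(feature_pts, robot_pts, radius=0.10):
    """Minimum-Euclidean-distance plausibility check.

    feature_pts: (M, D) door-frame or roof points; robot_pts: (K, D)
    robot positions in the same (world) frame; radius: robot radius
    in metres, used as the preset threshold.
    """
    feature_pts = np.asarray(feature_pts, dtype=float)
    robot_pts = np.asarray(robot_pts, dtype=float)
    d = np.linalg.norm(feature_pts[:, None, :] - robot_pts[None, :, :], axis=-1)
    return bool(d.min() < radius)  # too close to the robot -> false detection
```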
Fig. 7 is a schematic diagram of one possible implementation manner of step S131 in the present application, as shown in fig. 7, and in an alternative implementation manner, the step S131 may include the following steps:
step S1311, for each depth image, converts the first position information of the door frame and/or roof under the depth image from the camera coordinate system to the robot coordinate system, to obtain the third position information of the door frame and/or roof under the depth image.
In the embodiment of the present application, the conversion relationship between the camera coordinate system and the robot coordinate system is the same as the conversion relationship described in step S121, and will not be described here again.
Step S1312, determining a passable area boundary and/or a roof enclosed area boundary of the two-dimensional grid map under the depth image according to the third position information of the door frame and/or the roof under the depth image, wherein the two-dimensional grid map under the depth image is a map under the robot coordinate system.
In the embodiment of the application, the position information of the door frame and the roof under the camera coordinate system is converted into the world coordinate system, so that the characteristic information of the door frame and the roof in the world coordinate system can be directly projected when the two-dimensional grid map is optimized later, the passable area and the closed area boundary of the two-dimensional grid map are obtained, and the two-dimensional grid map can be optimized intuitively based on the position information of the door frame and the roof.
Fig. 8 is a schematic diagram of one possible implementation manner of step S1312 in the present application, as shown in fig. 8, in a specific implementation manner, step S1312 may include the following steps:
step S13121, according to the third position information of the door frame and/or the roof under the depth image, extending the corner points representing the door frame and/or the corner points representing the roof in the depth image outward by a preset length along the direction leading from the feature's centre point through each corner, to obtain the endpoint combination of the door frame and/or the roof.
For example, in the embodiments of the present application, each corner point representing the door frame and/or the roof may be extended outward by 10 cm to obtain the endpoint combination.
Step S13122, projecting the end point combination of the door frame and/or the roof onto a two-dimensional grid map under the depth image to obtain a projection line segment of the door frame and/or a projection right angle of the roof;
step S13123, when two projection right angles with collinear right angle sides exist in the two-dimensional grid map of the depth image, setting the grid state between the collinear right angle sides as an occupied state, and obtaining the roof closed area boundary of the two-dimensional grid map under the depth image;
in step S13124, when there are two co-linear projected line segments in the two-dimensional grid map of the depth image, the grid state between the two co-linear projected line segments is set as the passable area, and the passable area boundary of the two-dimensional grid map in the depth image is obtained.
In the embodiment of the application, the endpoint combinations representing the door frames and the roof are projected; when projected right angles with collinear sides exist, the grid between the collinear sides is set to the occupied state, and the grid state between two collinear projected line segments is determined as a passable area, so that the initial closed area and passable area of the two-dimensional grid map can be obtained quickly.
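A minimal sketch of the corner extension of step S13121, using the 10 cm example above; the centre point is assumed to be the feature's own centre:

```python
import numpy as np

def extend_corner(corner, center, length=0.10):
    """Extend a corner point outward from the feature's centre point by a
    preset length (the text's 10 cm example), giving one endpoint of the
    endpoint combination that is later projected onto the grid map."""
    corner = np.asarray(corner, dtype=float)
    center = np.asarray(center, dtype=float)
    v = corner - center
    return corner + length * v / np.linalg.norm(v)
```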
Optionally, in a specific implementation manner, after step S1312, the method further includes the following steps:
step S1313, calculating a first minimum Euclidean distance between the door frame and the mobile robot under the depth image according to the third position information of the door frame under the depth image; when the first minimum Euclidean distance is smaller than a preset threshold value, judging the door frame in the depth image to be a false detection; and/or,
step S1314, calculating a second minimum Euclidean distance between the roof and the mobile robot under the depth image according to the third position information of the roof under the depth image; and when the second minimum Euclidean distance is smaller than a preset threshold value, judging the roof in the depth image as false detection.
In this embodiment of the present application, the minimum Euclidean distance between a door frame or roof and the mobile robot may be the minimum Euclidean distance between the pose of the door frame or roof in the depth image under the robot coordinate system and the pose of the robot, or the minimum Euclidean distance between the position of the door frame or roof in the depth image under the world coordinate system and the position of the robot. The preset threshold may be an empirical value set according to the actual situation, on the principle that when the Euclidean distance between the door frame/roof and the mobile robot is smaller than the threshold, the door frame/roof is considered a false detection; for example, the preset threshold may be the radius of the robot. In one example, the coordinate point set of a door frame or roof under the robot coordinate system in the depth image is Pij, the coordinate point of the robot under the depth image is Q, and the robot radius is 10 cm; the Euclidean distance between Q and each point in Pij is calculated, and when the minimum Euclidean distance is smaller than 10 cm the detected door frame or roof is considered a false detection.
In the embodiment of the application, the minimum Euclidean distance between the door frame and the mobile robot, and between the roof and the mobile robot, is compared with the threshold, and whether false detections have occurred in the initial closed area and passable area of the two-dimensional grid map is judged from the comparison result, further improving the accuracy of the grid map.
To illustrate the method of the embodiments of the present application, refer to fig. 9, a flowchart of the electronic map generating method. First, it is determined whether the left-eye and right-eye images of the binocular camera are synchronous (if not, they need to be adjusted to be synchronous in time sequence). After synchronization, the left-eye and right-eye images of the binocular camera are matched to obtain the initialized pose of the camera. Key frame data are then continuously created (the key frame data include a depth image and may further include at least one of a left-eye image, a right-eye image and a left-right composite image). On the one hand, the key frame data are detected to obtain the positions of the door frames and the roof, and the passable area and closed area boundary of the grid map are determined based on those positions in each key frame; on the other hand, the camera pose and the map point positions are estimated from each key frame's data, and the map points are projected to construct an obstacle map. Finally, the passable area and closed area boundary of the grid map are optimized to obtain the optimized electronic map.
Corresponding to the above method embodiment, the embodiment of the present application further provides an electronic map generating device, as shown in fig. 10, where the device may include the following modules:
an image acquisition module 1001 for acquiring a plurality of depth images of an indoor scene acquired by a binocular camera;
the map building module 1002 is configured to build a two-dimensional grid map according to each depth image and determine position information of a roof and a door frame in an indoor scene;
the boundary determining module 1003 is configured to determine a passable area boundary and a closed area boundary of the two-dimensional grid map according to the positional information of the roof and the door frame, and obtain an electronic map of the indoor scene.
Therefore, with the above apparatus, the position information of the roof and the door frame is used to determine the passable area and closed area boundaries when the two-dimensional grid map is constructed, which avoids the problem that the closed boundary and the passable area in the grid map cannot be detected when the field of view of the binocular vision sensor faces upward, thereby improving the accuracy of the grid map.
Optionally, in a specific implementation manner, the map building module 1002 includes:
the position information confirming sub-module is used for determining, for each depth image, the device pose of a device under the depth image, the obstacle position of each obstacle, and the first position information of the roof and the door frame, wherein the device is the binocular camera or a mobile robot carrying the binocular camera;
The first grid map generation sub-module is used for establishing a two-dimensional grid map under the depth image according to the obstacle positions of the obstacles under the depth image;
the boundary determining module 1003 includes:
the first boundary determining submodule is used for determining, for each depth image, a passable area boundary and/or a closed area boundary of the two-dimensional grid map under the depth image by utilizing the first position information of the door frame and/or the roof under the depth image, to obtain a boundary two-dimensional grid map of the depth image;
and the fusion correction sub-module is used for fusing and correcting the boundary two-dimensional grid maps of the depth images according to the device pose under each depth image to obtain the electronic map of the indoor scene.
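A minimal sketch of this fuse-and-correct step follows, assuming each per-image boundary grid has been reduced to a set of occupied (x, y) cells in the robot frame together with a device pose (x, y, yaw); the data layout and names are illustrative assumptions only.

```python
import math

def fuse_boundary_maps(per_frame_maps):
    """Union of each frame's occupied cells after a 2D rigid-body
    transform into the world frame using that frame's device pose."""
    fused = set()
    for (px, py, yaw), cells in per_frame_maps:
        c, s = math.cos(yaw), math.sin(yaw)
        for x, y in cells:
            wx = px + c * x - s * y   # rotate by yaw, then translate
            wy = py + s * x + c * y
            fused.add((round(wx, 2), round(wy, 2)))
    return fused

frames = [((0.0, 0.0, 0.0), {(1.0, 0.0)}),
          ((1.0, 0.0, math.pi / 2), {(1.0, 0.0)})]
print(fuse_boundary_maps(frames))  # {(1.0, 0.0), (1.0, 1.0)}
```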
Optionally, in a specific implementation manner, the map building module 1002 includes:
the position information confirming sub-module is used for determining, for each depth image, the device pose of a device under the depth image, the obstacle position of each obstacle, and the first position information of the roof and the door frame, wherein the device is the binocular camera or a mobile robot carrying the binocular camera;
the second grid map generation sub-module is used for establishing a two-dimensional grid map of the indoor scene according to the device pose under each depth image and the obstacle position of each obstacle;
the second position information confirming sub-module is used for determining second position information of the roof and the door frame in the indoor scene according to the device pose under each depth image and the first position information of the roof and the door frame;
the boundary determining module 1003 includes:
and the second boundary determining submodule is used for determining the passable area boundary and the closed area boundary of the two-dimensional grid map of the indoor scene according to the second position information of the roof and the door frame in the indoor scene to obtain the electronic map of the indoor scene.
Optionally, in a specific implementation manner, the location information confirmation sub-module is specifically configured to:
for each depth image, acquiring the robot pose of the mobile robot and the obstacle position of each obstacle under the depth image, wherein the robot pose is the pose of the mobile robot under the world coordinate system of the indoor scene, and the obstacle position of an obstacle is the position of that obstacle in the camera coordinate system of the binocular camera;
detecting a single right-angle feature region and a coordinate-system-like right-angle feature region on the depth image; under the condition that a single right-angle feature region is detected, acquiring the position information of the single right-angle feature region in the depth image in the camera coordinate system to obtain the first position information of the door frame under the depth image; and under the condition that a coordinate-system-like right-angle feature region is detected, acquiring the position information of the coordinate-system-like right-angle feature region in the depth image in the camera coordinate system to obtain the first position information of the roof under the depth image.
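For illustration, a corner pixel of a detected right-angle feature region can be lifted into the camera coordinate system by standard pinhole back-projection; the intrinsics fx, fy, cx, cy and the sample values below are assumptions for the sketch, not parameters given by this application.

```python
def backproject(u: float, v: float, depth: float,
                fx: float, fy: float, cx: float, cy: float):
    """Map pixel (u, v) with depth z (sampled from the depth image) to a
    3D point (x, y, z) in the camera coordinate system, pinhole model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Example: a door-frame corner at pixel (400, 120), 2.4 m away.
print(backproject(400, 120, 2.4, fx=600.0, fy=600.0, cx=320.0, cy=240.0))
# approximately (0.32, -0.48, 2.4)
```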
Optionally, in a specific implementation manner, the first boundary determining submodule includes:
the coordinate conversion unit is used for converting, for each depth image, the first position information of the door frame and/or the roof under the depth image from the camera coordinate system into the robot coordinate system to obtain the third position information of the door frame and/or the roof under the depth image;
the boundary determining unit is used for determining the passable area boundary and/or the roof closed area boundary of the two-dimensional grid map under the depth image according to the third position information of the door frame and/or the roof under the depth image, wherein the two-dimensional grid map under the depth image is a map under the robot coordinate system.
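The conversion itself is an ordinary rigid-body transform. A minimal sketch follows, assuming the camera-to-robot extrinsics are known as a rotation matrix R and a translation vector t; the application does not specify how the extrinsics are calibrated, and the example values are assumptions.

```python
import numpy as np

def camera_to_robot(p_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Rigid transform from the camera frame to the robot frame:
    p_robot = R @ p_cam + t."""
    return R @ p_cam + t

# Example extrinsics: camera optical axis along the robot's forward (x) axis,
# camera mounted 0.1 m above the robot origin (assumed values).
R = np.array([[0.0,  0.0, 1.0],    # camera z (depth) -> robot x (forward)
              [-1.0, 0.0, 0.0],    # camera x (right) -> robot -y
              [0.0, -1.0, 0.0]])   # camera y (down)  -> robot -z
t = np.array([0.0, 0.0, 0.1])
print(camera_to_robot(np.array([0.32, -0.48, 2.4]), R, t))  # ~[ 2.4 -0.32 0.58]
```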
Optionally, in a specific implementation manner, the boundary determining unit is specifically configured to:
according to the third position information of the door frame and/or the roof under the depth image, extending the corner points representing the door frame and/or the corner points representing the roof in the depth image outward from the center point along the edge directions by a preset length to obtain endpoint combinations of the door frame and/or the roof;
projecting the endpoint combinations of the door frame and/or the roof onto the two-dimensional grid map under the depth image to obtain a projection line segment of the door frame and/or a projection right angle of the roof;
under the condition that two projection right angles with collinear right-angle sides exist in the two-dimensional grid map of the depth image, setting the grid state between the collinear right-angle sides to the occupied state, to obtain the roof closed area boundary of the two-dimensional grid map under the depth image;
and under the condition that two collinear projection line segments exist in the two-dimensional grid map of the depth image, setting the grid state between the two collinear projection line segments to the passable state, to obtain the passable area boundary of the two-dimensional grid map under the depth image.
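The sketch below illustrates the filling rule in a simplified, axis-aligned form: two collinear projected door-frame segments on one grid row have the cells strictly between them set passable, mirroring the occupied-state rule for collinear right-angle sides of the roof. The grid encoding (0 unknown, 1 occupied, 2 passable) and the helper name are assumptions for illustration.

```python
import numpy as np

def fill_between_collinear(grid: np.ndarray, row: int,
                           seg_a: tuple, seg_b: tuple, state: int) -> None:
    """Set the cells on `row` lying strictly between two collinear
    segments, each given as (start_col, end_col), to `state`."""
    left = min(seg_a[1], seg_b[1])    # inner edge of the left segment
    right = max(seg_a[0], seg_b[0])   # inner edge of the right segment
    grid[row, left + 1:right] = state

grid = np.zeros((5, 12), dtype=int)
grid[2, 1:3] = 1    # projected door-frame segment A occupies columns 1-2
grid[2, 8:10] = 1   # projected door-frame segment B occupies columns 8-9
fill_between_collinear(grid, 2, (1, 2), (8, 9), state=2)
print(grid[2])      # [0 1 1 2 2 2 2 2 1 1 0 0]
```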
Optionally, in a specific implementation manner, the apparatus further includes:
the Euclidean distance calculating module is used for calculating a first minimum Euclidean distance between the door frame and the mobile robot under the depth image according to the third position information of the door frame under the depth image; when the first minimum Euclidean distance is smaller than a preset threshold value, judging the door frame in the depth image as a false detection; and/or,
calculating a second minimum Euclidean distance between the roof and the mobile robot under the depth image according to the third position information of the roof under the depth image; and when the second minimum Euclidean distance is smaller than a preset threshold value, judging the roof in the depth image as a false detection.
In still another embodiment of the present application, a robot is further provided. The robot carries a binocular camera and is configured to implement, when running, the electronic map generating method of any one of the foregoing embodiments.
The embodiment of the application also provides a mobile robot, which comprises:
a binocular camera, used for acquiring left and right eye images of an indoor scene and generating a depth image of the indoor scene based on the left and right eye images;
a memory, used for storing a computer program;
and a processor, configured to implement any one of the electronic map generating methods described in the present application when executing the program stored in the memory.
The memory may include Random Access Memory (RAM), or may include Non-Volatile Memory (NVM), such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment provided herein, there is also provided a computer readable storage medium having stored therein a computer program which when executed by a processor implements the steps of any of the electronic map generating methods described above.
In yet another embodiment provided herein, there is also provided a computer program product containing instructions that, when run on a computer, cause the computer to perform any of the electronic map generating methods of the above embodiments.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, by wired means (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), and the like.
It is noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
In this specification, the embodiments are described in a related manner; identical or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the descriptions of the embodiments of the apparatus, the mobile robot, the computer-readable storage medium, and the computer program product are relatively brief, since they are substantially similar to the method embodiments; for relevant details, see the corresponding parts of the description of the method embodiments.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the present application. Any modifications, equivalent substitutions, improvements, etc. that are within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (11)

1. An electronic map generating method, characterized by comprising the following steps:
acquiring a plurality of depth images of an indoor scene acquired by a binocular camera;
according to each depth image, a two-dimensional grid map is established, and position information of a roof and a door frame in the indoor scene is determined;
and determining the passable area boundary and the closed area boundary of the two-dimensional grid map according to the position information of the roof and the door frame to obtain the electronic map of the indoor scene.
2. The method of claim 1, wherein the creating a two-dimensional grid map and determining positional information of a roof and a door frame in the indoor scene from each of the depth images comprises:
determining, for each depth image, a device pose of a device under the depth image, an obstacle position of each obstacle, and first position information of a roof and a door frame, wherein the device is the binocular camera or a mobile robot carrying the binocular camera;
establishing a two-dimensional grid map under the depth image according to the obstacle position of each obstacle under the depth image;
and the determining a passable area boundary and a closed area boundary of the two-dimensional grid map according to the position information of the door frame and the roof to obtain an electronic map of the indoor scene comprises:
for each depth image, determining a passable area boundary and/or a closed area boundary of the two-dimensional grid map under the depth image by utilizing the first position information of the door frame and/or the roof under the depth image, to obtain a boundary two-dimensional grid map of the depth image;
and according to the device pose under each depth image, fusing and correcting the boundary two-dimensional grid maps of the depth images to obtain the electronic map of the indoor scene.
3. The method of claim 1, wherein the creating a two-dimensional grid map and determining positional information of a roof and a door frame in the indoor scene from each of the depth images comprises:
determining, for each depth image, a device pose of a device under the depth image, an obstacle position of each obstacle, and first position information of a roof and a door frame, wherein the device is the binocular camera or a mobile robot carrying the binocular camera;
establishing a two-dimensional grid map of the indoor scene according to the device pose under each depth image and the obstacle position of each obstacle;
determining second position information of the roof and the door frame in the indoor scene according to the device pose under each depth image and the first position information of the roof and the door frame;
and the determining a passable area boundary and a closed area boundary of the two-dimensional grid map according to the position information of the roof and the door frame to obtain an electronic map of the indoor scene comprises the following steps:
and determining a passable area boundary and a closed area boundary of a two-dimensional grid map of the indoor scene according to second position information of the roof and the door frame in the indoor scene, so as to obtain an electronic map of the indoor scene.
4. A method according to claim 2 or 3, wherein determining, for each depth image, the device pose of the device under the depth image, the obstacle position of each obstacle, and the first position information of the roof and the door frame comprises:
for each depth image, acquiring a robot pose of the mobile robot and an obstacle position of each obstacle under the depth image, wherein the robot pose is a pose of the mobile robot under a world coordinate system of the indoor scene, and the obstacle position of each obstacle is a position of each obstacle in a camera coordinate system of the binocular camera;
detecting a single right-angle feature region and a coordinate-system-like right-angle feature region on the depth image; under the condition that a single right-angle feature region is detected, acquiring the position information of the single right-angle feature region in the depth image in the camera coordinate system to obtain the first position information of the door frame under the depth image; and under the condition that a coordinate-system-like right-angle feature region is detected, acquiring the position information of the coordinate-system-like right-angle feature region in the depth image in the camera coordinate system to obtain the first position information of the roof under the depth image.
5. The method according to claim 2, wherein for each depth image, determining a passable area boundary and/or a closed area boundary of the two-dimensional grid map under the depth image by using the first position information of the door frame and/or the roof under the depth image, to obtain a boundary two-dimensional grid map of the depth image comprises:
for each depth image, converting the first position information of the door frame and/or the roof under the depth image into a robot coordinate system from a camera coordinate system to obtain third position information of the door frame and/or the roof under the depth image;
And determining the passable area boundary and/or the roof closed area boundary of the two-dimensional grid map under the depth image according to the third position information of the door frame and/or the roof under the depth image, wherein the two-dimensional grid map under the depth image is the map under the robot coordinate system.
6. The method according to claim 5, wherein determining passable area boundaries and/or roof-enclosed area boundaries of the two-dimensional grid map under the depth image based on the third position information of the door frame and/or roof under the depth image comprises:
according to the third position information of the door frame and/or the roof under the depth image, extending the corner points representing the door frame and/or the corner points representing the roof in the depth image outward from the center point along the edge directions by a preset length to obtain endpoint combinations of the door frame and/or the roof;
projecting the endpoint combinations of the door frame and/or the roof onto the two-dimensional grid map under the depth image to obtain a projection line segment of the door frame and/or a projection right angle of the roof;
under the condition that two projection right angles with collinear right-angle sides exist in the two-dimensional grid map of the depth image, setting the grid state between the collinear right-angle sides to the occupied state, to obtain the roof closed area boundary of the two-dimensional grid map under the depth image;
and under the condition that two collinear projection line segments exist in the two-dimensional grid map of the depth image, setting the grid state between the two collinear projection line segments to the passable state, to obtain the passable area boundary of the two-dimensional grid map under the depth image.
7. The method of claim 5, wherein the method further comprises:
calculating a first minimum Euclidean distance between the door frame and the mobile robot under the depth image according to the third position information of the door frame under the depth image; when the first minimum Euclidean distance is smaller than a preset threshold value, judging the door frame in the depth image as a false detection; and/or,
calculating a second minimum Euclidean distance between the roof and the mobile robot under the depth image according to the third position information of the roof under the depth image; and when the second minimum Euclidean distance is smaller than a preset threshold value, judging the roof in the depth image as a false detection.
8. An electronic map generation apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring a plurality of depth images of the indoor scene acquired by the binocular camera;
the map building module is used for building a two-dimensional grid map and determining the position information of the roof and the door frame in the indoor scene according to each depth image;
And the boundary determining module is used for determining the passable area boundary and the closed area boundary of the two-dimensional grid map according to the position information of the roof and the door frame to obtain the electronic map of the indoor scene.
9. The apparatus of claim 8, wherein the map creation module comprises:
a position information confirming sub-module, configured to determine, for each depth image, a device pose of a device under the depth image, an obstacle position of each obstacle, and first position information of a roof and a door frame, where the device is the binocular camera or a mobile robot on which the binocular camera is mounted;
the first grid map generation sub-module is used for establishing a two-dimensional grid map under the depth image according to the obstacle positions of the obstacles under the depth image;
the boundary determining module includes:
the first boundary determining submodule is used for determining, for each depth image, a passable area boundary and/or a closed area boundary of the two-dimensional grid map under the depth image by utilizing the first position information of the door frame and/or the roof under the depth image, to obtain a boundary two-dimensional grid map of the depth image;
the fusion correction sub-module is used for fusing and correcting the boundary two-dimensional grid maps of the depth images according to the device pose under each depth image to obtain the electronic map of the indoor scene;
the map building module comprises:
a position information confirming sub-module, configured to determine, for each depth image, a device pose of a device under the depth image, an obstacle position of each obstacle, and first position information of a roof and a door frame, where the device is the binocular camera or a mobile robot on which the binocular camera is mounted;
the second grid map generation sub-module is used for establishing a two-dimensional grid map of the indoor scene according to the device pose under each depth image and the obstacle position of each obstacle;
the second position information confirming sub-module is used for determining second position information of the roof and the door frame in the indoor scene according to the device pose under each depth image and the first position information of the roof and the door frame;
the boundary determining module includes:
the second boundary determining submodule is used for determining a passable area boundary and a closed area boundary of a two-dimensional grid map of the indoor scene according to second position information of the roof and the door frame in the indoor scene to obtain an electronic map of the indoor scene;
The location information confirming sub-module is specifically configured to:
for each depth image, acquiring a robot pose of the mobile robot and an obstacle position of each obstacle under the depth image, wherein the robot pose is a pose of the mobile robot under a world coordinate system of the indoor scene, and the obstacle position of each obstacle is a position of each obstacle in a camera coordinate system of the binocular camera;
detecting a single right-angle feature region and a coordinate-system-like right-angle feature region on the depth image; under the condition that a single right-angle feature region is detected, acquiring the position information of the single right-angle feature region in the depth image in the camera coordinate system to obtain the first position information of the door frame under the depth image; and under the condition that a coordinate-system-like right-angle feature region is detected, acquiring the position information of the coordinate-system-like right-angle feature region in the depth image in the camera coordinate system to obtain the first position information of the roof under the depth image;
the first boundary determining submodule includes:
the coordinate conversion unit is used for converting, for each depth image, the first position information of the door frame and/or the roof under the depth image from the camera coordinate system into the robot coordinate system to obtain the third position information of the door frame and/or the roof under the depth image;
The boundary determining unit is used for determining the passable area boundary and/or the roof closed area boundary of the two-dimensional grid map under the depth image according to the third position information of the door frame and/or the roof under the depth image, wherein the two-dimensional grid map under the depth image is a map under the robot coordinate system;
the boundary determining unit is specifically configured to:
according to the third position information of the door frame and/or the roof under the depth image, extending the corner points representing the door frame and/or the corner points representing the roof in the depth image outward from the center point along the edge directions by a preset length to obtain endpoint combinations of the door frame and/or the roof;
projecting the endpoint combinations of the door frame and/or the roof onto the two-dimensional grid map under the depth image to obtain a projection line segment of the door frame and/or a projection right angle of the roof;
under the condition that two projection right angles with collinear right-angle sides exist in the two-dimensional grid map of the depth image, setting the grid state between the collinear right-angle sides to the occupied state, to obtain the roof closed area boundary of the two-dimensional grid map under the depth image;
under the condition that two collinear projection line segments exist in the two-dimensional grid map of the depth image, setting the grid state between the two collinear projection line segments to the passable state, to obtain the passable area boundary of the two-dimensional grid map under the depth image;
The apparatus further comprises:
the Euclidean distance calculating module is used for calculating a first minimum Euclidean distance between the door frame and the mobile robot under the depth image according to the third position information of the door frame under the depth image; when the first minimum Euclidean distance is smaller than a preset threshold value, judging the door frame in the depth image as a false detection; and/or,
calculating a second minimum Euclidean distance between the roof and the mobile robot under the depth image according to the third position information of the roof under the depth image; and when the second minimum Euclidean distance is smaller than a preset threshold value, judging the roof in the depth image as a false detection.
10. A mobile robot, characterized in that the mobile robot comprises:
a binocular camera, used for acquiring left and right eye images of an indoor scene and generating a depth image of the indoor scene based on the left and right eye images;
a memory, used for storing a computer program;
and a processor, configured to implement the electronic map generating method according to any one of claims 1 to 7 when executing the program stored in the memory.
11. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when executed by a processor, implements the electronic map generating method of any one of claims 1 to 7.
CN202310452684.8A 2023-04-21 2023-04-21 Electronic map generation method and device, mobile robot and storage medium Pending CN116499453A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310452684.8A CN116499453A (en) 2023-04-21 2023-04-21 Electronic map generation method and device, mobile robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310452684.8A CN116499453A (en) 2023-04-21 2023-04-21 Electronic map generation method and device, mobile robot and storage medium

Publications (1)

Publication Number Publication Date
CN116499453A true CN116499453A (en) 2023-07-28

Family

ID=87321120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310452684.8A Pending CN116499453A (en) 2023-04-21 2023-04-21 Electronic map generation method and device, mobile robot and storage medium

Country Status (1)

Country Link
CN (1) CN116499453A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117387649A (en) * 2023-10-26 2024-01-12 苏州大学 Self-adaptive navigation method and system for uncertain environment robot with probability self-updating

Similar Documents

Publication Publication Date Title
US20230260151A1 (en) Simultaneous Localization and Mapping Method, Device, System and Storage Medium
CN110893617B (en) Obstacle detection method and device and storage device
Alismail et al. Automatic calibration of a range sensor and camera system
CN110807350A (en) System and method for visual SLAM for scan matching
Taylor et al. Multi‐modal sensor calibration using a gradient orientation measure
EP3818741A1 (en) Method, apparatus and computer program for performing three dimensional radio model construction
CN112464812B (en) Vehicle-based concave obstacle detection method
Zhang et al. Building a partial 3D line-based map using a monocular SLAM
WO2019136613A1 (en) Indoor locating method and device for robot
CN111862214A (en) Computer equipment positioning method and device, computer equipment and storage medium
CN111080784A (en) Ground three-dimensional reconstruction method and device based on ground image texture
Nedevschi Online cross-calibration of camera and lidar
CN116499453A (en) Electronic map generation method and device, mobile robot and storage medium
CN110736456A (en) Two-dimensional laser real-time positioning method based on feature extraction in sparse environment
Shacklock et al. Visual guidance for autonomous vehicles: capability and challenges
Chenchen et al. A camera calibration method for obstacle distance measurement based on monocular vision
KR20200142315A (en) Method and apparatus of updating road network
CN115272482A (en) Camera external reference calibration method and storage medium
CN112598736A (en) Map construction based visual positioning method and device
Ortega et al. Calibrating an outdoor distributed camera network using laser range finder data
Spero et al. 3D vision for large-scale outdoor environments
CN116597001A (en) Indoor top boundary position detection method, device, robot and storage medium
CN115077467B (en) Cleaning robot posture estimation method and device and cleaning robot
Amarasinghe et al. Integrated laser-camera sensor for the detection and localization of landmarks for robotic applications
Wang et al. P2O-Calib: Camera-LiDAR Calibration Using Point-Pair Spatial Occlusion Relationship

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination