WO2019242516A1 - Method for establishing an indoor 3D map, and drone - Google Patents

Method for establishing an indoor 3D map, and drone

Info

Publication number
WO2019242516A1
Authority
WO
WIPO (PCT)
Prior art keywords
scanning
sub
point
target
path
Prior art date
Application number
PCT/CN2019/090503
Other languages
English (en)
French (fr)
Inventor
李选富
何庭波
许占
胡慧
陈海
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2019242516A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/20: Instruments for performing navigational calculations
    • G01C 21/206: Instruments for performing navigational calculations specially adapted for indoor navigation
    • G01C 21/38: Electronic maps specially adapted for navigation; Updating thereof
    • G01C 21/3804: Creation or updating of map data
    • G01C 21/3807: Creation or updating of map data characterised by the type of data
    • G01C 21/383: Indoor data
    • G01C 21/3833: Creation or updating of map data characterised by the source of data
    • G01C 21/3852: Data derived from aerial or satellite images
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00: Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot

Definitions

  • the present application relates to the field of drone technology, and more particularly, to a method and a drone for establishing an indoor 3D map.
  • indoor drones have been used to some extent. Indoor drones can move freely in indoor spaces and can perform a variety of tasks that ground robots cannot. Before an indoor drone works indoors, it is necessary to perform a complete scan of the room to generate a 3D map of the room (the 3D map can represent the spatial structure of the room), and then work based on the 3D map.
  • the traditional solution is to scan the room according to preset rules and build a 3D map from the 3D point cloud obtained by the scan. For example, a drone is controlled to fly along obstacles while scanning the room; when the scan is complete, a 3D map is built from the scanned 3D point cloud. Such rule-based flight may leave parts of the room unscanned, so the resulting 3D map can be incomplete.
  • the present application provides a method and an unmanned aerial vehicle for establishing an indoor 3D map, so as to realize a complete scan of the indoor, thereby constructing a more accurate 3D map.
  • a method for establishing an indoor 3D map is provided.
  • the method is used to scan each sub-region of a target room.
  • the method specifically includes: determining at least one sub-region to be scanned in the target room; and determining an optimal scanning point of each sub-region in the at least one sub-region to be scanned, where, when each sub-region is scanned at its optimal scanning point, the scanning degree of that sub-region reaches a first preset scanning degree.
  • the above method may be performed by a drone, and the above target room may be a room where 3D mapping is required.
  • the starting point of the target scanning path is the current scanning point, and the ending point of the target scanning path is located in one of the at least one sub-region to be scanned.
  • any two sub-regions in the at least one sub-region to be scanned are not connected to each other.
  • the 3D point cloud of the target room obtained during the scanning process specifically includes the 3D coordinate information of the points that have been scanned in the target room.
  • the 3D map of the target room may refer to a grid map that records which cells of the target room are occupied by objects.
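The overall procedure can be pictured with a toy sketch: repeatedly pick an unscanned cell, "scan" its neighbourhood, and stop when nothing is left. The 2D grid, the 3x3 scan footprint, and the choice of the first unscanned cell as a stand-in for the optimal scanning point are all illustrative assumptions, not the patent's method.

```python
# Toy sketch of the scan loop: find unscanned cells, move to one, scan, repeat.
# Grid cells: 0 = unscanned, 1 = scanned.

def scan_room(grid):
    """Scan until no unscanned cell remains; return the number of scan stops."""
    stops = 0
    while True:
        unscanned = [(r, c) for r, row in enumerate(grid)
                     for c, v in enumerate(row) if v == 0]
        if not unscanned:
            return stops
        r, c = unscanned[0]            # stand-in for an "optimal scanning point"
        # toy scan: mark the 3x3 neighbourhood around the scan point as scanned
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < len(grid) and 0 <= cc < len(grid[0]):
                    grid[rr][cc] = 1
        stops += 1
```

A 4x4 room is fully covered after a few stops; a real implementation would replace the toy footprint with the sensor's actual field of view.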
  • the above method further includes: determining the sub-area with the largest area among the at least one sub-area to be scanned as the target sub-area;
  • determining the target scanning path according to the current scanning point and the optimal scanning point of the target sub-region, wherein the starting point of the target scanning path is the current scanning point and the ending point is the optimal scanning point of the target sub-region.
  • in this way, the largest area is scanned as early as possible, which improves scanning efficiency to a certain extent.
  • determining the target scanning path according to the current scanning point and the optimal scanning point of the target sub-region includes: determining multiple scanning paths whose starting point is the current scanning point and whose ending point is the optimal scanning point of the target sub-region; and determining the scanning path with the shortest length among the multiple scanning paths as the target scanning path.
  • in other words, among the multiple scanning paths that start at the current scanning point and end at the optimal scanning point of the target sub-region, the shortest path is determined as the target scanning path.
  • the moving distance can be reduced during scanning, the scanning time can be reduced to a certain extent, and the scanning efficiency can be improved.
  • the shortest scanning path among the multiple scanning paths may be the straight-line path from the current scanning point to the optimal scanning point of the target sub-region.
  • when an obstacle exists between the current scanning point and the optimal scanning point of the target sub-region, the straight-line path cannot reach that optimal scanning point. In that case, the target scanning path is the shortest path that starts from the current scanning point, bypasses the obstacle, and reaches the optimal scanning point of the target sub-region.
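One common way to compute such an obstacle-bypassing shortest path on a grid map is breadth-first search; the patent does not name a specific algorithm, so this is only an illustrative sketch.

```python
from collections import deque

def shortest_path_length(grid, start, goal):
    """BFS on an occupancy grid (1 = obstacle). Returns step count, or -1."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1   # goal unreachable
```

With a wall between start and goal, BFS returns the detour length rather than the straight-line distance.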
  • determining the target scanning path according to the current scanning point and the optimal scanning point of the target sub-region includes: determining multiple scanning paths whose starting point is the current scanning point and whose ending point is the optimal scanning point of the target sub-region; determining the path cost of each scanning path according to the length of that path and the area that can be scanned when moving along it through the optimal scanning points of other sub-regions to be scanned; and determining the path with the smallest path cost among the multiple paths as the target scanning path.
  • here, the other sub-regions to be scanned are the sub-regions other than the target sub-region among the at least one sub-region to be scanned.
  • specifically, the Bellman-Ford (BF) algorithm, which supports negative-weight edges, integer programming, or heuristic algorithms such as genetic algorithms (GA) can be used to select the least expensive path from the multiple paths.
  • the path cost of each path is obtained according to the following formula:
  • C = αd - βS (1)
  • where C represents the path cost of the path, the value of d is proportional to the path length, the value of S is proportional to the area of the other sub-regions to be scanned that can be scanned when passing through their optimal scanning points, and α and β are weighting coefficients.
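The cost rule can be sketched directly: a cost that grows with the path length d and shrinks with the additionally scannable area S, which is one consistent reading of the variable definitions above (the exact formula in the published document may differ). The function names and default weights are hypothetical.

```python
# Sketch of the path-cost rule: longer paths cost more, paths that let the
# drone scan more of the other sub-regions along the way cost less.

def path_cost(d, s, alpha=1.0, beta=1.0):
    """C = alpha*d - beta*S, per the variable definitions in the text."""
    return alpha * d - beta * s

def pick_target_path(paths, alpha=1.0, beta=1.0):
    """paths: list of (length, extra_scannable_area). Returns index of min cost."""
    costs = [path_cost(d, s, alpha, beta) for d, s in paths]
    return costs.index(min(costs))
```

For example, a slightly longer path that covers a large extra area can beat a shorter path that covers none.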
  • the starting point of the target scanning path is the current scanning point, the target scanning path passes through the optimal scanning point of each of the at least one sub-region to be scanned, and the ending point of the target scanning path is the optimal scanning point of one of the at least one sub-region to be scanned.
  • determining a target scanning path according to the current scanning point and the optimal scanning point of each of the at least one sub-region to be scanned includes: determining multiple scanning paths whose starting point is the current scanning point, which pass through the optimal scanning point of each of the at least one sub-region to be scanned, and whose ending point is the optimal scanning point of one of those sub-regions; and determining the scanning path with the shortest length among them as the target scanning path.
  • in other words, the shortest of the multiple candidate paths is determined as the target scanning path, where each candidate path starts at the current scanning point, passes through the optimal scanning point of every sub-region to be scanned, and ends at the optimal scanning point of one of them.
  • by using the shortest such path as the target scanning path, the at least one sub-region to be scanned can be covered while moving as short a distance as possible from the current scanning point, which reduces scanning time to a certain extent and improves scanning efficiency.
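For a small number of sub-regions, the shortest visiting order can be found by brute force over permutations (an open travelling-salesman tour). This is an illustration, not the patent's prescribed method, and it scales only to a handful of scan points.

```python
from itertools import permutations
from math import dist   # Python 3.8+

def shortest_visit_order(current, scan_points):
    """Shortest open tour: start at `current`, visit every scan point once,
    end at whichever point gives the smallest total length."""
    best_order, best_len = None, float("inf")
    for order in permutations(scan_points):
        length = dist(current, order[0]) + sum(
            dist(a, b) for a, b in zip(order, order[1:]))
        if length < best_len:
            best_order, best_len = order, length
    return list(best_order), best_len
```

In practice a heuristic (nearest neighbour, or the cost-based selection above) would replace the factorial search.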
  • determining the at least one sub-area to be scanned of the target room includes: obtaining boundary information of the target room; constructing a bounding box of the target room according to the boundary information; constructing an initial grid map of the target room from the bounding box, where the initial grid map is at least one of a top view, a bottom view, a left view, a right view, a front view, and a rear view of the target room; projecting the boundary information (the scanned 3D point cloud) onto the initial grid map to obtain a target grid map; and determining the area outside the 3D point cloud projection area in the target grid map as the area to be scanned, where the area to be scanned includes the at least one sub-area to be scanned.
  • the initial value of each grid in the initial grid map is an invalid value, which indicates that the area in the initial grid map has not been scanned yet.
  • the above-mentioned boundary information may be a 3D point cloud obtained by scanning.
  • the bounding box is a cuboid corresponding to the target room, whose length, width, and height respectively correspond to the length, width, and height of the target room, and which represents the size of the target space.
  • an initial grid map (also called a raster map) of the room can be constructed according to the bounding box, for example the top-view, bottom-view, left-view, or right-view grid map of the room.
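The projection step can be sketched as follows: quantise the scanned 3D points into top-view cells and treat every uncovered cell as part of the area to be scanned. The 0.1 m cell size follows the example given later in the text; the function and its signature are hypothetical.

```python
CELL = 0.1  # metres per grid cell, per the 0.1 m x 0.1 m example in the text

def area_to_be_scanned(point_cloud, width_m, depth_m, cell=CELL):
    """Project (x, y, z) points onto a top-view grid; return uncovered cells."""
    cols, rows = int(width_m / cell), int(depth_m / cell)
    scanned = {(int(x / cell), int(y / cell)) for x, y, _z in point_cloud}
    return [(i, j) for i in range(cols) for j in range(rows)
            if (i, j) not in scanned]
```

Cells never hit by a projected point form the area to be scanned, mirroring the "+" / "-" marking described for FIG. 6.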
  • determining an optimal scanning point for each subregion in at least one subregion to be scanned includes: determining a geometric center of gravity of a geometry corresponding to a three-dimensional space to which each subregion belongs.
  • the method further includes: determining an exit of the current room according to the real-time 3D point cloud information of the current room; and, when the scanning degree of the current room satisfies a second preset scanning degree, moving to the exit of the current room to scan rooms other than the current room.
  • in a second aspect, a drone is provided, which includes modules for performing the method in the first aspect described above.
  • in a third aspect, a computer-readable storage medium is provided, which stores program code, where the program code includes instructions for executing the method in any implementation manner of the first aspect.
  • the drone can execute the program code in the computer-readable storage medium, and when the drone executes that program code, the drone can perform the method in any one of the implementation manners of the first aspect.
  • FIG. 1 is a schematic flowchart of a method for establishing an indoor 3D map according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of determining an optimal scanning point of a sub-area to be scanned
  • FIG. 3 is a schematic diagram of a bounding box
  • Figure 4 is a schematic diagram of a top-down grid diagram
  • FIG. 5 is a schematic diagram of a top-down grid diagram after extending a boundary
  • FIG. 6 is a grid diagram containing 3D point cloud information
  • FIG. 7 is a schematic diagram of a grid diagram
  • FIG. 8 is a schematic diagram of forming an occlusion region
  • FIG. 9 is a schematic flowchart of scanning a plurality of rooms
  • FIG. 10 is a schematic block diagram of a drone according to an embodiment of the present application.
  • FIG. 11 is a schematic block diagram of an unmanned aerial vehicle according to an embodiment of the present application.
  • the scanning points in this application may also be referred to as observation points or detection points, and the scanning points may be some key points that are passed when scanning indoors.
  • a complete scan of the room is achieved by acquiring these scan points and planning a scan path based on these scan points.
  • FIG. 1 shows a schematic flowchart of a method for establishing an indoor 3D map according to an embodiment of the present application.
  • the method shown in FIG. 1 can be performed by a drone, or other devices with automatic flight and scanning functions.
  • the method shown in FIG. 1 specifically includes steps 110 to 160. Steps 110 to 160 are described in detail below.
  • the above target room may be a room that needs 3D mapping.
  • the scanning degree of each sub-region reaches the first preset scanning degree.
  • the degree of scanning of a certain sub-region can be expressed by the ratio of the scanning area of the sub-region to the total area of the sub-region.
  • the above-mentioned first preset scanning degree may be 60%.
  • for example, if the first preset scanning degree is 60%, the optimal scanning point of the first sub-region is a point such that, when the first sub-region is scanned from it, the ratio of the scanned area of the first sub-region to its total area is greater than or equal to 60%.
  • the at least one sub-region to be scanned may be all sub-regions to be scanned (or referred to as unscanned sub-regions) in the target room.
  • determining the target scanning path according to the current scanning point and the optimal scanning point of each of the at least one sub-region to be scanned includes: determining, as the target scanning path, a path whose starting point is the current scanning point and which passes through at least one of the sub-regions to be scanned in the target room.
  • by moving along such a path, scanning of the area to be scanned can be achieved.
  • the target path may be a scanning path starting from the current scanning point to a certain sub-region in the at least one sub-region to be scanned, or a scanning path starting from the current scanning point and passing through the at least one sub-region to be scanned.
  • the starting point of the target scanning path is the current scanning point, and the ending point is the optimal scanning point of one of the at least one sub-region to be scanned.
  • for example, a path from the current scanning point to the optimal scanning point of the sub-region with the largest area among the at least one sub-region to be scanned can be planned as the target scanning path.
  • the target scanning path is determined according to the current scanning point and the optimal scanning point of the target sub-region, where the starting point of the target scanning path is the current scanning point and the ending point is the optimal scanning point of the target sub-region.
  • in this way, a larger area can be scanned as early as possible during the scanning process, improving scanning efficiency to a certain extent.
  • the shortest available path can be selected as the target scanning path.
  • the specific process is as follows:
  • the scan path with the shortest path among the plurality of scan paths whose starting point is the current scan point and whose end point is the best scan point of the target sub-region is determined as the target scan path.
  • the moving distance can be reduced during scanning, the scanning time can be reduced to a certain extent, and the scanning efficiency can be improved.
  • the length of the shortest scanning path among the multiple scanning paths may be the straight-line distance from the current scanning point to the optimal scanning point of the target sub-region.
  • however, an obstacle between the current scanning point and the optimal scanning point of the target sub-region may mean that no straight path exists between them.
  • in that case, the target scanning path is the shortest path that starts from the current scanning point, bypasses the obstacle, and reaches the optimal scanning point of the target sub-region.
  • when a path whose starting point is the current scanning point and whose ending point is the optimal scanning point of the target sub-region passes through other sub-regions to be scanned, the path cost of each path is determined comprehensively from the path length and the area that can be scanned at the optimal scanning points passed along the way.
  • the path with the smallest path cost among the multiple paths is determined as the target scanning path.
  • the other sub-regions to be scanned refer to the sub-regions other than the target sub-region among the at least one sub-region to be scanned.
  • here, BF stands for Bellman-Ford and GA for genetic algorithm.
  • the path cost of each path can be determined according to formula (1):
  • C = αd - βS (1)
  • where C represents the path cost of the path, the value of d is proportional to the path length, the value of S is proportional to the area of the other sub-regions to be scanned that can be scanned when passing through their optimal scanning points, and α and β are weighting coefficients (α is the weighting coefficient of d, β is the weighting coefficient of S).
  • the starting point of the target scanning path is the current scanning point, and the target scanning path passes through all of the at least one sub-region to be scanned.
  • since there are multiple paths that start at the current scanning point and pass through all of the sub-regions to be scanned, one of them can be selected directly as the target scanning path.
  • the specific process is as follows:
  • the target scanning path determined in the above process (6) starts at the current scanning point and ends at the optimal scanning point of one of the at least one sub-region to be scanned, and it passes through the optimal scanning point of each sub-region in the at least one sub-region to be scanned.
  • among the selectable paths, the path with the shortest length may be selected as the target scanning path.
  • the scanning path with the shortest length among the multiple scanning paths is determined as the target scanning path, where each of the multiple paths starts at the current scanning point, passes through the optimal scanning point of each of the at least one sub-region to be scanned, and ends at the optimal scanning point of one of those sub-regions.
  • by using the shortest scanning path as the target scanning path, the at least one sub-region to be scanned can be covered while moving as short a distance as possible from the current scanning point, which reduces the scanning time to a certain extent and improves scanning efficiency.
  • in step 140, the drone starts from the current scanning point, moves along the target scanning path, and scans the at least one sub-region to be scanned in the target room.
  • after all the sub-areas of the target room have been scanned, the target room as a whole has been scanned. Whether the target room has been fully scanned can be determined according to the scanning degree of the target room.
  • when the scanning degree of the target room reaches the second preset scanning degree, the scanning of the sub-regions of the target room can be considered complete.
  • for example, when the ratio of the scanned area of the target room to its total area reaches 95% or more, the scanning of the target room can be considered complete.
  • the 3D point cloud of the target room obtained during the scanning process specifically includes the points that have been scanned in the target room, their 3D coordinate information, and the category attribute information of the points (semantic information, color and texture information, etc.).
  • the 3D map of the target room may use an occupancy grid representation, a 3D grid map represented by an octree, or a semantic map carrying semantic information.
  • determining an optimal scanning point of each sub-region in the at least one sub-region to be scanned includes: determining the geometric centre of gravity of the geometry corresponding to the three-dimensional space to which each sub-region belongs; determining a reference plane that passes through the geometric centre of gravity and the lowest point of the geometry's upper-surface edge and is perpendicular to the bottom surface of the sub-region; determining a target line segment on the reference plane, where the starting point of the target line segment is the geometric centre of gravity, its length is a preset length, and its angle with the bottom surface of the sub-region is a preset angle; determining the position of the end point of the target line segment as the position of the optimal scanning point of the sub-region; and determining the direction from the end point of the target line segment toward its starting point as the scanning attitude at the optimal scanning point.
  • the geometry corresponding to the three-dimensional space to which each sub-region belongs can be a geometry composed of the three-dimensional space in which each sub-region is located, or the geometry is a geometry composed of a three-dimensional space occupied by an object in each sub-region. It can be regular or irregular.
  • for example, suppose the geometry corresponding to the sub-area to be scanned is an irregular cylinder.
  • the centre of gravity A of the cylinder and the lowest point B on its upper edge define a reference plane along the height direction of the cylinder, and this reference plane is perpendicular to the bottom surface of the cylinder.
  • on the reference plane, a line segment of a preset length can be drawn from point A (based on the optimal observation distance of the drone's sensor, the length can be set to 2 meters or another value).
  • the end point of the line segment gives the position of the optimal scanning point of the sub-area to be scanned, and the angle between the reverse direction of the line segment and the height direction of the cylinder gives the scanning attitude (scanning angle) at that optimal scanning point.
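The construction above can be sketched numerically. Only the general construction follows the text (a segment from the centre of gravity A, lying in the vertical plane through A and the lowest upper-edge point B, at a preset angle to the floor); the upward tilt of the segment and the default 2 m length and 30-degree angle are assumptions for illustration.

```python
from math import cos, sin, radians, hypot

def optimal_scan_point(a, b, length=2.0, angle_deg=30.0):
    """a: geometric centre of gravity, b: lowest point of the upper edge,
    both (x, y, z). Returns (scan_point, attitude_unit_vector)."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    h = hypot(dx, dy) or 1.0            # horizontal direction A -> B
    ux, uy = dx / h, dy / h
    t = radians(angle_deg)
    # target segment: starts at A, in the vertical plane through A and B,
    # at the preset angle to the floor (assumed tilted upward)
    p = (a[0] + length * cos(t) * ux,
         a[1] + length * cos(t) * uy,
         a[2] + length * sin(t))
    # scanning attitude: from the segment end point back toward A
    vx, vy, vz = a[0] - p[0], a[1] - p[1], a[2] - p[2]
    n = hypot(vx, vy, vz)
    return p, (vx / n, vy / n, vz / n)
```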
  • a region to be scanned in the target room may also be determined; a specific process of determining the region to be scanned is as follows:
  • (10) Construct an initial grid map of the target room according to the bounding box, wherein the initial grid map is at least one of a top view, a bottom view, a left view, a right view, a front view, and a rear view of the target room;
  • the area outside the 3D point cloud projection area in the target grid map is determined as the area to be scanned, and the area to be scanned includes at least one sub-area to be scanned.
  • the boundary information may be information such as the approximate length, width, and height of the target room obtained during the initial scan.
  • the above boundary information may be obtained by performing a rough scan around the target room after the drone is initially lifted off.
  • the initial value of each grid in the initial grid map is an invalid value, indicating that the area in the initial grid map has not been scanned.
  • a camera or other detector may be used to roughly detect the interior of the room to obtain the approximate extent of the space, and the indoor bounding box may then be constructed according to that space.
  • the flying height of the drone may be set in a certain proportion according to the height of the room, or may be set in a manner to maintain a certain safety distance from the top of the room.
  • the bounding box may be a cuboid whose length, width, and height correspond to the length, width, and height of the room respectively, and which represents the size of the indoor space.
  • when constructing the initial grid map (or raster map) of the current room based on the bounding box, the lengths and widths of the top-view and bottom-view grid maps are the same as the length and width of the room's bounding box, while the left-view, right-view, front-view, and rear-view grid maps have dimensions corresponding to the length and height, or the width and height, of the indoor bounding box.
  • the top-view grid map constructed from the bounding box is shown in FIG. 4, and this top-view grid map can be used directly as the initial grid map.
  • the length and width of the top-view grid map shown in FIG. 4 are the same as the length and width of the bounding box shown in FIG. 3, respectively.
  • the top-view grid map shown in FIG. 4 also contains many grid cells, which make it easy to distinguish the scanned area from the area to be scanned.
  • for example, the top view can be divided into cells of 0.1 m × 0.1 m (other sizes are also possible) to obtain the top-view grid map shown in FIG. 4, with each cell recording whether its area has been scanned, so that the scanned area and the area to be scanned can be determined.
  • the boundary of the grid map obtained for a certain view may also be extended, and the expanded grid map may be determined as the initial grid map.
  • for example, by expanding the top-view grid map shown in FIG. 4, the boundary-extended top-view grid map shown in FIG. 5 can be obtained.
  • the grid cells on the boundary of the top-view grid map shown in FIG. 5 represent the walls around the room. Some of the boundary cells store the wall height h0, while other boundary cells hold no value because they are occluded or lie beyond the detection range of the drone's sensors.
  • when the initial grid map is the top-view grid map shown in FIG. 5, the currently acquired 3D point cloud information can be projected into it to obtain the grid map shown in FIG. 6.
  • in FIG. 6, the cells marked "+" represent the area onto which the 3D point cloud has been projected, and the cells marked "-" represent the area with no projected 3D points. The area on the right side of FIG. 6 is therefore the area to be scanned; the cells containing h mark the boundary between the area to be scanned and the scanned area, and the value of h represents the height at that boundary.
  • the height values at different points of the boundary between the area to be scanned and the scanned area shown in FIG. 6 may differ; h in the figure is only a symbol representing the height at the boundary and does not represent an actual height value.
  • the area to be scanned shown in FIG. 6 is one connected area; in this case the area to be scanned consists of a single sub-area.
  • in other cases the area to be scanned may include two independent areas, in which case it consists of two sub-areas (sub-area 1 to be scanned and sub-area 2 to be scanned).
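Splitting the area to be scanned into independent sub-areas amounts to finding connected components among the unscanned cells; a flood-fill sketch (4-connectivity is an assumption, as is the function name):

```python
def unscanned_subareas(grid):
    """grid: 2D list, 0 = unscanned. Returns a list of cell-sets, one per
    connected unscanned sub-area (4-connected flood fill)."""
    rows, cols = len(grid), len(grid[0])
    seen, areas = set(), []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0 and (r, c) not in seen:
                stack, comp = [(r, c)], set()
                seen.add((r, c))
                while stack:
                    cr, cc = stack.pop()
                    comp.add((cr, cc))
                    for nr, nc in ((cr + 1, cc), (cr - 1, cc),
                                   (cr, cc + 1), (cr, cc - 1)):
                        if 0 <= nr < rows and 0 <= nc < cols \
                                and grid[nr][nc] == 0 and (nr, nc) not in seen:
                            seen.add((nr, nc))
                            stack.append((nr, nc))
                areas.append(comp)
    return areas
```

Two unscanned regions separated by a scanned strip come out as two components, matching sub-area 1 and sub-area 2 above.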
  • occlusion areas may be generated during the scanning process, that is, there may be occlusion areas in the area to be scanned.
  • boundary line 1 and boundary line 2 are continuous inside the image, but their positions in the 3D coordinate system form a visually occluded area.
  • the occlusion area formed in this way is also included in the range of the sub-areas to be scanned; in the present application, occlusion areas are taken into account when planning the path according to the area to be scanned.
  • FIG. 9 is a schematic flowchart of scanning a plurality of rooms.
  • the process shown in Figure 9 can be performed by a drone or other device with automatic flight and scanning capabilities.
  • the process shown in FIG. 9 specifically includes steps 2001 to 2006. Steps 2001 to 2006 are described in detail below.
  • the exit information of the current room may be determined from the scan information obtained while scanning the current room (i.e., determined during the scan), or it may be determined after the scan of the current room is completed.
  • the above exit information may include the size of the exit (door, stairway, etc.) of the current room, and the position of the exit in the current room, and so on.
  • an image of the current room and a 3D point cloud obtained by scanning may be obtained, and then the exit information of the current room may be determined based on the image of the current room and the 3D point cloud.
  • the indoor exit can generally include doors, windows, stairs, etc.
  • for example, pictures annotated with the positions of doors and stairways can be collected to train a neural network; after training, a photo containing a door or stairway is input, and the network outputs where the door or stairway is located in the image.
  • the position of the exit identified by the neural network is a 2D position.
  • the 3D information of the door can then be estimated from the point cloud data belonging to the door using a neural network algorithm (for example, a Frustum-PointNet network).
  • step 2004 is performed.
  • if the scanning degree of the current room reaches the second preset scanning degree, the scan of the current room may be considered finished; if the scanning degree of the current room does not reach the second preset scanning degree (less than 90%, that is, more than 10% of the area is unscanned), the scan of the current room may be considered unfinished.
  • the above-mentioned second preset scanning degree can be set according to actual conditions, for example, it can be set to 90%, 95%, and so on.
  • the second preset scanning degree may be the scanning completeness, such as the proportion of the scanned space to the entire space, or the scanning fineness, that is, how richly the scan captures the details of the scanned space; this is not limited.
  • step 2005 is performed; if there are no exits in the current room, then step 2006 is performed.
  • the state of the exit of each room can be recorded, and the initial state of each exit is not passed. If a certain exit is passed, the state of the exit is recorded as passed.
  • in step 2004, it may also first be determined whether the current room has an exit that has not been passed through. If the current room has such an exit, scanning continues by entering another unscanned room through it; if the current room has no such exit, return to the previous room. After returning to the previous room, determine whether the previous room has an exit that has not been passed through. If it does, enter another unscanned room through that exit and scan it; otherwise keep returning to the previous room until the initially scanned room is reached. If the initially scanned room has an exit that has not been passed through, continue scanning the remaining unscanned rooms through that exit; if the initially scanned room has no such exit, end the scan.
  • after performing step 2005, perform step 2002 again until all unscanned rooms are scanned.
  • the scanning process when scanning other unscanned rooms is the same as the scanning process of the target room in the method shown in FIG. 1, and is not repeated here.
  • a 3D map of each room can be constructed according to the 3D point cloud of each room.
  • the drones shown in FIG. 10 and FIG. 11 can perform each step of the method for establishing an indoor 3D map in the embodiments of the present application; specifically, they can perform each step in FIG. 1 and FIG. 9. To avoid unnecessary repetition, repeated descriptions are appropriately omitted when introducing the drone of the embodiments of the present application.
  • FIG. 10 is a schematic block diagram of a drone according to an embodiment of the present application.
  • the drone 3000 shown in FIG. 10 includes:
  • the to-be-scanned sub-area determining module 3001 is further configured to determine an optimal scan point of each of the at least one to-be-scanned sub-area, where when each sub-area is scanned at its optimal scan point, the scanning degree of the sub-area reaches a first preset scanning degree
  • a path planning module 3002 configured to determine a target scanning path according to the current scanning point and an optimal scanning point of each of the at least one subregion to be scanned;
  • a movement and scanning module 3003, configured to start at the current scanning point and scan at least one sub-area to be scanned of the target room according to the target scanning path;
  • a mapping module 3004 is configured to establish a 3D map of the target room according to the 3D point cloud of the target room obtained through the scanning when all the sub-areas of the target room have completed the scanning.
  • FIG. 11 is a schematic block diagram of an unmanned aerial vehicle according to an embodiment of the present application.
  • the drone 4000 shown in FIG. 11 includes a detection unit 4001, a flying unit 4002, and a control and processing unit 4003.
  • control and processing unit 4003 is equivalent to the above-mentioned to-be-scanned sub-area determining module 3001, path planning module 3002, and mapping module 3004;
  • detection unit 4001 is equivalent to the scanning module in the motion and scanning module 3003, and the flight unit 4002 is equivalent to the motion module in the motion and scanning module 3003;
  • each module in the drone 4000 can also implement the functions of each module in the drone 3000.
  • control and processing unit 4003 may correspond to a processor inside the drone, and the processor inside the drone may specifically be a central processing unit (CPU) or an artificial intelligence chip.
  • the detection unit 4001 may correspond to a camera of a drone and other detectors.
  • the flying unit 4002 may correspond to a motor, a propeller, and other components used to drive the drone to fly.
  • the disclosed methods and devices may be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of units or modules in the device is only a logical function division; in actual implementation there may be another division manner, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • each unit or module in the apparatus of the present application may be integrated into one processing unit, or each unit may exist separately physically, or two or more units may be integrated into one unit.
  • if the method in the present application is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the existing technology, or a part of the technical solution, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage media include: USB flash drives, removable hard disks, read-only memories (ROM), random access memories (RAM), magnetic disks, optical discs, and other media that can store program code.

Abstract

A method for establishing an indoor 3D map, and a drone. The method for establishing an indoor 3D map includes: determining at least one sub-area of a target room to be scanned (110); determining an optimal scan point of each of the at least one sub-area to be scanned (120); determining a target scan path according to the current scan point and the optimal scan point of each of the at least one sub-area to be scanned (130); scanning the at least one sub-area of the target room along the target scan path (140); and, when all sub-areas of the target room have been scanned, establishing a 3D map of the target room according to the 3D point cloud of the target room obtained by scanning (160). Because the method plans the scan path in real time according to the areas still to be scanned, it can scan the interior completely and thus build a more accurate 3D map.

Description

Method for establishing an indoor 3D map, and drone
This application claims priority to Chinese Patent Application No. 201810648788.5, filed with the China National Intellectual Property Administration on June 22, 2018 and entitled "Method for establishing an indoor 3D map, and drone", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of drone technology, and more specifically, to a method for establishing an indoor 3D map and to a drone.
Background
With the rapid development of drone technology, indoor drones have found a number of applications. An indoor drone can move freely within an indoor space and can perform various tasks that ground robots cannot. Before an indoor drone works indoors, it must first scan the interior completely to generate an indoor 3D map (the 3D map can represent the spatial structure of the interior) and then work on the basis of that 3D map.
Conventional solutions generally scan the room according to pre-set rules and establish the 3D map from the 3D point cloud obtained by scanning. For example, the drone is controlled to fly along obstacles and scan the interior while flying, and when the scan is complete the 3D map is established from the scanned 3D point cloud.
When the interior is scanned directly according to pre-set rules as in conventional solutions, the scan may be incomplete, so the finally established 3D map is not accurate enough.
Summary
This application provides a method for establishing an indoor 3D map and a drone, so as to scan the interior completely and thereby build a more accurate 3D map.
According to a first aspect, a method for establishing an indoor 3D map is provided. The method scans the sub-areas of a target room and specifically includes: determining at least one sub-area of the target room to be scanned; determining an optimal scan point of each of the at least one sub-area to be scanned; determining a target scan path according to the current scan point and the optimal scan point of each of the at least one sub-area to be scanned; starting from the current scan point and scanning the at least one sub-area of the target room according to the target scan path; and, when all sub-areas of the target room have been scanned, establishing a 3D map of the target room according to the 3D point cloud of the target room obtained by scanning.
When each sub-area is scanned at its optimal scan point, the scanning degree of the sub-area reaches a first preset scanning degree.
The above method may be performed by a drone, and the target room may be a room for which a 3D map needs to be built.
Optionally, the start of the target scan path is the current scan point, and the end of the target scan path lies in any one of the at least one sub-area to be scanned.
Optionally, any two of the at least one sub-area to be scanned are not connected to each other.
It should be understood that the 3D point cloud of the target room obtained during scanning specifically includes the 3D coordinate information already scanned in the target room. The 3D map of the target room may be a grid map of the space occupied by objects in the target room.
In this application, path planning considers only the unscanned areas. The scan paths planned in this way are more targeted, which improves the overall efficiency of scanning the room and, in the end, the quality of the constructed 3D map.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: determining the sub-area with the largest area among the at least one sub-area to be scanned as a target sub-area; and determining a target scan path according to the current scan point and the optimal scan point of each of the at least one sub-area to be scanned includes: determining the target scan path according to the current scan point and the optimal scan point of the target sub-area, where the start of the target scan path is the current scan point and the end is the optimal scan point of the target sub-area.
By scanning the sub-areas with larger areas first, as large an area as possible is covered during the scan, which improves scanning efficiency to a certain extent.
With reference to the first aspect, in some implementations of the first aspect, determining the target scan path according to the current scan point and the optimal scan point of the target sub-area includes: determining multiple scan paths starting at the current scan point and ending at the optimal scan point of the target sub-area; and determining the shortest of the multiple scan paths as the target scan path.
With reference to the first aspect, in some implementations of the first aspect, determining the target scan path according to the current scan point and the optimal scan point of the target sub-area includes: determining, among multiple scan paths starting at the current scan point and ending at the optimal scan point of the target sub-area, the shortest scan path as the target scan path.
Taking the shortest scan path as the target scan path reduces the distance travelled during scanning, which can reduce the scan time and improve scanning efficiency.
Optionally, the shortest of the multiple scan paths may be the straight-line path from the current scan point to the optimal scan point of the target sub-area.
It should be understood that if there is an obstacle between the current scan point and the optimal scan point of the target sub-area, the optimal scan point of the target sub-area cannot be reached along the straight-line path between the two points. In that case, the target scan path is the shortest path that starts at the current scan point, goes around the obstacle, and reaches the optimal scan point of the target sub-area.
With reference to the first aspect, in some implementations of the first aspect, determining the target scan path according to the current scan point and the optimal scan point of the target sub-area includes: determining multiple scan paths starting at the current scan point and ending at the optimal scan point of the target sub-area; determining a path cost of each of the multiple scan paths according to the length of each scan path and the area scanned when passing through the optimal scan points of other sub-areas to be scanned along each scan path; and determining the path with the smallest path cost among the multiple paths as the target scan path.
With reference to the first aspect, in some implementations of the first aspect, determining the target scan path according to the current scan point and the optimal scan point of the target sub-area includes: among multiple scan paths starting at the current scan point and ending at the optimal scan point of the target sub-area, determining a path cost of each scan path according to the length of each scan path and the area scanned when passing through the optimal scan points of other sub-areas to be scanned along each scan path, where the other sub-areas to be scanned are the sub-areas among the at least one sub-area to be scanned other than the target sub-area; and determining the path with the smallest path cost among the multiple paths as the target scan path.
The other sub-areas to be scanned are the sub-areas among the at least one sub-area to be scanned other than the target sub-area.
When determining the target scan path, the path with the smallest path cost can be selected from the candidate paths using the Bellman-Ford (BF) algorithm, which supports negative edge weights, using integer programming, or using other heuristic algorithms such as a genetic algorithm (GA).
By combining the length of each path with the area of the other sub-areas to be scanned that can be scanned along that path, a more reasonable path can be selected from the multiple paths as the target scan path, improving scanning efficiency.
With reference to the first aspect, in some implementations of the first aspect, the scan cost of each path is obtained according to the following formula:
C=α·d-β·S
where C is the path cost of a path, d is proportional to the path length, S is proportional to the area of the other sub-areas to be scanned that can be scanned when passing through their optimal scan points, and α and β are weight coefficients.
With reference to the first aspect, in some implementations of the first aspect, the start of the target scan path is the current scan point, the target scan path passes through the optimal scan point of each of the at least one sub-area to be scanned, and the end of the target scan path is the optimal scan point of one of the at least one sub-area to be scanned.
With reference to the first aspect, in some implementations of the first aspect, determining a target scan path according to the current scan point and the optimal scan point of each of the at least one sub-area to be scanned includes: determining multiple scan paths that start at the current scan point, pass through the optimal scan point of each of the at least one sub-area to be scanned, and end at the optimal scan point of one of the at least one sub-area to be scanned; and determining the shortest of the multiple scan paths as the target scan path.
With reference to the first aspect, in some implementations of the first aspect, determining a target scan path according to the current scan point and the optimal scan point of each of the at least one sub-area to be scanned includes: determining the shortest of multiple scan paths as the target scan path, where each of the multiple scan paths starts at the current scan point, passes through the optimal scan point of each of the at least one sub-area to be scanned, and ends at the optimal scan point of one of the at least one sub-area to be scanned.
Taking the shortest of the above multiple scan paths as the target scan path makes it possible to scan the at least one sub-area to be scanned, starting from the current scan point, while travelling as little as possible, which can reduce the scan time and improve scanning efficiency.
With reference to the first aspect, in some implementations of the first aspect, determining at least one sub-area of the target room to be scanned includes: obtaining boundary information of the target room; constructing a bounding box of the target room according to the boundary information; constructing an initial grid map of the target room according to the bounding box, where the initial grid map is at least one of a top view, a bottom view, a left view, a right view, a front view, and a rear view of the target room; projecting the boundary information onto the initial grid map to obtain a target grid map; and determining the region of the target grid map outside the 3D point-cloud projection region as the region to be scanned, the region to be scanned including the at least one sub-area to be scanned.
It should be understood that the initial value of every cell in the initial grid map is an invalid value, indicating that the region in the initial grid map has not yet been scanned.
The above boundary information may be a 3D point cloud already obtained by scanning.
Optionally, the bounding box is a cuboid corresponding to the target room; the length, width, and height of the cuboid correspond to the length, width, and height of the target room respectively and represent the size of the target space.
It should be understood that when constructing the initial grid map (also called a raster map) of the current room from the bounding box, top-view, bottom-view, left-view, right-view, front-view, and rear-view grid maps of the room may be constructed from the bounding box, and one of them selected as the initial grid map. Alternatively, the grid map of only one view may be constructed and used as the initial grid map.
With reference to the first aspect, in some implementations of the first aspect, determining an optimal scan point of each of the at least one sub-area to be scanned includes: determining the geometric centroid of the geometric body corresponding to the three-dimensional space to which each sub-area belongs; determining a reference plane that passes through the geometric centroid and the lowest point of the upper-surface edge of the geometric body and is perpendicular to the bottom surface of each sub-area; determining a target line segment in the reference plane, where the start of the target line segment is the geometric centroid, the length of the target line segment is a preset length, and the angle between the target line segment and the bottom surface of each sub-area is a preset angle; determining the position of the end point of the target line segment as the position of the optimal scan point of each sub-area; and determining the direction from the end point of the target line segment toward the start of the target line segment as the scanning attitude at the optimal scan point.
Optionally, the method further includes: determining the exit of the current room according to the real-time 3D point-cloud information of the current room; and, when the scanning degree of the current room reaches a second preset scanning degree, moving to the exit of the current room and scanning the other rooms beyond the exit.
By identifying the exits of the current room during scanning, multiple rooms can be scanned continuously, which improves the scanning result.
According to a second aspect, a drone is provided; the drone includes modules for performing the method of the first aspect.
According to a third aspect, a computer-readable storage medium is provided; the computer-readable storage medium stores program code, and the program code includes instructions for performing the method of any one of the implementations of the first aspect.
When the computer-readable storage medium is provided inside a drone, the drone can execute the program code in the computer-readable storage medium; when it does so, the drone can perform the method of any one of the implementations of the first aspect.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a method for establishing an indoor 3D map according to an embodiment of the present application;
FIG. 2 is a schematic diagram of determining the optimal scan point of a sub-area to be scanned;
FIG. 3 is a schematic diagram of a bounding box;
FIG. 4 is a schematic diagram of a top-view grid map;
FIG. 5 is a schematic diagram of a top-view grid map with extended boundaries;
FIG. 6 is a grid map containing 3D point-cloud information;
FIG. 7 is a schematic diagram of a grid map;
FIG. 8 is a schematic diagram of how an occlusion area is formed;
FIG. 9 is a schematic flowchart of scanning multiple rooms;
FIG. 10 is a schematic block diagram of a drone according to an embodiment of the present application;
FIG. 11 is a schematic block diagram of a drone according to an embodiment of the present application.
Detailed Description
The technical solutions of the present application are described below with reference to the accompanying drawings.
A scan point in the present application may also be called an observation point or a detection point; scan points are key points passed through when scanning the interior. In the present application, these scan points are obtained and a scan path is planned according to them, so that the interior is scanned completely.
FIG. 1 shows a schematic flowchart of a method for establishing an indoor 3D map according to an embodiment of the present application. The method shown in FIG. 1 may be performed by a drone or by another device capable of automatic flight and scanning. The method specifically includes steps 110 to 160, which are described in detail below.
110. Determine at least one sub-area of the target room to be scanned.
The target room may be a room for which a 3D map needs to be built.
120. Determine the optimal scan point of each of the at least one sub-area to be scanned.
The optimal scan point of a sub-area is a point from which scanning the sub-area makes its scanning degree reach a first preset scanning degree.
The scanning degree of a sub-area can be expressed as the ratio of the scanned area of the sub-area to its total area. For example, if the first preset scanning degree is 60%, then, taking the first sub-area as an example, its optimal scan point is a point from which the scanned area of the first sub-area is greater than or equal to 60% of its total area.
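The notion of "scanning degree" above can be made concrete with a small sketch. This is an illustrative reading, assuming a sub-area is tracked as a flat list of grid cells flagged scanned/unscanned; the 60% threshold mirrors the example in the text:

```python
def scanning_degree(cells):
    """Fraction of scanned cells in a sub-area, in [0.0, 1.0].

    `cells` is assumed to be a flat list of booleans, one per grid cell
    of the sub-area: True = already scanned, False = not yet scanned.
    """
    if not cells:
        return 0.0
    return sum(1 for c in cells if c) / len(cells)


def reaches_first_preset_degree(cells_after_scan, threshold=0.6):
    """A candidate point qualifies as the optimal scan point of a sub-area
    when scanning from it brings the scanning degree to at least the
    first preset scanning degree (60% in the example above)."""
    return scanning_degree(cells_after_scan) >= threshold
```

With 6 of 10 cells covered the degree is exactly 0.6, so the 60% criterion is just met.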
130. Determine a target scan path according to the current scan point and the optimal scan point of each of the at least one sub-area to be scanned.
The at least one sub-area to be scanned may be all of the sub-areas of the target room still to be scanned (also called unscanned sub-areas).
Optionally, determining a target scan path according to the current scan point and the optimal scan point of each of the at least one sub-area to be scanned includes: determining as the target scan path a path that starts at the current scan point and passes through the at least one sub-area of the target room to be scanned.
By planning a path that starts at the current scan point and passes through the at least one sub-area of the target room to be scanned, the region to be scanned can be scanned.
Optionally, the target path may either be a scan path from the current scan point to one of the at least one sub-area to be scanned, or a scan path that starts at the current scan point and passes through the optimal scan points of all of the at least one sub-area to be scanned.
The specific process of determining the target scan path in each of these two cases is described in detail below.
Case 1: the start of the target scan path is the current scan point, and the end is the optimal scan point of one of the at least one sub-area to be scanned.
In the first case, a path from the current scan point to the optimal scan point of the largest of the at least one sub-area to be scanned can be planned as the target scan path, as follows:
(1) Determine the sub-area with the largest area among the at least one sub-area to be scanned as the target sub-area;
(2) Determine the target scan path according to the current scan point and the optimal scan point of the target sub-area, where the target scan path starts at the current scan point and ends at the optimal scan point of the target sub-area.
In the present application, by scanning the sub-areas with larger areas first, as large an area as possible is covered during the scan, which improves scanning efficiency to a certain extent.
Further, to keep the path as short as possible during scanning, the shortest available path can be selected as the target scan path, as follows:
(3) Determine, among multiple scan paths starting at the current scan point and ending at the optimal scan point of the target sub-area, the shortest scan path as the target scan path.
In the present application, taking the shortest scan path as the target scan path reduces the distance travelled during scanning, which can reduce the scan time and improve scanning efficiency.
Optionally, the length of the shortest of the multiple scan paths may be the straight-line distance from the current scan point to the optimal scan point of the target sub-area.
Alternatively, an obstacle between the current scan point and the optimal scan point of the target sub-area may mean that no straight-line path exists between the two points. In that case, the target scan path is the shortest path that starts at the current scan point, goes around the obstacle, and reaches the optimal scan point of the target sub-area.
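The "shortest path that goes around the obstacle" can be sketched as a breadth-first search on the top-view grid map. The grid encoding (0 = free, 1 = obstacle) and the 4-connected moves are assumptions for illustration; the text only requires the shortest obstacle-avoiding path:

```python
from collections import deque


def shortest_grid_path(grid, start, goal):
    """BFS shortest path on a grid; grid[r][c] == 1 marks an obstacle.

    Returns the list of (row, col) cells from start to goal, or None
    if the goal is unreachable (e.g. fully walled off).
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk the predecessor chain back
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

On a 3×3 grid with a wall in the middle column, the returned path detours below the wall instead of crossing it.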
In the first case, besides determining the target scan path by the distance of the path, the target scan path can also be determined by jointly considering, for the paths starting at the current scan point and ending at the optimal scan point of the target sub-area, the area scanned when passing through the optimal scan points of other sub-areas to be scanned, as follows:
(4) Among multiple scan paths starting at the current scan point and ending at the optimal scan point of the target sub-area, determine a path cost of each scan path according to the length of each scan path and the area scanned when passing through the optimal scan points of other sub-areas to be scanned along each scan path;
(5) Determine the path with the smallest path cost among the multiple paths as the target scan path.
The other sub-areas to be scanned are the sub-areas among the at least one sub-area to be scanned other than the target sub-area.
When determining the target scan path, the path with the smallest path cost can be selected from the candidate paths using the Bellman-Ford (BF) algorithm, which supports negative edge weights, using integer programming, or using other heuristic algorithms such as a genetic algorithm (GA).
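The mention of Bellman-Ford (rather than, say, Dijkstra) makes sense here because with the cost C=α·d-β·S an edge that passes another sub-area's optimal scan point can carry a negative weight: the area reward β·S may outweigh the length penalty α·d. A minimal sketch, with the graph encoding assumed for illustration:

```python
def bellman_ford(n, edges, source):
    """Single-source shortest distances with possibly negative edge weights.

    n: number of nodes (0..n-1); edges: list of (u, v, w) tuples, where w
    may be negative (e.g. w = alpha * d - beta * S for an edge that sweeps
    extra unscanned area); returns the list of distances from `source`.
    Assumes no negative cycle, which holds if each scan-point reward can
    be collected only once.
    """
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0.0
    for _ in range(n - 1):              # relax every edge n-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist
```

In the three-node example below, the detour through node 2 has total cost -1, which is cheaper than the direct edge of cost 4; a greedy nonnegative-weight algorithm would miss this.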
In the present application, by combining the length of each path with the area of the other sub-areas to be scanned that can be scanned along that path, a more reasonable path can be selected from the multiple paths as the target scan path, improving scanning efficiency.
The path cost of each path can be determined according to formula (1):
C=α·d-β·S   (1)
where C is the path cost of a path, d is proportional to the path length, S is proportional to the area of the other sub-areas to be scanned that can be scanned when passing through their optimal scan points, and α and β are weight coefficients (α is the weight of d, and β is the weight of S).
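Formula (1) can be applied directly to rank candidate paths. The weight values below are illustrative assumptions, not taken from the text:

```python
def path_cost(d, s, alpha=1.0, beta=0.5):
    """Path cost C = alpha*d - beta*S from formula (1): longer paths cost
    more, paths that sweep more of the other sub-areas cost less.
    The default weights are illustrative assumptions."""
    return alpha * d - beta * s


def pick_target_path(candidates, alpha=1.0, beta=0.5):
    """candidates: iterable of (path, d, S) tuples; return the path with
    the smallest cost C."""
    return min(candidates, key=lambda c: path_cost(c[1], c[2], alpha, beta))[0]
```

In the test below, a longer path that sweeps much more unscanned area (S = 10) beats a shorter path that sweeps none, which is exactly the trade-off the weighted cost encodes.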
Case 2: the start of the target scan path is the current scan point, and the target scan path passes through all of the at least one sub-area to be scanned.
In the second case, there are multiple paths that start at the current scan point and pass through all of the at least one sub-area to be scanned, and one of them can be selected directly as the target scan path, as follows:
(6) Determine a target scan path according to the current scan point and the optimal scan point of each of the at least one sub-area to be scanned.
Specifically, the target scan path determined in (6) starts at the current scan point, ends at the optimal scan point of one of the at least one sub-area to be scanned, and passes through the optimal scan point of each of the at least one sub-area to be scanned.
Further, in the second case, the shortest of the multiple candidate paths can be selected as the target scan path, as follows:
(7) Determine the shortest of multiple scan paths as the target scan path, where each of the multiple scan paths starts at the current scan point, passes through the optimal scan point of each of the at least one sub-area to be scanned, and ends at the optimal scan point of one of the at least one sub-area to be scanned.
In the present application, taking the shortest scan path as the target scan path makes it possible to scan the at least one sub-area to be scanned, starting from the current scan point, while travelling as little as possible, which can reduce the scan time and improve scanning efficiency.
140. Starting from the current scan point, scan the at least one sub-area of the target room to be scanned according to the target scan path.
It should be understood that in step 140 the drone starts from the current scan point, moves along the target scan path, and scans the at least one sub-area of the target room to be scanned.
150. Determine whether all sub-areas of the target room have been scanned.
All sub-areas of the target room having been scanned is equivalent to the target room having been scanned, and whether the target room has been scanned can be judged according to the scanning degree of the target room.
Specifically, when the scan of the target room reaches a second preset scanning degree, the sub-areas of the target room can be considered scanned. For example, when the scanned area of the target room reaches 95% or more of the total area of the target room, the scan of the target room can be considered complete.
160. Establish a 3D map of the target room according to the 3D point cloud of the target room obtained by scanning.
It should be understood that the 3D point cloud of the target room obtained during scanning specifically includes the scanned points in the target room, the 3D coordinate information of these points, and the category attribute information of the points (semantic information, color and texture information, etc.). The 3D map of the target room may be represented as an occupancy grid map, as a 3D grid map represented by an octree, as a semantic map with semantic information, and so on.
In the present application, path planning considers only the unscanned areas. The scan paths planned in this way are more targeted, which improves the overall efficiency of scanning the room and, in the end, the quality of the constructed 3D map.
Optionally, as an embodiment, determining an optimal scan point of each of the at least one sub-area to be scanned includes: determining the geometric centroid of the geometric body corresponding to the three-dimensional space to which each sub-area belongs; determining a reference plane that passes through the geometric centroid and the lowest point of the upper-surface edge of the geometric body and is perpendicular to the bottom surface of each sub-area; determining a target line segment in the reference plane, where the start of the target line segment is the geometric centroid, the length of the target line segment is a preset length, and the angle between the target line segment and the bottom surface of each sub-area is a preset angle; determining the position of the end point of the target line segment as the position of the optimal scan point of each sub-area; and determining the direction from the end point of the target line segment toward the start of the target line segment as the scanning attitude at the optimal scan point.
It should be understood that the geometric body corresponding to the three-dimensional space to which each sub-area belongs may be the body formed by the three-dimensional space in which the sub-area lies, or the body formed by the three-dimensional space occupied by the objects in the sub-area; the body may be regular or irregular.
As shown in FIG. 2, the geometric body corresponding to the sub-area to be scanned is an irregular column. The centroid A of the column and the lowest point B of the upper-surface edge of the column define, along the height direction of the column, a reference plane perpendicular to the bottom surface of the column. In that plane, a line segment of preset length (which can be set to 2 meters or another value according to the optimal observation distance of the drone's sensor) can be drawn from point A at a certain angle to the height direction of the column (for example, 30 degrees). The position of the end point of that segment is the position of the optimal scan point of the sub-area, and the angle between the reverse direction of the segment and the height direction of the column is the scanning attitude (scanning angle) at the optimal scan point of the sub-area.
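The FIG. 2 construction can be sketched as follows, assuming the reference plane is spanned by the vertical axis and the horizontal direction from centroid A toward the lowest edge point B, and that the segment is drawn upward from A at the preset angle to the vertical; the coordinate conventions are assumptions for illustration:

```python
import math


def optimal_scan_point(a, b, length=2.0, angle_deg=30.0):
    """Return (scan point position, scan attitude) for one sub-area.

    a: geometric centroid A as (x, y, z); b: lowest point B of the
    upper-surface edge; length: preset segment length (e.g. 2 m, from the
    sensor's best observation distance); angle_deg: preset angle between
    the segment and the vertical (height) direction.
    """
    ax, ay, az = a
    bx, by, bz = b
    hx, hy = bx - ax, by - ay                    # horizontal direction A -> B
    norm = math.hypot(hx, hy)
    hx, hy = hx / norm, hy / norm
    horiz = length * math.sin(math.radians(angle_deg))
    vert = length * math.cos(math.radians(angle_deg))
    p = (ax + horiz * hx, ay + horiz * hy, az + vert)   # end point of segment
    # scanning attitude: unit direction from the segment's end point back to A
    gaze = tuple((ai - pi) / length for ai, pi in zip(a, p))
    return p, gaze
```

For a 2 m segment at 30 degrees to the vertical, the scan point sits 1 m sideways and about 1.73 m above the centroid, looking back down at it.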
It should be understood that before the at least one sub-area to be scanned is scanned, the region to be scanned in the target image can first be determined, as follows:
(8) Obtain boundary information of the target room;
(9) Construct a bounding box of the target room according to the boundary information;
(10) Construct an initial grid map of the target room according to the bounding box, where the initial grid map is at least one of a top view, a bottom view, a left view, a right view, a front view, and a rear view of the target room;
(11) Project the boundary information onto the initial grid map to obtain a target grid map;
(12) Determine the region of the target grid map outside the 3D point-cloud projection region as the region to be scanned, the region to be scanned including the at least one sub-area to be scanned.
The above boundary information may be approximate length, width, and height information of the target room obtained during an initial scan. For example, when the method of the present application is performed by a drone, the boundary information may be obtained by roughly scanning the surroundings of the target room after the drone first takes off.
In addition, the initial value of every cell in the initial grid map is an invalid value, indicating that the region in the initial grid map has not been scanned.
Specifically, after take-off the drone can roughly probe the interior with a camera or other detectors to obtain the approximate interior space, and then construct the interior bounding box according to that space.
In addition, the flight height of the drone can be set as a certain proportion of the room height, or set so as to keep a certain safety distance from the top of the room.
As shown in FIG. 3, the bounding box may be a cuboid whose length, width, and height correspond to the length, width, and height of the interior respectively and represent the size of the interior space.
When constructing the initial grid map (also called a raster map) of the current room from the bounding box, top-view, bottom-view, left-view, right-view, front-view, and rear-view grid maps of the room may be constructed from the bounding box, and one of them selected as the initial grid map. Alternatively, the grid map of only one view may be constructed and used as the initial grid map.
It should be understood that the top-view and bottom-view grid maps constructed from the interior bounding box have the same length and width as the bounding box, whereas the length and width of the left-view, right-view, front-view, and rear-view grid maps correspond to the length and height, or the width and height, of the interior bounding box.
For example, the top-view grid map constructed from the bounding box is shown in FIG. 4, and the top-view grid map of FIG. 4 can be used directly as the initial grid map. It should be understood that the length and width of the top-view grid map of FIG. 4 are the same as the length and width of the bounding box of FIG. 3. In addition, the top-view grid map of FIG. 4 contains many cells, which make it convenient to determine the scanned region and the region to be scanned.
It should be understood that the top view can be divided into cells of size 0.1 m x 0.1 m (other sizes are also possible) to obtain the top-view grid map of FIG. 4; these cells record whether each cell region has been scanned, and thus determine the scanned region and the region to be scanned.
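The 0.1 m x 0.1 m discretisation described above can be sketched as a small grid class; the origin placement and row/column indexing conventions are assumptions:

```python
class TopViewGrid:
    """Top-view grid map of the room: one boolean per 0.1 m x 0.1 m cell,
    recording whether that cell region has been scanned."""

    def __init__(self, length_m, width_m, cell_m=0.1):
        self.cell_m = cell_m
        self.rows = int(round(length_m / cell_m))
        self.cols = int(round(width_m / cell_m))
        self.scanned = [[False] * self.cols for _ in range(self.rows)]

    def mark_scanned(self, x_m, y_m):
        """Mark the cell containing the world point (x_m, y_m) as scanned;
        the grid origin is assumed to sit at (0, 0)."""
        r, c = int(x_m / self.cell_m), int(y_m / self.cell_m)
        if 0 <= r < self.rows and 0 <= c < self.cols:
            self.scanned[r][c] = True

    def coverage(self):
        """Fraction of cells already scanned (the scanning degree)."""
        done = sum(row.count(True) for row in self.scanned)
        return done / (self.rows * self.cols)
```

Marking the same cell twice counts it only once, so the coverage ratio stays consistent as overlapping scans accumulate.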
In addition, when determining the initial grid map, the boundary of the obtained grid map of a given view can also be extended, and the extended grid map used as the initial grid map. For example, extending the top-view grid map of FIG. 4 gives the extended-boundary top-view grid map of FIG. 5. The cells on the boundary of the top-view grid map of FIG. 5 represent the walls around the interior; some boundary cells take the value h0, the height of the wall, while other boundary cells have no value because of occlusion or because they are beyond the detection range of the drone's sensors.
When the initial grid map is the top-view grid map of FIG. 5, the currently acquired 3D point-cloud information can be projected onto the top-view grid map of FIG. 5 to obtain the grid map of FIG. 6. In FIG. 6, the cells marked "+" are regions onto which the 3D point cloud projects, and the cells marked "-" are regions onto which no 3D point cloud projects; the region on the right of FIG. 6 is therefore the region to be scanned. The cells marked h are the boundary between the region to be scanned and the scanned region, and the value of h is the height at that boundary. It should be understood that the heights along this boundary in FIG. 6 may differ somewhat; h in the figure is only a symbol for the height of the boundary, not an actual height. The region to be scanned in FIG. 6 is one complete region, so in this case the region to be scanned consists of a single sub-area.
As shown in FIG. 7, the region to be scanned includes two separate regions; in this case the region to be scanned consists of two sub-areas (sub-area 1 and sub-area 2).
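Splitting the unscanned region of FIG. 6/FIG. 7 into independent sub-areas amounts to connected-component labelling over the "-" cells; 4-connectivity is an assumption for illustration:

```python
def unscanned_subareas(grid):
    """Split the unscanned region into connected sub-areas.

    grid: list of equal-length strings, '+' = point cloud projected
    (scanned), '-' = no projection (to be scanned). Returns a list of
    sub-areas, each a set of (row, col) cells, in row-major discovery order.
    """
    rows, cols = len(grid), len(grid[0])
    seen, subareas = set(), []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '-' and (r, c) not in seen:
                comp, stack = set(), [(r, c)]       # flood-fill one component
                seen.add((r, c))
                while stack:
                    cr, cc = stack.pop()
                    comp.add((cr, cc))
                    for nr, nc in ((cr - 1, cc), (cr + 1, cc),
                                   (cr, cc - 1), (cr, cc + 1)):
                        if 0 <= nr < rows and 0 <= nc < cols \
                                and grid[nr][nc] == '-' and (nr, nc) not in seen:
                            seen.add((nr, nc))
                            stack.append((nr, nc))
                subareas.append(comp)
    return subareas
```

On a grid where a scanned column separates two unscanned strips, the function returns two sub-areas, matching the two-sub-area situation of FIG. 7.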
In addition, while the interior is being scanned, occlusion by objects may create occlusion areas during the scan; that is, the region to be scanned may contain occlusion areas. Occlusion areas are described below.
For example, as shown in FIG. 8, because the piano in the current room occludes part of the scene, scanning the piano may yield two boundary lines, boundary line 1 and boundary line 2. These two boundary lines are continuous inside the image but discontinuous in the 3D coordinate system, and the positions of boundary line 1 and boundary line 2 in the 3D coordinate system form a visual occlusion area. It should be understood that occlusion areas formed by occlusion are also included in the extent of the sub-areas to be scanned, so the present application already takes occlusion areas into account when planning paths according to the region to be scanned.
It should be understood that after the scan of the target room is complete, other unscanned rooms can also be entered through the exits of the target room and scanned, so that multiple rooms are scanned continuously. This is described in detail below with reference to FIG. 9.
FIG. 9 is a schematic flowchart of scanning multiple rooms. The process shown in FIG. 9 may be performed by a drone or by another device capable of automatic flight and scanning, and specifically includes steps 2001 to 2006, which are described in detail below.
2001. Start.
When multiple rooms need to be scanned, scanning can start from any room.
2002. Scan the current room, and determine exit information of the current room according to the scan information of the current room.
It should be understood that the current room here corresponds to the target room in the method shown in FIG. 1; the detailed process of scanning the current room is as described above for the method shown in FIG. 1 and is not repeated here.
Specifically, the exit information of the current room may be determined from scan information acquired while the current room is being scanned (determined as scanning proceeds), or after the scan of the current room is completed.
The above exit information may include the size of the exits (doors, stairways, etc.) of the current room, the positions of the exits in the current room, and so on.
When determining the exit information of the current room, an image of the current room and the 3D point cloud obtained by scanning may be obtained first, and the exit information of the current room then determined according to the image and the 3D point cloud.
After the image of the current room is obtained, object detection can be run on the image (the image can be passed through a neural network or another object-detection algorithm) to find the exits of the current room; once an exit of the current room has been recognized, its 3D position information can be obtained by combining the detection with the 3D point cloud obtained by scanning.
Specifically, indoor exits generally include doors, windows, stairs, and so on. For exit recognition, some images annotated with door and stair positions can be collected to train a neural network; once trained, the network model takes a photo containing a door or stairway as input and outputs where the door or stairway is located in the image. It should be understood that the exit position recognized by the neural network is a 2D position. To obtain the true position of an exit of the current room, the 3D point-cloud information of the current room must also be used: the point cloud is segmented based on the 2D position, and the 3D information of the door is then estimated from the segmented point-cloud data belonging to the door by a neural network algorithm (for example, a Frustum-PointNet network).
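The 2D-to-3D step can be sketched as selecting the point-cloud points whose image projections fall inside the detected 2D door box (the "frustum" of the box) and summarising them with an axis-aligned 3D box. A simple pinhole camera model and the min/max box stand in for the Frustum-PointNet estimate mentioned above; the intrinsics and coordinate conventions are assumptions:

```python
def door_points_in_frustum(points, box_2d, fx, fy, cx, cy):
    """Keep the 3D points whose pixel projections fall inside the 2D box.

    points: (x, y, z) in the camera frame with z > 0;
    box_2d: (u0, v0, u1, v1) from the 2D door detector;
    fx, fy, cx, cy: assumed pinhole intrinsics.
    """
    u0, v0, u1, v1 = box_2d
    selected = []
    for x, y, z in points:
        u, v = fx * x / z + cx, fy * y / z + cy   # pinhole projection
        if u0 <= u <= u1 and v0 <= v <= v1:
            selected.append((x, y, z))
    return selected


def bbox_3d(points):
    """Axis-aligned 3D bounding box (min corner, max corner) of the points."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))
```

A point well outside the detected box projects outside it and is discarded, so only the frustum points contribute to the door's estimated 3D extent.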
2003. Determine whether the current room has been fully scanned.
If the current room has not been fully scanned, return to the previous step and continue with step 2002; if the current room has been fully scanned, perform step 2004.
If the scanning degree of the current room reaches the second preset scanning degree (for example, 90% or more), the scan of the current room can be considered finished; if the scanning degree of the current room does not reach the second preset scanning degree (less than 90%, that is, more than 10% of the area is unscanned), the scan of the current room can be considered unfinished.
The above second preset scanning degree can be set according to actual conditions, for example, 90%, 95%, and so on.
Specifically, the second preset scanning degree may be the scanning completeness, such as the proportion of the scanned space to the entire space, or the scanning fineness, that is, how richly the scan captures the details of the scanned space; this is not limited.
2004. Determine whether any scanned room has an exit that has not been passed through.
If a scanned room has an exit that has not been passed through, perform step 2005; if the current room has no such exit, perform step 2006.
It should be understood that during scanning the state of each room's exits can be recorded; the initial state of each exit is "not passed", and when an exit is passed its state is recorded as "passed".
In step 2004, it may also first be determined whether the current room has an exit that has not been passed through. If the current room has such an exit, scanning continues by entering another unscanned room through that exit; if the current room has no such exit, return to the previous room. After returning to the previous room, determine whether the previous room has an exit that has not been passed through; if it does, enter another unscanned room through that exit and scan it; otherwise continue to return to the previous room, until the initially scanned room is reached. If the initially scanned room has an exit that has not been passed through, continue scanning the remaining unscanned rooms through that exit; if the initially scanned room has no such exit, end the scan.
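The exit bookkeeping and backtracking of steps 2002 to 2006 behave like a depth-first traversal of the rooms, with exits as edges: returning from a recursive call corresponds to going back to the previous room. The adjacency encoding below is an assumption for illustration:

```python
def scan_all_rooms(exits, start):
    """Depth-first scan of all rooms reachable from `start`.

    exits: dict mapping each room to the rooms reachable through its exits.
    Returns the order in which rooms are scanned.
    """
    scanned, order = set(), []

    def visit(room):
        scanned.add(room)
        order.append(room)          # scan the current room (steps 2002/2003)
        for nxt in exits[room]:     # try each not-yet-passed exit (2004/2005)
            if nxt not in scanned:
                visit(nxt)          # returning here = going back to this room
        # no unvisited exit left: fall back to the previous room (step 2006
        # once the start room itself has no unvisited exit)

    visit(start)
    return order
```

With four rooms where A connects to B and C, and C connects onward to D, the traversal scans A, then B, backtracks to A, then C, then D, and ends after backtracking to A.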
2005. Enter another unscanned room through an exit that has not been passed through, and scan that room.
After step 2005 is performed, step 2002 is performed again until all unscanned rooms have been scanned.
The process of scanning other unscanned rooms is the same as the process of scanning the target room in the method shown in FIG. 1 and is not repeated here.
2006. End.
After the scan ends, a 3D map of each room can be constructed according to the 3D point cloud acquired for each room.
The method for establishing an indoor 3D map according to the embodiments of the present application has been described above in detail with reference to FIG. 1 to FIG. 9. The drone of the embodiments of the present application is described below with reference to FIG. 10 and FIG. 11. It should be understood that the drones shown in FIG. 10 and FIG. 11 can perform each step of the method for establishing an indoor 3D map of the embodiments of the present application; specifically, they can perform each step in FIG. 1 and FIG. 9. To avoid unnecessary repetition, repeated descriptions are appropriately omitted when introducing the drone of the embodiments of the present application.
FIG. 10 is a schematic block diagram of a drone according to an embodiment of the present application. The drone 3000 shown in FIG. 10 includes:
a to-be-scanned sub-area determining module 3001, configured to determine at least one sub-area of a target room to be scanned;
the to-be-scanned sub-area determining module 3001 being further configured to determine an optimal scan point of each of the at least one sub-area to be scanned, where when each sub-area is scanned at its optimal scan point, the scanning degree of the sub-area reaches a first preset scanning degree;
a path planning module 3002, configured to determine a target scan path according to the current scan point and the optimal scan point of each of the at least one sub-area to be scanned;
a motion and scanning module 3003, configured to start from the current scan point and scan the at least one sub-area of the target room to be scanned according to the target scan path;
a mapping module 3004, configured to establish, when all sub-areas of the target room have completed the scanning, a 3D map of the target room according to the 3D point cloud of the target room obtained by the scanning.
In the present application, path planning considers only the unscanned areas. The scan paths planned in this way are more targeted, which improves the overall efficiency of scanning the room and, in the end, the quality of the constructed 3D map.
FIG. 11 is a schematic block diagram of a drone according to an embodiment of the present application.
The drone 4000 shown in FIG. 11 includes a detection unit 4001, a flight unit 4002, and a control and processing unit 4003.
The control and processing unit 4003 is equivalent to the above to-be-scanned sub-area determining module 3001, path planning module 3002, and mapping module 3004; the detection unit 4001 is equivalent to the scanning module in the motion and scanning module 3003, and the flight unit 4002 is equivalent to the motion module in the motion and scanning module 3003. The modules of the drone 4000 can also implement the functions of the modules of the drone 3000.
It should be understood that the control and processing unit 4003 may correspond to a processor inside the drone; the processor inside the drone may specifically be a central processing unit (CPU) or an artificial-intelligence chip.
The detection unit 4001 may correspond to the camera of the drone and other detectors. The flight unit 4002 may correspond to the motor, propellers, and other components used to drive the drone to fly.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatus may be implemented in other ways. For example, the apparatus embodiments described above are only illustrative. For example, the division of units or modules in the apparatus is only a logical function division, and in actual implementation there may be another division manner; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
In addition, the units or modules in the apparatus of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the method in the present application is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part that contributes to the existing technology, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage media include: USB flash drives, removable hard disks, read-only memories (ROM), random access memories (RAM), magnetic disks, optical discs, and other media that can store program code.
The foregoing is only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement that a person skilled in the art can readily conceive of within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (18)

  1. A method for establishing an indoor 3D map, characterized in that sub-areas of a target room are scanned, the method comprising:
    determining at least one sub-area of the target room to be scanned;
    determining an optimal scan point of each of the at least one sub-area to be scanned, wherein when each sub-area is scanned at its optimal scan point, the scanning degree of the sub-area reaches a first preset scanning degree;
    determining a target scan path according to the current scan point and the optimal scan point of each of the at least one sub-area to be scanned;
    starting from the current scan point, scanning the at least one sub-area to be scanned according to the target scan path; and
    when all sub-areas of the target room have completed the scanning, establishing a 3D map of the target room according to the 3D point cloud of the target room obtained by scanning.
  2. The method according to claim 1, characterized in that the method further comprises:
    determining the sub-area with the largest area among the at least one sub-area to be scanned as a target sub-area;
    wherein determining a target scan path according to the current scan point and the optimal scan point of each of the at least one sub-area to be scanned comprises:
    determining the target scan path according to the current scan point and the optimal scan point of the target sub-area, wherein the start of the target scan path is the current scan point and the end is the optimal scan point of the target sub-area.
  3. The method according to claim 2, characterized in that determining the target scan path according to the current scan point and the optimal scan point of the target sub-area comprises:
    determining, among multiple scan paths starting at the current scan point and ending at the optimal scan point of the target sub-area, the shortest scan path as the target scan path.
  4. The method according to claim 2, characterized in that determining the target scan path according to the current scan point and the optimal scan point of the target sub-area comprises:
    among multiple scan paths starting at the current scan point and ending at the optimal scan point of the target sub-area, determining a path cost of each scan path according to the length of each scan path and the area scanned when passing through the optimal scan points of other sub-areas to be scanned along each scan path, wherein the other sub-areas to be scanned are the sub-areas among the at least one sub-area to be scanned other than the target sub-area; and
    determining the path with the smallest path cost among the multiple paths as the target scan path.
  5. The method according to claim 4, characterized in that the scan cost of each path is obtained according to the following formula:
    C=α·d-β·S
    wherein C is the path cost of a path, d is proportional to the path length, S is proportional to the area of the other sub-areas to be scanned that can be scanned when passing through their optimal scan points, and α and β are weight coefficients.
  6. The method according to claim 1, characterized in that the start of the target scan path is the current scan point, the target scan path passes through the optimal scan point of each of the at least one sub-area to be scanned, and the end of the target scan path is the optimal scan point of one of the at least one sub-area to be scanned.
  7. The method according to claim 6, characterized in that determining a target scan path according to the current scan point and the optimal scan point of each of the at least one sub-area to be scanned comprises:
    determining the shortest of multiple scan paths as the target scan path, wherein each of the multiple scan paths starts at the current scan point, passes through the optimal scan point of each of the at least one sub-area to be scanned, and ends at the optimal scan point of one of the at least one sub-area to be scanned.
  8. The method according to any one of claims 1-7, characterized in that determining at least one sub-area of the target room to be scanned comprises:
    obtaining boundary information of the target room;
    constructing a bounding box of the target room according to the boundary information;
    constructing an initial grid map of the target room according to the bounding box, wherein the initial grid map is at least one of a top view, a bottom view, a left view, a right view, a front view, and a rear view of the target room;
    projecting the boundary information onto the initial grid map to obtain a target grid map; and
    determining the region of the target grid map outside the 3D point-cloud projection region as the region to be scanned, the region to be scanned comprising the at least one sub-area to be scanned.
  9. The method according to any one of claims 1-8, characterized in that determining an optimal scan point of each of the at least one sub-area to be scanned comprises:
    determining the geometric centroid of the geometric body corresponding to the three-dimensional space to which each sub-area belongs;
    determining a reference plane that passes through the geometric centroid and the lowest point of the upper-surface edge of the geometric body and is perpendicular to the bottom surface of each sub-area;
    determining a target line segment in the reference plane, wherein the start of the target line segment is the geometric centroid, the length of the target line segment is a preset length, and the angle between the target line segment and the bottom surface of each sub-area is a preset angle;
    determining the position of the end point of the target line segment as the position of the optimal scan point of each sub-area; and
    determining the direction from the end point of the target line segment toward the start of the target line segment as the scanning attitude at the optimal scan point.
  10. A drone, characterized by comprising:
    a to-be-scanned sub-area determining module, configured to determine at least one sub-area of a target room to be scanned;
    the to-be-scanned sub-area determining module being further configured to determine an optimal scan point of each of the at least one sub-area to be scanned, wherein when each sub-area is scanned at its optimal scan point, the scanning degree of the sub-area reaches a first preset scanning degree;
    a path planning module, configured to determine a target scan path according to the current scan point and the optimal scan point of each of the at least one sub-area to be scanned;
    a motion and scanning module, configured to start from the current scan point and scan the at least one sub-area to be scanned according to the target scan path; and
    a mapping module, configured to establish, when all sub-areas of the target room have completed the scanning, a 3D map of the target room according to the 3D point cloud of the target room obtained by the scanning.
  11. The drone according to claim 10, characterized in that the to-be-scanned sub-area determining module is further configured to:
    determine the sub-area with the largest area among the at least one sub-area to be scanned as a target sub-area;
    and the path planning module is configured to determine the target scan path according to the current scan point and the optimal scan point of the target sub-area, wherein the start of the target scan path is the current scan point and the end is the optimal scan point of the target sub-area.
  12. The drone according to claim 11, characterized in that the path planning module is configured to:
    determine, among multiple scan paths starting at the current scan point and ending at the optimal scan point of the target sub-area, the shortest scan path as the target scan path.
  13. The drone according to claim 11, characterized in that the path planning module is configured to:
    among multiple scan paths starting at the current scan point and ending at the optimal scan point of the target sub-area, determine a path cost of each scan path according to the length of each scan path and the area scanned when passing through the optimal scan points of the at least one sub-area to be scanned along each scan path; and
    determine the path with the smallest path cost among the multiple paths as the target scan path.
  14. The drone according to claim 13, characterized in that the scan cost of each path is obtained according to the following formula:
    C=α·d-β·S
    wherein C is the path cost of a path, d is proportional to the path length, S is proportional to the area of the other sub-areas to be scanned that can be scanned when passing through their optimal scan points, and α and β are weight coefficients.
  15. The drone according to claim 10, characterized in that the start of the target scan path is the current scan point, the target scan path passes through the optimal scan point of each of the at least one sub-area to be scanned, and the end of the target scan path is the optimal scan point of one of the at least one sub-area to be scanned.
  16. The drone according to claim 15, characterized in that the path planning module is configured to:
    determine the shortest of multiple scan paths as the target scan path, wherein each of the multiple scan paths starts at the current scan point, passes through the optimal scan point of each of the at least one sub-area to be scanned, and ends at the optimal scan point of one of the at least one sub-area to be scanned.
  17. The drone according to any one of claims 10-16, characterized in that the to-be-scanned sub-area determining module is configured to:
    obtain boundary information of the target room;
    construct a bounding box of the target room according to the boundary information;
    construct an initial grid map of the target room according to the bounding box, wherein the initial grid map is at least one of a top view, a bottom view, a left view, a right view, a front view, and a rear view of the target room;
    project the boundary information onto the initial grid map to obtain a target grid map; and
    determine the region of the target grid map outside the 3D point-cloud projection region as the region to be scanned, the region to be scanned comprising the at least one sub-area to be scanned.
  18. The drone according to any one of claims 10-17, characterized in that the to-be-scanned sub-area determining module is configured to:
    determine the geometric centroid of the geometric body corresponding to the three-dimensional space to which each sub-area belongs;
    determine a reference plane that passes through the geometric centroid and the lowest point of the upper-surface edge of the geometric body and is perpendicular to the bottom surface of each sub-area;
    determine a target line segment in the reference plane, wherein the start of the target line segment is the geometric centroid, the length of the target line segment is a preset length, and the angle between the target line segment and the bottom surface of each sub-area is a preset angle;
    determine the position of the end point of the target line segment as the position of the optimal scan point of each sub-area; and
    determine the direction from the end point of the target line segment toward the start of the target line segment as the scanning attitude at the optimal scan point.
PCT/CN2019/090503 2018-06-22 2019-06-10 Method for establishing an indoor 3D map, and drone WO2019242516A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810648788.5 2018-06-22
CN201810648788.5A CN110631581B (zh) 2018-06-22 Method for establishing an indoor 3D map, and drone

Publications (1)

Publication Number Publication Date
WO2019242516A1 true WO2019242516A1 (zh) 2019-12-26

Family

ID=68966451

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/090503 WO2019242516A1 (zh) 2018-06-22 2019-06-10 建立室内3d地图的方法和无人机

Country Status (2)

Country Link
CN (1) CN110631581B (zh)
WO (1) WO2019242516A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111984032A (zh) * 2020-07-24 2020-11-24 武汉智会创新科技有限公司 UAV path planning method and apparatus, electronic device, and storage medium
US11216005B1 (en) * 2020-10-06 2022-01-04 Accenture Global Solutions Limited Generating a point cloud capture plan
CN117499547A (zh) * 2023-12-29 2024-02-02 先临三维科技股份有限公司 Automated three-dimensional scanning method, apparatus, device, and storage medium
CN117553804A (zh) * 2024-01-11 2024-02-13 深圳市普渡科技有限公司 Path planning method and apparatus, computer device, and storage medium

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
CN112506178B (zh) * 2020-08-25 2023-02-28 深圳银星智能集团股份有限公司 Robot control method and apparatus, terminal, and medium
CN113804183B (zh) * 2021-09-17 2023-12-22 广东汇天航空航天科技有限公司 Real-time terrain mapping method and system
CN115415547B (zh) * 2022-11-07 2023-03-24 北京清研智束科技有限公司 Electron beam scanning method, apparatus, device, and medium

Citations (6)

Publication number Priority date Publication date Assignee Title
US20120143372A1 (en) * 2010-12-06 2012-06-07 Samsung Electronics Co., Ltd. Robot and method for planning path of the same
KR20120091937A (ko) * 2011-02-10 2012-08-20 고려대학교 산학협력단 Method for generating a semantic grid map, and semantic-grid-map-based exploration method using the semantic grid map
CN103941750A (zh) * 2014-04-30 2014-07-23 东北大学 Mapping apparatus and method based on a small quadrotor drone
CN104991463A (zh) * 2015-05-21 2015-10-21 北京云迹科技有限公司 Robot semi-autonomous mapping method and system
CN105911988A (zh) * 2016-04-26 2016-08-31 湖南拓视觉信息技术有限公司 Automatic mapping apparatus and method
CN107990876A (zh) * 2017-11-20 2018-05-04 北京科技大学 Drone-based rapid scanning apparatus and method for underground mine goafs

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
KR101803598B1 (ko) * 2014-09-02 2017-12-01 네이버비즈니스플랫폼 주식회사 Apparatus and method for building an indoor map using cloud points
EP3136054B1 (en) * 2015-08-28 2019-11-27 HERE Global B.V. Method, system, and computer program for determining a parametric site model from motion related sensor data
US10274325B2 (en) * 2016-11-01 2019-04-30 Brain Corporation Systems and methods for robotic mapping
CN107862738B (zh) * 2017-11-28 2019-10-11 武汉大学 Indoor structured 3D reconstruction method based on mobile laser-scanning point clouds


Cited By (6)

Publication number Priority date Publication date Assignee Title
CN111984032A (zh) * 2020-07-24 2020-11-24 武汉智会创新科技有限公司 UAV path planning method and apparatus, electronic device, and storage medium
CN111984032B (zh) * 2020-07-24 2024-02-23 武汉智会创新科技有限公司 UAV path planning method and apparatus, electronic device, and storage medium
US11216005B1 (en) * 2020-10-06 2022-01-04 Accenture Global Solutions Limited Generating a point cloud capture plan
CN117499547A (zh) * 2023-12-29 2024-02-02 先临三维科技股份有限公司 Automated three-dimensional scanning method, apparatus, device, and storage medium
CN117553804A (zh) * 2024-01-11 2024-02-13 深圳市普渡科技有限公司 Path planning method and apparatus, computer device, and storage medium
CN117553804B (zh) * 2024-01-11 2024-04-09 深圳市普渡科技有限公司 Path planning method and apparatus, computer device, and storage medium

Also Published As

Publication number Publication date
CN110631581A (zh) 2019-12-31
CN110631581B (zh) 2023-08-04

Similar Documents

Publication Publication Date Title
WO2019242516A1 (zh) Method for establishing an indoor 3D map, and drone
US20240045433A1 (en) Method for Dividing Robot Area Based on Boundaries, Chip and Robot
CN108776492B (zh) Autonomous obstacle avoidance and navigation method for a quadrotor based on a binocular camera
US11971726B2 (en) Method of constructing indoor two-dimensional semantic map with wall corner as critical feature based on robot platform
CN109564690B (zh) Evaluating the dimensions of an enclosed space using a multi-directional camera
WO2020135446A1 (zh) Target positioning method and apparatus, and drone
CN108303972B (zh) Interaction method and apparatus for a mobile robot
JP2020125102A (ja) Method and apparatus for optimized resource allocation during autonomous driving based on reinforcement learning using data from lidar, radar, and camera sensors
KR102577785B1 (ko) Cleaning robot and task execution method thereof
US20200359867A1 (en) Determining Region Attribute
CN111598916A (zh) Method for producing an indoor occupancy grid map based on RGB-D information
US20130107010A1 (en) Surface segmentation from rgb and depth images
CN110801180A (zh) Operating method and apparatus for a cleaning robot
WO2019006760A1 (zh) Attitude recognition method, device, and movable platform
JP7314411B2 (ja) Obstacle information sensing method and apparatus for a mobile robot
WO2021143935A1 (zh) Detection method and apparatus, electronic device, and storage medium
CN111784819B (zh) Multi-floor map stitching method and system, and self-moving robot
US20220309761A1 (en) Target detection method, device, terminal device, and medium
CN110567441B (zh) Particle-filter-based positioning method, positioning apparatus, and mapping and positioning method
JP2020119523A (ja) Method for detecting pseudo-3D bounding boxes and apparatus using the same
JP2015114954A (ja) Captured-image analysis method
JP2020126623A (ja) Learning method and learning device for integrating object detection information acquired through V2V communication from another autonomous vehicle with object detection information generated by the present autonomous vehicle, and testing method and testing device using the same
CN116993817B (zh) Method and apparatus for determining the pose of a target vehicle, computer device, and storage medium
Cao et al. Hierarchical coverage path planning in complex 3d environments
CN109064533A (zh) 3D roaming method and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19822973

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19822973

Country of ref document: EP

Kind code of ref document: A1