WO2022252036A1 - Obstacle information acquisition method and apparatus, movable platform, and storage medium - Google Patents

Obstacle information acquisition method and apparatus, movable platform, and storage medium

Info

Publication number
WO2022252036A1
WO2022252036A1 PCT/CN2021/097339 CN2021097339W WO2022252036A1 WO 2022252036 A1 WO2022252036 A1 WO 2022252036A1 CN 2021097339 W CN2021097339 W CN 2021097339W WO 2022252036 A1 WO2022252036 A1 WO 2022252036A1
Authority
WO
WIPO (PCT)
Prior art keywords
measured
point
distance
space
image
Prior art date
Application number
PCT/CN2021/097339
Other languages
English (en)
French (fr)
Inventor
高飞
汪哲培
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2021/097339
Publication of WO2022252036A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/50 — Depth or shape recovery

Definitions

  • the present application relates to the technical field of environmental detection, and in particular, to a method, device, mobile platform, and storage medium for obtaining obstacle information.
  • When a movable platform (such as an unmanned aerial vehicle or a mobile robot) performs a movement task in a complex environment, it generally relies on a depth sensor, such as a depth camera or a lidar, to obtain the relative distance of obstacles in the environment; this distance, given as the projection of the actual obstacle along the observation direction, is also called depth information.
  • Depth information is the information necessary to ensure maneuvering safety when a movable platform (such as a UAV or a mobile robot) performs a task.
  • In the related art, after the depth sensor collects depth information for each frame, one approach is to maintain only the per-frame depth information and store only the relative pose information between frames, without building a unified three-dimensional map across frames; however, this approach incurs heavy computation and storage consumption, and the excessive information redundancy also reduces the efficiency of subsequent query operations.
  • one of the objectives of the present application is to provide a method, device, mobile platform and storage medium for obtaining obstacle information.
  • In a first aspect, an embodiment of the present application provides a method for obtaining obstacle information, including: acquiring a space point to be measured in a detection environment; obtaining a set of three-dimensional points related to obstacles in the detection environment, the set being determined according to a plurality of feature points, and depth information thereof, in an image collected from the detection environment; determining a first distance between the space point to be measured and the nearest-neighbor three-dimensional point in the set; and obtaining obstacle information of the space in which the space point to be measured is located according to the first distance.
  • In a second aspect, an embodiment of the present application provides an obstacle information acquisition device, including: a memory for storing executable instructions; and one or more processors; wherein, when executing the executable instructions, the one or more processors are individually or jointly configured to perform the steps of the method described in the first aspect.
  • In a third aspect, an embodiment of the present application provides a movable platform, including: a body; a power system, installed in the body, for providing power for the movable platform; a depth sensor for acquiring a depth image of the detection environment; and the obstacle information acquisition device as described in the second aspect.
  • an embodiment of the present application provides a computer-readable storage medium, the computer-readable storage medium stores executable instructions, and when the executable instructions are executed by a processor, the method as described in the first aspect is implemented .
  • According to the method provided by the embodiments of the present application, after a space point to be measured in the detection environment is acquired, a set of three-dimensional points related to obstacles in the detection environment is obtained; the set is determined from a plurality of feature points, and their depth information, in the image collected from the detection environment; a first distance between the space point and the nearest-neighbor three-dimensional point in the set is then determined, and obstacle information of the space in which the space point is located is obtained according to the first distance.
  • Because the three-dimensional points related to obstacles are obtained only from the multiple feature points in the image and their depth information, all of the depth information corresponding to the image does not need to be maintained, which helps reduce computation and save storage resources.
  • Correspondingly, since less depth information is maintained, the nearest-neighbor three-dimensional point of the space point to be measured can be determined from the set more quickly, which helps improve the efficiency of obtaining obstacle information about the space in which the space point is located.
  • Fig. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a method for obtaining obstacle information provided by an embodiment of the present application
  • Fig. 3 is a schematic diagram of performing edge detection or high-frequency extraction on an image provided by an embodiment of the present application
  • Fig. 4 is a schematic diagram of a surface element provided by an embodiment of the present application.
  • Fig. 5, Fig. 6 and Fig. 7 are schematic diagrams of the positional relationship between the space point to be measured and its nearest-neighbor obstacle when the space point to be measured is at different positions within the sensing range of the depth sensor, provided by an embodiment of the present application;
  • Fig. 8 is a schematic structural diagram of an obstacle information acquisition device provided by an embodiment of the present application.
  • In the related art, one approach is to maintain only the per-frame depth information and store only the relative pose information between frames, without building a unified three-dimensional map across frames. When maintaining the depth information of a frame, for example, a three-dimensional point cloud of the obstacles can be obtained from the depth information of that frame and stored in a tree-like data structure, such as a k-d tree or an R-tree built over all three-dimensional points of the frame, together with a nearest-neighbor query interface; that interface is then used to query the obstacle closest to the point to be measured and to determine the distance between them, and this distance provides a reference for subsequent tasks (such as collision detection, path planning, or motion planning).
  • To address the problems in the related art, an embodiment of the present application provides a method for acquiring obstacle information: after a space point to be measured in the detection environment is acquired, a set of three-dimensional points related to obstacles in the detection environment is obtained; the set is determined according to a plurality of feature points in the image collected from the detection environment and their depth information; a first distance between the space point and the nearest-neighbor three-dimensional point in the set is determined; and obstacle information of the space in which the space point is located is obtained according to the first distance.
  • Because the three-dimensional points related to obstacles are obtained only from the multiple feature points and their depth information, all of the depth information corresponding to the image does not need to be maintained, which helps reduce computation and save storage resources; and since less depth information is maintained, the nearest-neighbor three-dimensional point of the space point to be measured can be found more quickly, which helps improve the efficiency of obtaining obstacle information about the space in which the space point is located.
  • the obstacle information acquisition method provided in the embodiment of the present application may be applied to an obstacle information acquisition device.
  • The obstacle information acquisition device may be an electronic device with data processing capabilities; such electronic devices include, but are not limited to, computing devices such as a movable platform, a terminal device, or a server.
  • Examples of the movable platform include, but are not limited to, unmanned aerial vehicles, unmanned vehicles, gimbals, unmanned ships, or mobile robots.
  • Examples of such terminal devices include, but are not limited to: smartphones/cell phones, tablet computers, personal digital assistants (PDAs), laptop computers, desktop computers, media content players, video game stations/systems, virtual reality systems, augmented reality systems, wearable devices (e.g., watches, glasses, gloves, headgear (e.g., hats, helmets, virtual reality headsets, augmented reality headsets, head-mounted devices (HMDs), headbands), pendants, armbands, leg rings, shoes, vests), remote controls, or any other type of device.
  • the apparatus for obtaining obstacle information may be a computer software product integrated in the electronic device, and the computer software product may include an application program capable of executing the method for obtaining obstacle information provided by the embodiment of the present application.
  • Exemplarily, the obstacle information acquisition apparatus may be an electronic device including at least a memory and a processor, and the processor of the electronic device may execute executable instructions, stored in the memory, that embody the obstacle information acquisition method provided by the embodiments of the present application.
  • The obstacle information acquisition device may also be a chip or an integrated circuit with data processing capabilities, including but not limited to, for example, a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or a field-programmable gate array (FPGA).
  • the obstacle information acquisition device can be installed in electronic equipment.
  • the obstacle information acquisition device may be a flight controller in an unmanned aerial vehicle.
  • When the unmanned aerial vehicle 10 performs a flight task in the current environment, obstacles may exist on its flight path, and the UAV can use its on-board depth sensor (not shown in Fig. 1) to collect a depth image within the current sensing range; the depth image contains depth information of the obstacles within that range. Considering that the full depth information corresponding to the depth image contains redundancy, and that the effective depth information usually lies in regions where the depth changes sharply, the UAV can use the method provided in the embodiments of the present application to perform edge detection or high-frequency extraction on the depth image to obtain a plurality of feature points, and then determine a set of three-dimensional points related to obstacles in the depth image based on those feature points and their depth information, without maintaining all of the depth information corresponding to the image, which helps reduce computation and save storage resources.
  • After acquiring a space point to be measured in the current detection environment (for example, a flight path point of the UAV), the UAV can obtain the set of three-dimensional points of the depth image, determine a first distance between the space point and the nearest-neighbor three-dimensional point in the set, and then obtain obstacle information of the space in which the space point is located according to the first distance; the obstacle information can be used to check the safety of that space and, in turn, for path planning or collision detection. Because less depth information is maintained, the nearest-neighbor three-dimensional point can be determined from the set more quickly, which helps improve the efficiency of obtaining the obstacle information.
  • FIG. 2 is a schematic flow chart of a method for obtaining obstacle information provided by the embodiment of the present application.
  • The method may be performed by the obstacle information acquisition apparatus, and the method includes:
  • In step S101, a space point to be measured in the detection environment is acquired.
  • In step S102, a set of three-dimensional points related to obstacles in the detection environment is obtained; the set of three-dimensional points is determined according to a plurality of feature points, and their depth information, in images collected from the detection environment.
  • In step S103, a first distance between the space point to be measured and the nearest-neighbor three-dimensional point in the set of three-dimensional points is determined.
  • In step S104, obstacle information of the space in which the space point to be measured is located is acquired according to the first distance.
  • The movable platform is also equipped with a depth sensor, which is used to collect an image of the obstacles within its current sensing range in the detection environment; the image corresponds to depth information, and the depth information represents the relative distance between the obstacles within the current sensing range and the movable platform, which is the information necessary to ensure maneuvering safety when the movable platform (such as a UAV or a mobile robot) performs a task.
  • the depth sensor includes but not limited to lidar or RGBD camera, etc.
  • the image collected by the depth sensor includes but not limited to grayscale image corresponding to depth information, color image or depth image corresponding to depth information, etc.
  • The apparatus can acquire the image, collected by the depth sensor, that relates to obstacles in the detection environment. Considering that the effective depth information in an image generally lies in regions where the depth changes sharply, and that, for a grayscale image with depth information, a color image with depth information, or a depth image, such regions can be regarded as regions rich in texture information (texture-rich regions), the apparatus can perform edge detection or high-frequency extraction on the image to obtain multiple feature points characterizing the fine structure of the image, and then use those feature points and their depth information to construct a set of three-dimensional points characterizing the texture-rich regions, which helps reduce redundant depth information.
  • The means of performing edge detection or high-frequency extraction on the image can be set according to the actual application scenario, and this embodiment places no restriction on it; as an example, a Sobel operator or a Canny operator can be used for edge detection, and a discrete cosine transform can be used to extract the high-frequency information of the image.
  • the device in FIG. 3 performs edge detection or high-frequency extraction on the image to obtain a plurality of pixels representing the fine structure of the image, that is, the feature points.
  • The depth information corresponding to the plurality of feature points is effective depth information, and its rate of change is greater than or equal to a preset change threshold; the depth information corresponding to the other pixels in the image, apart from the plurality of feature points, is redundant depth information, and its rate of change is smaller than the preset change threshold. That is to say, the depth information at the feature points changes sharply, while the depth information at the other pixels changes relatively gently.
  • the effective depth information is concentrated in the area where the depth information changes drastically. Therefore, this embodiment maintains the 3D point set constructed by using the multiple feature points and their depth information, which is conducive to reducing redundant depth information and saving storage.
  • When the apparatus maintains the set of three-dimensional points of each frame of image collected by the depth sensor, it may store the set of each frame in a tree-like data structure, for example by building a k-d tree or an R-tree over the three-dimensional points of the frame, so that the k-d tree or R-tree corresponding to each frame can later be used for nearest-neighbor search.
  • only the tree-like data structure is used to maintain the three-dimensional point set, and the maintained depth information is reduced, which can effectively improve the efficiency of subsequent nearest neighbor query based on the tree-like data structure.
  • The k-d tree (k-dimensional tree) is a binary tree in which every node is a k-dimensional point; each non-leaf node can be regarded as a hyperplane that divides the space into two half-spaces, the subtree to the left of the node representing the points on the left of the hyperplane and the subtree to the right representing the points on its right, with the hyperplane of each node chosen perpendicular to one of the k dimensions.
  • the core idea of the R-tree is to aggregate nodes with similar distances and represent them as the minimum circumscribed rectangle of these nodes on the upper layer of the tree structure, and this minimum circumscribed rectangle becomes a node of the upper layer; the "R” of the R-tree represents "Rectangle (rectangle)", because all nodes are in their smallest circumscribed rectangle, so a query that does not intersect with a certain rectangle must be disjoint with all nodes in this rectangle, and each rectangle on the leaf node represents For an object, nodes are aggregations of objects, and the higher the level of aggregation, the more objects there are.
  • The k-d tree or R-tree mentioned in this embodiment is only an example of a tree-like data structure and does not limit the choice of structure; for example, an R*-tree, an R+-tree, or a B-tree may also be used to store the set of three-dimensional points of each frame, and other data storage structures may likewise be used, none of which is limited in this embodiment.
  • The obstacle information acquisition method can be used to check the safety of the space in which the space point to be measured is located; the space point may be a path point planned by the movable platform carrying the apparatus during path planning, or a path point during its motion, and the obstacle information may be the information of the obstacle nearest to the space point, so that the movable platform can avoid obstacles during its motion and move safely.
  • After the apparatus obtains the space point to be measured in the detection environment, it can obtain a set of three-dimensional points related to obstacles in the detection environment; the set is determined from the multiple feature points, and their depth information, obtained by performing edge detection or high-frequency extraction on the image collected by the depth sensor. Exemplarily, the set may be maintained by the apparatus using a tree-like data structure, such as a k-d tree or an R-tree built over the set of three-dimensional points of the image; the apparatus can then query the nearest-neighbor three-dimensional point of the space point from the set according to the three-dimensional coordinates of that point.
  • Exemplarily, the apparatus can query the nearest-neighbor three-dimensional point of the space point in the k-d tree or R-tree built over the set; the nearest-neighbor three-dimensional point represents the obstacle in the detection environment, within the sensing range of the depth sensor, that is closest to the space point. The apparatus can then determine the first distance between the space point and that nearest-neighbor three-dimensional point, and obtain obstacle information of the space in which the space point is located according to the first distance; for example, the obstacle information may be the information of the obstacle nearest to the space point.
  • Because only the tree-like data structure is used to maintain the set and the maintained depth information is reduced, the nearest-neighbor three-dimensional point of the space point can be determined from the tree-like data structure more quickly, which helps improve the efficiency of obtaining obstacle information about the space in which the space point is located.
  • The apparatus projects the space point to be measured into the two-dimensional space in which the image lies and obtains the target pixel corresponding to the space point in the image; if the target pixel belongs to one of the plurality of feature points, this indicates that the depth information corresponding to the target pixel has already been used to construct the set of three-dimensional points of the image, and the apparatus can directly obtain the obstacle information of the space in which the space point is located according to the first distance.
  • If the target pixel does not belong to the plurality of feature points, the apparatus may determine a three-dimensional surface of the target pixel according to the other pixels in the image apart from the plurality of feature points, obtain a second distance from the space point to be measured to the three-dimensional surface, and then obtain obstacle information of the space in which the space point is located according to the first distance and the second distance.
  • the three-dimensional surface can approximate the detection environment.
  • When constructing the three-dimensional surface, the apparatus can determine the origin of the surface from the three-dimensional coordinates of the space point corresponding to the target pixel (the origin is denoted by "p" in Fig. 4), determine the normal vector of the surface from the depth information of the target pixel and of its neighboring pixels (denoted by "n" in Fig. 4), and/or determine the size of the surface from the first distance (denoted by "r" in Fig. 4), thereby obtaining the constructed three-dimensional surface.
  • The three-dimensional surface is used to approximately represent a part of the environment, and the redundant depth information other than the effective depth information of the feature points is compressed into the three parameters that make up the surface (origin, radius, and normal vector), which greatly improves the abstraction capability of the environment representation and achieves an accurate representation of the environment with less data.
  • Exemplarily, in the embodiment shown in Fig. 4, the origin of the three-dimensional surface can be represented by three attribute values, which jointly give the coordinate position of the origin in a preset three-dimensional coordinate system; with a fixed height, the normal vector can be represented by two attribute values, the height and the two attribute values jointly giving the position of the normal vector in the preset coordinate system; and if the surface is circular, its size can be represented by one attribute value. The redundant depth information other than the effective depth information of the feature points is thus compressed into six attribute values forming the three-dimensional surface, achieving an accurate representation of the environment with less data.
  • After acquiring the three-dimensional surface of the target pixel, the apparatus can compute the shortest distance from the space point to be measured to the surface, take it as the second distance, and then obtain obstacle information of the space in which the space point is located according to the smaller of the first distance and the second distance.
  • The first distance is compared with the second distance, and the smaller of the two is used to obtain the obstacle information, i.e., the safety range of the space point is determined by the smaller of the two distances; this effectively reduces or avoids errors in the obstacle information caused by acquisition noise of the depth sensor or other errors, and improves the accuracy of the obtained obstacle information.
  • The apparatus can store the correspondence between the target pixel and its neighboring pixels and the three-dimensional surface, so that the surface can later be reused by any of those pixels, avoiding repeated construction of surface elements and saving computing resources. In other words, each pixel in the image other than the plurality of feature points corresponds to only one three-dimensional surface, and when a pixel already corresponds to a surface there is no need to construct a surface element for it again.
  • In the process of obtaining the obstacle information, if the target pixel does not belong to the plurality of feature points, the apparatus can determine, according to the stored correspondence, whether the target pixel or its neighboring pixels correspond to a three-dimensional surface; if so, the apparatus obtains that surface according to the correspondence, determines the second distance from the space point to be measured to the surface, and obtains the obstacle information of the space in which the space point is located according to the first distance and the second distance.
  • Otherwise, the apparatus constructs the three-dimensional surface of the target pixel, where the origin of the surface is determined according to the space point corresponding to the target pixel, the normal vector is determined according to the depth information of the target pixel and of its neighboring pixels, and/or the size of the surface is determined according to the first distance; the apparatus then determines the second distance from the space point to the surface, obtains the obstacle information according to the first distance and the second distance, and stores the correspondence between the target pixel and its neighboring pixels and the three-dimensional surface.
  • Because the depth information of the target pixel and its neighboring pixels is represented by the three-dimensional surface, the depth information of the neighboring pixels of the target pixel can be discarded after the correspondence is stored. The three-dimensional surface approximately represents a part of the environment, and the redundant depth information other than the effective depth information of the feature points is compressed into the three parameters that make up the surface (origin, radius, and normal vector), which greatly improves the abstraction capability of the environment representation; once the surface element is constructed, the redundant depth information no longer needs to be stored, which saves storage space.
  • the images correspond to depth information
  • the depth information represents the relative distance from the obstacle.
  • the fine part and the rough part representing the environment in the image are separated and represented, and edge extraction or high-frequency information extraction is performed on the image to obtain multiple feature points that can represent the fine part of the environment.
  • the depth information of the image is compressed into three parameters that make up the three-dimensional surface (the three parameters are the origin, radius and normal vector), which helps to reduce the amount of data that needs to maintain the depth information in the image, further reduces the amount of calculation and saves storage space
  • the depth information of the image is stored in different data structures, and the effective depth information and redundant depth information in the depth information of the image are stored in two forms of three-dimensional point sets and surface elements, respectively. Balanced query accuracy and efficiency.
  • Since the depth information obtained from a single sensing pass is relatively small, in order to improve the accuracy of the obtained obstacle information there are multiple images, each corresponding to its own set of three-dimensional points, and relative pose information is stored between the frames to ensure accurate conversion of pixel coordinates between them.
  • The apparatus may start from the most recently captured frame and first determine whether the space point to be measured lies within the field of view corresponding to that image, i.e., the field of view of the depth sensor when it captured the image; if so, the apparatus obtains the set of three-dimensional points of the image and queries the nearest-neighbor three-dimensional point of the space point in that set; if not, it continues to query historical images, checking whether the space point lies within the field of view of the next image, until the nearest-neighbor three-dimensional point is found.
  • the device does not need to start querying from the most recently acquired frame of image, and the device can also query the multiple images sequentially according to other rules until the nearest three-dimensional point of the spatial point to be measured is found.
  • the embodiment does not impose any limitation on the order in which the device queries the multiple images, and specific settings may be made according to actual application scenarios.
  • A typical depth sensor (such as an RGBD camera) runs at about 30 frames per second, and when the movable platform moves slowly or stays in place, the similarity between two adjacent images collected by the sensor is high, and the sets of three-dimensional points and the subsequently constructed surface-element data are also highly similar; it is therefore unnecessary to maintain the data of every image collected by the depth sensor.
  • Data corresponding to a frame currently collected by the depth sensor may be added to the image sequence maintained by the apparatus when the relative pose between that frame and the latest frame in the maintained sequence is greater than a preset threshold, and/or when the acquisition time interval between that frame and the latest frame in the maintained sequence is greater than a preset time threshold; this helps ensure the sparsity of the image sequence maintained by the apparatus.
  • The image sequence includes multiple images, the relative pose between adjacent images is greater than the preset threshold, and/or the acquisition time interval between adjacent images is greater than the preset time threshold; this reduces the amount of data that must be maintained, effectively reducing computation and saving storage, and because the data to be searched when querying the nearest-neighbor three-dimensional point of the space point to be measured is reduced, query efficiency is also effectively improved.
  • The sensing range of the depth sensor is limited. For example, referring to Fig. 5, when the space point to be measured is located near the edge of the sensing range, the obstacle actually closest to it is obstacle A; but because of the limited sensing range, the depth sensor does not perceive obstacle A, and based on the images acquired within its sensing range the nearest-neighbor obstacle of the space point is finally determined to be obstacle B, so the nearest-neighbor query result is obviously wrong.
  • After the apparatus determines the first distance between the space point to be measured and the nearest-neighbor three-dimensional point in the set, it also needs to determine a third distance between the space point and the boundary of the field of view corresponding to the image, i.e., the boundary of the depth sensor's field of view when the image was captured. Referring to Fig. 6, if the first distance is not greater than the third distance, the space point is not near the edge of the sensing range and the above error cannot occur, meaning the query result is complete, and the apparatus can obtain the obstacle information of the space in which the space point is located according to the first distance.
  • Referring to Fig. 7, if the first distance is greater than the third distance, the query result may be incomplete; the apparatus then obtains the set of three-dimensional points of the next image, queries the nearest-neighbor three-dimensional point of the space point in that set, and determines a first distance between the space point and the nearest-neighbor three-dimensional point in the set of the next image.
  • In this way, the nearest-neighbor three-dimensional point of the space point to be measured is detected accurately and the most complete nearest-neighbor query result is obtained; within the error tolerance of the depth sensor, when the resulting space point is used as a path point of the movable platform, obstacles can be avoided effectively.
  • Correspondingly, an embodiment of the present application also provides an obstacle information acquisition device, including: a memory 21 for storing executable instructions; and one or more processors 22; wherein, when executing the executable instructions, the one or more processors 22 are individually or jointly configured to: acquire a space point to be measured in a detection environment; obtain a set of three-dimensional points related to obstacles in the detection environment, the set being determined according to a plurality of feature points, and depth information thereof, in images collected from the detection environment; determine a first distance between the space point to be measured and the nearest-neighbor three-dimensional point in the set; and acquire obstacle information of the space in which the space point is located according to the first distance.
  • The memory 21 stores executable instructions of the obstacle information acquisition method. The memory 21 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, an optical disc, and the like.
  • The device may include, but is not limited to, a processor 22 and a memory 21. Fig. 8 is only an example of the device and does not constitute a limitation; the device may include more or fewer components than shown, or combine certain components, or use different components; for example, the device may also include input and output devices, network access devices, buses, and so on.
  • The rate of change of the depth information corresponding to the plurality of feature points is greater than or equal to a preset change threshold; the rate of change of the depth information corresponding to the other pixels in the image, apart from the plurality of feature points, is less than the preset change threshold.
  • The processor 22 is further configured to: project the space point to be measured into the two-dimensional space in which the image lies, and obtain the target pixel corresponding to the space point in the image; and, if the target pixel belongs to one of the plurality of feature points, acquire the obstacle information of the space in which the space point is located according to the first distance.
  • The processor 22 is further configured to: if the target pixel does not belong to the plurality of feature points, determine the three-dimensional surface of the target pixel according to the other pixels in the image apart from the plurality of feature points, and obtain the second distance from the space point to be measured to the three-dimensional surface; and obtain the obstacle information of the space in which the space point is located according to the first distance and the second distance.
  • the obstacle information in the space where the space point to be measured is located is obtained according to the smaller of the first distance and the second distance.
  • the second distance is the shortest distance from the spatial point to be measured to the three-dimensional surface.
  • the origin of the three-dimensional surface is determined according to the space point to be measured corresponding to the target pixel; the normal vector of the three-dimensional surface is determined according to the depth information of the target pixel and the depth information of its neighboring pixels; And/or, the size of the three-dimensional surface is determined according to the first distance.
  • the processor 22 is further configured to: store the corresponding relationship between the target pixel and its neighboring pixels and the three-dimensional surface.
  • The processor 22 is further configured to: construct the three-dimensional surface of the target pixel when it is determined, according to the correspondence, that neither the target pixel nor its neighboring pixels correspond to a three-dimensional surface; otherwise, obtain, according to the correspondence, the three-dimensional surface corresponding to the target pixel or to its neighboring pixels.
  • the processor 22 is further configured to: discard the depth information of the neighboring pixels of the target pixel .
  • each pixel in the image except for the plurality of feature points only corresponds to a three-dimensional surface.
  • the nearest neighbor 3D point is obtained from a k-d tree or R tree established by using the 3D point set based on the space point to be measured.
  • each image corresponds to the three-dimensional point set.
  • the relative pose between adjacent images is greater than a preset threshold, and/or, the acquisition time interval between adjacent images is greater than a preset time threshold.
  • The processor 22 is further configured to: if the space point to be measured is within the field of view corresponding to the image, acquire the set of three-dimensional points of the image; otherwise, detect whether the space point is within the field of view corresponding to the next image.
  • The processor 22 is further configured to: determine a third distance between the space point to be measured and the boundary of the field of view corresponding to the image; and, if the first distance is not greater than the third distance, obtain the obstacle information of the space in which the space point is located according to the first distance.
  • the processor 22 is further configured to: if the first distance is greater than the third distance, acquire a set of three-dimensional points of the next image.
  • the plurality of feature points are obtained based on edge detection or high-frequency extraction of the image.
  • the obstacle information in the space where the space point to be measured is located is at least used for path planning or collision detection.
  • the image is collected by a depth sensor mounted on a movable platform;
  • the depth sensor includes at least: lidar or RGBD camera;
  • the image includes any one of the following: a grayscale image corresponding to depth information, a color image corresponding to depth information, or a depth image.
  • Since the device embodiments basically correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant parts. The device embodiments described above are only illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which those skilled in the art can understand and implement without creative effort.
  • The various implementations described herein can be implemented using a computer-readable medium such as computer software, hardware, or any combination thereof. For hardware implementation, the embodiments described herein can be implemented using at least one of application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein. For software implementation, an embodiment such as a procedure or a function may be implemented with a separate software module that allows at least one function or operation to be performed; the software code can be implemented by a software application (or program) written in any suitable programming language, stored in a memory, and executed by a controller.
  • An embodiment of the present application also provides a movable platform, including: a body; a power system, installed in the body, for providing power for the movable platform; a depth sensor for collecting images of the detection environment; and the above-mentioned obstacle information acquisition device.
  • The movable platform includes, but is not limited to, an unmanned aerial vehicle, an unmanned vehicle, an unmanned ship, or a mobile robot.
  • In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, for example a memory including instructions, which are executable by a processor of the device to perform the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like; when the instructions in the storage medium are executed by a processor of a terminal, the terminal is enabled to execute the above method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

An obstacle information acquisition method and apparatus, a movable platform, and a storage medium. The method includes: acquiring a space point to be measured in a detection environment; obtaining a set of three-dimensional points related to obstacles in the detection environment, the set of three-dimensional points being determined according to a plurality of feature points, and depth information thereof, in an image collected from the detection environment; determining a first distance between the space point to be measured and the nearest-neighbor three-dimensional point in the set of three-dimensional points; and obtaining obstacle information of the space in which the space point to be measured is located according to the first distance. This embodiment can improve the efficiency of querying the information of the obstacle nearest to the space point to be measured.

Description

Obstacle information acquisition method and apparatus, movable platform, and storage medium
Technical Field
The present application relates to the technical field of environment detection, and in particular to an obstacle information acquisition method and apparatus, a movable platform, and a storage medium.
Background
When a movable platform (such as an unmanned aerial vehicle or a mobile robot) performs a movement task in a complex environment, it generally needs a depth sensor, such as a depth camera or a lidar, to obtain the relative distance information of obstacles in the environment. This information generally gives the projected distance of the actual obstacle along the observation direction and is therefore also called depth information. Depth information is necessary to ensure maneuvering safety when the movable platform (such as a UAV or a mobile robot) performs a task.
In the related art, after the depth sensor collects the depth information of each frame, one approach is to maintain only the per-frame depth information and to store only the relative pose information between frames, without building a unified three-dimensional map across frames. However, this approach suffers from heavy computation and storage consumption, and the excessive information redundancy also reduces the efficiency of subsequent query operations.
Summary
In view of this, one of the objectives of the present application is to provide an obstacle information acquisition method and apparatus, a movable platform, and a storage medium.
In a first aspect, an embodiment of the present application provides an obstacle information acquisition method, including:
acquiring a space point to be measured in a detection environment;
obtaining a set of three-dimensional points related to obstacles in the detection environment, the set of three-dimensional points being determined according to a plurality of feature points, and depth information thereof, in an image collected from the detection environment;
determining a first distance between the space point to be measured and the nearest-neighbor three-dimensional point in the set of three-dimensional points; and
obtaining obstacle information of the space in which the space point to be measured is located according to the first distance.
In a second aspect, an embodiment of the present application provides an obstacle information acquisition device, including:
a memory for storing executable instructions; and
one or more processors;
wherein, when executing the executable instructions, the one or more processors are individually or jointly configured to:
acquire a space point to be measured in a detection environment;
obtain a set of three-dimensional points related to obstacles in the detection environment, the set of three-dimensional points being determined according to a plurality of feature points, and depth information thereof, in an image collected from the detection environment;
determine a first distance between the space point to be measured and the nearest-neighbor three-dimensional point in the set of three-dimensional points; and
obtain obstacle information of the space in which the space point to be measured is located according to the first distance.
In a third aspect, an embodiment of the present application provides a movable platform, including:
a body;
a power system, installed in the body, for providing power for the movable platform;
a depth sensor for collecting a depth image of the detection environment; and
the obstacle information acquisition device as described in the second aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, implement the method described in the first aspect.
According to the obstacle information acquisition method provided by the embodiments of the present application, after a space point to be measured in the detection environment is acquired, a set of three-dimensional points related to obstacles in the detection environment is obtained; the set is determined according to a plurality of feature points, and their depth information, in the image collected from the detection environment; a first distance between the space point to be measured and the nearest-neighbor three-dimensional point in the set is then determined, and obstacle information of the space in which the space point is located is obtained according to the first distance. In this embodiment, considering that the full depth information corresponding to the image collected from the detection environment contains redundant depth information, the set of three-dimensional points related to obstacles is obtained only from the multiple feature points in the image and their depth information, so that all of the depth information corresponding to the image does not need to be maintained, which helps reduce computation and save storage resources. Correspondingly, since less depth information is maintained, the nearest-neighbor three-dimensional point of the space point to be measured can be determined from the set more quickly, which helps improve the efficiency of obtaining obstacle information about the space in which the space point is located.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application;
Fig. 2 is a schematic flowchart of an obstacle information acquisition method provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of performing edge detection or high-frequency extraction on an image, provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of a surface element provided by an embodiment of the present application;
Fig. 5, Fig. 6 and Fig. 7 are schematic diagrams of the positional relationship between the space point to be measured and its nearest-neighbor obstacle when the space point to be measured is at different positions within the sensing range of the depth sensor, provided by an embodiment of the present application;
Fig. 8 is a schematic structural diagram of an obstacle information acquisition device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below in conjunction with the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present application.
In the related art, after the depth sensor collects the depth information of each frame, one approach is to maintain only the per-frame depth information and to store only the relative pose information between frames, without building a unified three-dimensional map across frames. When maintaining the depth information of a frame, exemplarily, a three-dimensional point cloud of the obstacles can be obtained from the depth information of that frame and stored in a tree-like data structure, for example by building a k-d tree or an R-tree over all three-dimensional points of the frame and providing a nearest-neighbor query interface; the interface is then used to query the obstacle closest to the point to be measured and to determine the distance between the point to be measured and the nearest obstacle, and this distance provides a reference for subsequent tasks (such as collision detection, path planning, or motion planning). In this scheme, for each frame, all of the frame's three-dimensional point cloud data must be stored in the tree-like data structure. The inventors realized that, among the full depth information maintained for each frame, not all of the depth information is effective: redundant depth information exists, and excessive information redundancy reduces the efficiency of subsequent query operations. Moreover, when the depth information of multiple frames needs to be maintained, all of the depth information of every frame must be maintained, so this approach suffers from heavy computation and storage consumption.
In view of the problems in the related art, an embodiment of the present application provides an obstacle information acquisition method. After a space point to be measured in the detection environment is acquired, a set of three-dimensional points related to obstacles in the detection environment is obtained; the set is determined according to a plurality of feature points, and their depth information, in the image collected from the detection environment; a first distance between the space point to be measured and the nearest-neighbor three-dimensional point in the set is then determined, and obstacle information of the space in which the space point is located is obtained according to the first distance. In this embodiment, considering that the full depth information corresponding to the image collected from the detection environment contains redundant depth information, the set of three-dimensional points related to obstacles is obtained only from the multiple feature points in the image and their depth information, so that all of the depth information corresponding to the image does not need to be maintained, which helps reduce computation and save storage resources. Correspondingly, since less depth information is maintained, the nearest-neighbor three-dimensional point of the space point to be measured can be determined from the set more quickly, which helps improve the efficiency of obtaining obstacle information about the space in which the space point is located.
In some embodiments, the obstacle information acquisition method provided by the embodiments of the present application can be applied to an obstacle information acquisition apparatus.
On the one hand, the obstacle information acquisition apparatus may be an electronic device with data processing capabilities; such electronic devices include, but are not limited to, computing devices such as a movable platform, a terminal device, or a server. Examples of the movable platform include, but are not limited to, an unmanned aerial vehicle, an unmanned vehicle, a gimbal, an unmanned ship, or a mobile robot. Examples of the terminal device include, but are not limited to: smartphones/cell phones, tablet computers, personal digital assistants (PDAs), laptop computers, desktop computers, media content players, video game stations/systems, virtual reality systems, augmented reality systems, wearable devices (e.g., watches, glasses, gloves, headgear (e.g., hats, helmets, virtual reality headsets, augmented reality headsets, head-mounted devices (HMDs), headbands), pendants, armbands, leg rings, shoes, vests), remote controls, or any other type of device.
Exemplarily, the obstacle information acquisition apparatus may be a computer software product integrated in the electronic device, and the computer software product may include an application program capable of executing the obstacle information acquisition method provided by the embodiments of the present application. Exemplarily, the obstacle information acquisition apparatus may be an electronic device including at least a memory and a processor, and the processor of the electronic device may execute the executable instructions, stored in the memory, that embody the obstacle information acquisition method provided by the embodiments of the present application.
On the other hand, the obstacle information acquisition apparatus may also be a chip or an integrated circuit with data processing capabilities, including but not limited to, for example, a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or a field-programmable gate array (FPGA). The obstacle information acquisition apparatus may be installed in an electronic device; exemplarily, the obstacle information acquisition apparatus may be a flight controller in an unmanned aerial vehicle.
In an exemplary application scenario, referring to Fig. 1, when the unmanned aerial vehicle 10 performs a flight task in the current environment, obstacles may exist on its flight path. The UAV can use its on-board depth sensor (not shown in Fig. 1) to collect a depth image within the current sensing range; the depth image contains depth information of the obstacles within that range. Considering that the full depth information corresponding to the depth image contains redundancy, while the effective depth information usually lies in regions where the depth changes sharply, the UAV can use the method provided by the embodiments of the present application to perform edge detection or high-frequency extraction on the depth image to obtain a plurality of feature points, and then determine a set of three-dimensional points related to obstacles in the depth image based on those feature points and their depth information, without maintaining all of the depth information of the image, which helps reduce computation and save storage resources. After acquiring a space point to be measured in the current detection environment (for example, the space point to be measured is a flight path point of the UAV), the UAV can obtain the set of three-dimensional points of the depth image, determine a first distance between the space point to be measured and the nearest-neighbor three-dimensional point in the set, and then obtain the obstacle information of the space in which the space point is located according to the first distance. The obstacle information can be used to check the safety of that space and, in turn, for path planning or collision detection. In this embodiment, because less depth information is maintained, the nearest-neighbor three-dimensional point of the space point to be measured can be determined from the set more quickly, which helps improve the efficiency of obtaining obstacle information about the space in which the space point is located.
The obstacle information acquisition method provided by the embodiments of the present application is described next. Referring to Fig. 2, Fig. 2 is a schematic flowchart of an obstacle information acquisition method provided by an embodiment of the present application; the method may be performed by the obstacle information acquisition apparatus and includes:
In step S101, a space point to be measured in the detection environment is acquired.
In step S102, a set of three-dimensional points related to obstacles in the detection environment is obtained; the set of three-dimensional points is determined according to a plurality of feature points, and their depth information, in the image collected from the detection environment.
In step S103, a first distance between the space point to be measured and the nearest-neighbor three-dimensional point in the set of three-dimensional points is determined.
In step S104, obstacle information of the space in which the space point to be measured is located is acquired according to the first distance.
In some embodiments, taking the case where the apparatus is mounted on a movable platform as an example, the movable platform also carries a depth sensor, which is used to collect an image of the obstacles within its current sensing range in the detection environment. The image corresponds to depth information, which represents the relative distance between the obstacles within the current sensing range and the movable platform and is the information necessary to ensure maneuvering safety when the movable platform (such as a UAV or a mobile robot) performs a task. The depth sensor includes, but is not limited to, a lidar or an RGBD camera, and the image collected by the depth sensor includes, but is not limited to, a grayscale image with corresponding depth information, a color image with corresponding depth information, or a depth image.
In some embodiments, the apparatus can acquire the image, collected by the depth sensor, that relates to obstacles in the detection environment. Considering that the effective depth information in an image usually lies in regions where the depth changes sharply, and that, for a grayscale image with depth information, a color image with depth information, or a depth image, such regions can be regarded as regions rich in texture information (texture-rich regions), the apparatus can perform edge detection or high-frequency extraction on the image to obtain multiple feature points that characterize the fine structure of the image, and then use those feature points and their depth information to construct a set of three-dimensional points characterizing the texture-rich regions, which helps reduce redundant depth information.
It can be understood that the means of performing edge detection or high-frequency extraction on the image can be set specifically according to the actual application scenario, and this embodiment places no restriction on it. As an example, a Sobel operator or a Canny operator can be used for edge detection; as an example, a discrete cosine transform can be used to extract the high-frequency information of the image. Referring to Fig. 3, the apparatus performs edge detection or high-frequency extraction on the image to obtain a plurality of pixels representing the fine structure of the image, namely the feature points.
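As a concrete illustration of this step (a minimal sketch only; the patent does not prescribe any particular library, and the use of OpenCV/NumPy, the Canny thresholds, and the function names below are assumptions), feature pixels could be selected from a depth image with a Canny edge detector and then lifted to 3D points using the camera intrinsics:

```python
# Minimal sketch: extract feature pixels from a depth image and lift them to 3D points.
# Library choice (OpenCV/NumPy), thresholds, and intrinsics are illustrative assumptions.
import cv2
import numpy as np

def extract_feature_points_3d(depth_m, fx, fy, cx, cy, lo=30, hi=90):
    """Return an (N, 3) array of 3D points located at depth edges of a metric depth image."""
    # Normalize depth to 8-bit so Canny can operate on it.
    d8 = cv2.normalize(depth_m, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(d8, lo, hi)          # edge pixels ~ sharp depth changes
    v, u = np.nonzero(edges)               # pixel coordinates of the feature points
    z = depth_m[v, u]
    valid = z > 0                          # drop pixels with no depth reading
    u, v, z = u[valid], v[valid], z[valid]
    # Back-project with the pinhole model.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)
```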
The depth information corresponding to the plurality of feature points is effective depth information, and the rate of change of the depth information corresponding to the plurality of feature points is greater than or equal to a preset change threshold; the depth information corresponding to the other pixels in the image, apart from the plurality of feature points, is redundant depth information, and its rate of change is smaller than the preset change threshold. That is to say, the depth information at the feature points changes sharply, while the depth information at the other pixels changes relatively gently. In general, the effective depth information is concentrated in regions where the depth changes drastically; therefore, this embodiment maintains the set of three-dimensional points constructed from the multiple feature points and their depth information, which helps reduce redundant depth information and save storage space.
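The rate-of-change criterion in this paragraph could, for instance, be realized as a simple gradient-magnitude threshold (a sketch; the threshold value and the use of np.gradient are assumptions, not part of the patent):

```python
# Minimal sketch: select feature pixels whose depth rate of change meets a preset threshold.
import numpy as np

def feature_mask_by_change_rate(depth_m, change_threshold=0.15):
    """Return a boolean mask of pixels whose local depth change rate >= threshold."""
    dz_dv, dz_du = np.gradient(depth_m)      # per-pixel depth derivatives
    change_rate = np.hypot(dz_du, dz_dv)     # magnitude of the depth gradient
    return change_rate >= change_threshold   # True -> effective depth information
```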
In some possible implementations, when the apparatus maintains the set of three-dimensional points of each frame of image collected by the depth sensor, it may store the set of each frame in a tree-like data structure, for example by building a k-d tree or an R-tree over the set of three-dimensional points of the frame, so that the k-d tree or R-tree corresponding to each frame can later be used for nearest-neighbor search. In this embodiment, because only the tree-like data structure is used to maintain the set of three-dimensional points and the amount of maintained depth information is reduced, the efficiency of subsequent nearest-neighbor queries based on the tree-like data structure can be effectively improved.
The k-d tree (k-dimensional tree) is a binary tree in which every node is a k-dimensional point. Each non-leaf node can be regarded as a hyperplane that divides the space into two half-spaces: the subtree to the left of the node represents the points on the left side of the hyperplane, and the subtree to the right represents the points on the right side; the hyperplane of each node is chosen perpendicular to one of the k dimensions. The core idea of the R-tree is to aggregate nodes that are close to one another and to represent them, at the next level up of the tree, by their minimum bounding rectangle, which becomes a node of that upper level. The "R" of R-tree stands for "rectangle": because all nodes lie within their minimum bounding rectangles, a query that does not intersect a given rectangle cannot intersect any node inside that rectangle; each rectangle at a leaf node represents an object, internal nodes are aggregations of objects, and the higher the level, the more objects are aggregated. It can be understood that the k-d tree or R-tree mentioned in this embodiment is only an example of a tree-like data structure and does not limit the choice of structure; for example, an R*-tree, an R+-tree, or a B-tree may also be used to store the set of three-dimensional points of each frame, which is not limited in this embodiment. Besides tree-like data structures, other data storage structures may also be used to store the set of three-dimensional points of the image, which is likewise not limited in this embodiment.
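For illustration only (the patent does not mandate a specific implementation; the use of SciPy's cKDTree and the names below are assumptions), building a per-frame k-d tree over the feature points and querying the nearest neighbor of a space point could look like this:

```python
# Minimal sketch: per-frame k-d tree over feature points and a nearest-neighbor query.
# SciPy is an assumed choice; any k-d tree / R-tree implementation would do.
import numpy as np
from scipy.spatial import cKDTree

class FramePointSet:
    def __init__(self, points_3d: np.ndarray):
        # points_3d: (N, 3) feature points of one frame
        self.tree = cKDTree(points_3d)

    def first_distance(self, query_point: np.ndarray):
        """Return (distance, nearest 3D point) for the space point to be measured."""
        dist, idx = self.tree.query(query_point)   # Euclidean nearest neighbor
        return dist, self.tree.data[idx]

# Usage sketch:
# frame = FramePointSet(extract_feature_points_3d(depth_m, fx, fy, cx, cy))
# d1, nearest = frame.first_distance(np.array([1.0, 0.5, 3.0]))
```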
In some embodiments, after the apparatus acquires the space point to be measured in the detection environment, the obstacle information acquisition method provided by the embodiments of the present application can be used to check the safety of the space in which the space point is located. Exemplarily, the space point to be measured may be a path point planned by the movable platform carrying the apparatus during path planning, or a path point during its motion. To ensure safe movement of the movable platform, the obstacle information of the space in which the space point is located needs to be determined; for example, the obstacle information may be the information of the obstacle nearest to the space point, so that the movable platform can avoid obstacles during its motion and move safely.
After the apparatus obtains the space point to be measured in the detection environment, it can obtain a set of three-dimensional points related to obstacles in the detection environment; the set is determined from the multiple feature points, and their depth information, obtained by the apparatus by performing edge detection or high-frequency extraction on the image collected by the depth sensor. Exemplarily, the set may be maintained by the apparatus using a tree-like data structure, for example a k-d tree or an R-tree built over the set of three-dimensional points of the image. The apparatus can then query the nearest-neighbor three-dimensional point of the space point to be measured from the set according to the three-dimensional coordinates of that point; exemplarily, the apparatus can query the nearest-neighbor three-dimensional point in the k-d tree or R-tree built over the set. The nearest-neighbor three-dimensional point represents the obstacle in the detection environment, within the sensing range of the depth sensor, that is closest to the space point to be measured. The apparatus can then determine the first distance between the space point to be measured and the nearest-neighbor three-dimensional point in the set, and obtain the obstacle information of the space in which the space point is located according to the first distance; for example, the obstacle information may be the information of the obstacle nearest to the space point. In this embodiment, because only a tree-like data structure is used to maintain the set of three-dimensional points and the maintained depth information is reduced, the nearest-neighbor three-dimensional point of the space point to be measured can be determined from the tree-like data structure more quickly, which helps improve the efficiency of obtaining obstacle information about the space in which the space point is located.
In some possible implementations, to improve the accuracy of the obtained obstacle information, in the process of obtaining the obstacle information of the space in which the space point to be measured is located, the apparatus projects the space point to be measured into the two-dimensional space in which the image lies and obtains the target pixel corresponding to the space point in the image. If the target pixel belongs to one of the plurality of feature points, this indicates that the depth information corresponding to the target pixel has already been used to construct the set of three-dimensional points of the image and that the space point lies in the region indicated by the set; the apparatus can then directly obtain the obstacle information of the space in which the space point is located according to the first distance.
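A minimal sketch of this projection step (the pinhole model, the world-to-camera pose convention, and the function names are assumptions; the patent does not specify them): the space point is transformed into the camera frame of the image and projected with the intrinsics to find the target pixel.

```python
# Minimal sketch: project a space point into the image to find the target pixel.
import numpy as np

def project_to_pixel(point_w, R_cw, t_cw, fx, fy, cx, cy, width, height):
    """Return (u, v) integer pixel of point_w in the image, or None if not visible."""
    p_c = R_cw @ point_w + t_cw           # world point expressed in the camera frame
    if p_c[2] <= 0:                       # behind the camera: not in this image
        return None
    u = int(round(fx * p_c[0] / p_c[2] + cx))
    v = int(round(fy * p_c[1] / p_c[2] + cy))
    if 0 <= u < width and 0 <= v < height:
        return u, v                       # target pixel corresponding to the space point
    return None                           # outside the field of view of this image
```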
If the target pixel does not belong to the plurality of feature points, indicating that the space point to be measured may lie outside the region indicated by the set of three-dimensional points, then, to improve the accuracy of the obtained obstacle information, the apparatus can determine a three-dimensional surface of the target pixel according to the other pixels in the image apart from the plurality of feature points, obtain a second distance from the space point to be measured to the three-dimensional surface, and then obtain the obstacle information of the space in which the space point is located according to the first distance and the second distance.
The three-dimensional surface can approximately represent the detection environment. Exemplarily, referring to Fig. 4, when constructing the three-dimensional surface of the target pixel, the apparatus can determine the origin of the surface according to the three-dimensional coordinates of the space point to be measured corresponding to the target pixel (the origin is denoted by "p" in Fig. 4), determine the normal vector of the surface according to the depth information of the target pixel and the depth information of its neighboring pixels (denoted by "n" in Fig. 4), and/or determine the size of the surface according to the first distance (denoted by "r" in Fig. 4), thereby obtaining the constructed three-dimensional surface. In this embodiment, the three-dimensional surface approximately represents a part of the environment, and the redundant depth information other than the effective depth information of the feature points is compressed into the three parameters that make up the surface (origin, radius, and normal vector), which greatly improves the abstraction capability of the environment representation and achieves an accurate representation of the environment with less data. Exemplarily, in the embodiment shown in Fig. 4, the origin of the three-dimensional surface can be represented by three attribute values, which jointly give the coordinate position of the origin in a preset three-dimensional coordinate system; with a fixed height, the normal vector of the surface can be represented by two attribute values, the height and the two attribute values jointly giving the position of the normal vector in the preset coordinate system; and if the surface is a circular surface, its size can be represented by one attribute value. In this way, the redundant depth information other than the effective depth information of the feature points is compressed into six attribute values forming the three-dimensional surface, achieving an accurate representation of the environment with less data.
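The surface element described here could be built, for example, as follows (a sketch under assumptions: the origin is taken from the 3D point back-projected at the target pixel, the normal is estimated from two back-projected neighbor differences, and the radius is tied to the first distance; the data layout and helper names are illustrative, not part of the patent, and neighbors are assumed to be inside the image):

```python
# Minimal sketch: build a circular surface element (origin p, normal n, radius r).
import numpy as np
from dataclasses import dataclass

@dataclass
class Surfel:
    origin: np.ndarray   # p: position of the surface element
    normal: np.ndarray   # n: unit normal of the surface element
    radius: float        # r: size of the circular surface

def backproject(u, v, depth_m, fx, fy, cx, cy):
    z = depth_m[v, u]
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def build_surfel(u, v, depth_m, first_distance, fx, fy, cx, cy):
    p = backproject(u, v, depth_m, fx, fy, cx, cy)
    # Estimate the normal from neighboring pixels via the cross product of two tangents.
    du = backproject(u + 1, v, depth_m, fx, fy, cx, cy) - p
    dv = backproject(u, v + 1, depth_m, fx, fy, cx, cy) - p
    n = np.cross(du, dv)
    n /= np.linalg.norm(n)
    return Surfel(origin=p, normal=n, radius=first_distance)  # r tied to the first distance
```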
After obtaining the three-dimensional surface of the target pixel, the apparatus can compute the shortest distance from the space point to be measured to the three-dimensional surface, take it as the second distance, and then obtain the obstacle information of the space in which the space point is located according to the smaller of the first distance and the second distance. In this embodiment, after the second distance is obtained, the first distance is compared with it, and the smaller of the two is used to obtain the obstacle information of the space in which the space point is located; that is, the safety range of the space point to be measured is determined by the smaller of the first distance and the second distance, which effectively reduces or avoids errors in the obstacle information caused by the acquisition noise of the depth sensor or other errors, and improves the accuracy of the obtained obstacle information.
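The second distance, i.e. the shortest distance from the space point to the circular surface element, could be computed as in the following sketch (the closed-form point-to-disk distance is standard geometry; the patent itself only states that the shortest distance is used and that the smaller of the two distances bounds the safe range):

```python
# Minimal sketch: shortest distance from a point to a circular surface element,
# then combine with the first distance by taking the smaller of the two.
import numpy as np

def point_to_surfel_distance(q, surfel):
    d = q - surfel.origin
    dist_n = float(np.dot(d, surfel.normal))      # signed distance to the surfel plane
    in_plane = d - dist_n * surfel.normal         # projection of q onto the plane
    rho = np.linalg.norm(in_plane)                # radial distance from the origin
    if rho <= surfel.radius:                      # projection falls on the disk
        return abs(dist_n)
    # Otherwise the closest point lies on the rim of the disk.
    return float(np.hypot(rho - surfel.radius, dist_n))

def safety_distance(first_distance, second_distance):
    return min(first_distance, second_distance)   # smaller distance bounds the safe range
```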
After constructing the three-dimensional surface of the target pixel, the apparatus can store the correspondence between the target pixel and its neighboring pixels and the three-dimensional surface, so that the surface can later be used by any of the target pixel and its neighboring pixels, avoiding the repeated construction of surface elements and saving computing resources. In other words, each pixel in the image other than the plurality of feature points corresponds to only one three-dimensional surface, and when a pixel already corresponds to a surface, there is no need to construct a surface element for it again.
Thus, in some embodiments, in the process of obtaining the obstacle information of the space in which the space point to be measured is located, if the target pixel does not belong to the plurality of feature points, the apparatus can determine, according to the pre-stored correspondence between pixels and three-dimensional surfaces, whether the target pixel or its neighboring pixels correspond to a three-dimensional surface. If it is determined according to the correspondence that the target pixel or its neighboring pixels correspond to a three-dimensional surface, the apparatus can obtain that surface according to the correspondence, determine the second distance from the space point to be measured to the surface, and obtain the obstacle information of the space according to the first distance and the second distance; this embodiment does not need to rebuild the three-dimensional surface, which saves computing resources and improves efficiency. If it is determined according to the correspondence that neither the target pixel nor its neighboring pixels correspond to a three-dimensional surface, the apparatus constructs the three-dimensional surface of the target pixel, where the origin of the surface is determined according to the space point to be measured corresponding to the target pixel, the normal vector of the surface is determined according to the depth information of the target pixel and the depth information of its neighboring pixels, and/or the size of the surface is determined according to the first distance; the apparatus then determines the second distance from the space point to be measured to the surface, obtains the obstacle information of the space according to the first distance and the second distance, and stores the correspondence between the target pixel and its neighboring pixels and the three-dimensional surface.
In some possible implementations, considering that the depth information of the target pixel and its neighboring pixels is represented by the three-dimensional surface, the depth information of the neighboring pixels of the target pixel can be discarded after the correspondence between the target pixel and its neighboring pixels and the three-dimensional surface is stored. In this embodiment, the three-dimensional surface approximately represents a part of the environment, and the redundant depth information other than the effective depth information of the feature points is compressed into the three parameters that make up the surface (origin, radius, and normal vector), which greatly improves the abstraction capability of the environment representation; after the surface element is constructed, the redundant depth information no longer needs to be stored, which saves storage space.
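The correspondence-and-reuse logic described in the preceding paragraphs could be sketched as a per-frame cache keyed by pixel coordinates (a hypothetical structure; the patent does not prescribe how the correspondence is stored; this reuses build_surfel and point_to_surfel_distance from the sketches above):

```python
# Minimal sketch: cache surfels per pixel so a surface element is built only once
# and can be reused by the target pixel and its neighboring pixels.
def query_second_distance(q, u, v, depth_m, first_distance, surfel_cache,
                          fx, fy, cx, cy, neighborhood=1):
    # Look up the target pixel and its neighbors in the cache first.
    for dv in range(-neighborhood, neighborhood + 1):
        for du in range(-neighborhood, neighborhood + 1):
            surfel = surfel_cache.get((u + du, v + dv))
            if surfel is not None:
                return point_to_surfel_distance(q, surfel)   # reuse, no rebuild
    # Otherwise build the surfel once and register it for the pixel and its neighbors.
    surfel = build_surfel(u, v, depth_m, first_distance, fx, fy, cx, cy)
    for dv in range(-neighborhood, neighborhood + 1):
        for du in range(-neighborhood, neighborhood + 1):
            surfel_cache[(u + du, v + dv)] = surfel
    return point_to_surfel_distance(q, surfel)
```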
In this embodiment, for the image of obstacles collected by the depth sensor from the detection environment, the image corresponds to depth information, which represents the relative distance to the obstacles. To retain the effective depth information while reducing the redundant depth information, this embodiment represents the fine part and the coarse part of the environment in the image separately: edge extraction or high-frequency extraction is performed on the image to obtain multiple feature points capable of representing the fine part of the environment, and a set of three-dimensional points is constructed from those feature points and their depth information; the other pixels of the image, apart from the feature points, which represent the coarse part of the environment, are represented by three-dimensional surfaces, with their depth information compressed into the three parameters that make up the surface (origin, radius, and normal vector). This helps reduce the amount of data for which depth information must be maintained in the image, further reducing computation and saving storage. This embodiment stores the depth information of the image in different data structures: the effective depth information and the redundant depth information are stored in the two forms of a set of three-dimensional points and surface elements, respectively, which balances query accuracy and efficiency.
In some embodiments, considering that the sensing range of the depth sensor is limited and the depth information obtained from a single sensing pass is relatively small, in order to improve the accuracy of the obtained obstacle information of the space in which the space point to be measured is located, there are multiple images, and each image corresponds to its own set of three-dimensional points. Relative pose information is stored between the frames to ensure accurate conversion of pixel coordinates between the frames.
Exemplarily, when querying the nearest-neighbor three-dimensional point of the space point to be measured, the apparatus may start from the most recently captured frame and first determine whether the space point lies within the field of view corresponding to that image, i.e., the field of view of the depth sensor when it captured the image. If the space point lies within the field of view corresponding to the image, the apparatus obtains the set of three-dimensional points of the image and queries the nearest-neighbor three-dimensional point of the space point in that set; if the space point does not lie within the field of view corresponding to the image, the apparatus continues the query over historical images, checking whether the space point lies within the field of view corresponding to the next image, until the nearest-neighbor three-dimensional point of the space point is found.
It can be understood that the query does not have to start from the most recently captured frame; the apparatus may also query the multiple images in turn according to other rules until the nearest-neighbor three-dimensional point of the space point to be measured is found. This embodiment places no restriction on the order in which the apparatus queries the multiple images, which may be set specifically according to the actual application scenario.
In some embodiments, considering that storing the data of a large number of images may actually reduce query efficiency, and that a typical depth sensor (such as an RGBD camera) runs at about 30 frames per second, when the movable platform moves slowly or stays in place the similarity between two adjacent images collected by the depth sensor is high, and the sets of three-dimensional points of the acquired images and the subsequently constructed surface-element data are also highly similar; it can therefore be considered unnecessary to maintain the data of every image collected by the depth sensor. The data corresponding to a frame currently collected by the depth sensor may be added to the image sequence maintained by the apparatus when the relative pose between that frame and the latest frame in the maintained image sequence is greater than a preset threshold, and/or when the acquisition time interval between that frame and the latest frame in the maintained image sequence is greater than a preset time threshold; this helps ensure the sparsity of the image sequence maintained by the apparatus. The image sequence includes multiple images, the relative pose between adjacent images is greater than the preset threshold, and/or the acquisition time interval between adjacent images is greater than the preset time threshold, which helps reduce the amount of data that must be maintained, effectively reducing computation and saving storage; and because the data to be maintained, in other words the data to be searched when querying the nearest-neighbor three-dimensional point of the space point to be measured, is reduced, query efficiency can also be effectively improved.
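A possible realization of this keyframe-style selection rule (the thresholds and the pose-distance metric are illustrative assumptions; the patent only requires a pose threshold and/or a time threshold):

```python
# Minimal sketch: add a newly collected frame to the maintained sequence only when its
# relative pose or its acquisition time differs enough from the latest kept frame.
import numpy as np

def should_keep_frame(new_pose, new_time, last_pose, last_time,
                      trans_thresh=0.5, rot_thresh_rad=0.3, time_thresh_s=1.0):
    """Poses are 4x4 homogeneous transforms in a common world frame."""
    rel = np.linalg.inv(last_pose) @ new_pose        # relative pose between the two frames
    trans = np.linalg.norm(rel[:3, 3])               # translation magnitude
    angle = np.arccos(np.clip((np.trace(rel[:3, :3]) - 1) / 2, -1.0, 1.0))  # rotation angle
    pose_changed = trans > trans_thresh or angle > rot_thresh_rad
    time_elapsed = (new_time - last_time) > time_thresh_s
    return pose_changed or time_elapsed

# if should_keep_frame(...): maintained_sequence.append(new_frame_data)
```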
In some embodiments, the sensing range of the depth sensor is limited. For example, referring to Fig. 5, when the space point to be measured is located near the edge of the sensing range of the depth sensor, the obstacle actually closest to the space point is obstacle A; but because the sensing range of the depth sensor is limited, the depth sensor does not perceive obstacle A, and, based on the images acquired by the depth sensor within its sensing range, the nearest-neighbor obstacle of the space point is finally determined to be obstacle B. The resulting nearest-neighbor query result is obviously wrong.
Therefore, to improve the accuracy of the obtained obstacle information, after determining the first distance between the space point to be measured and the nearest-neighbor three-dimensional point in the set, the apparatus also needs to determine a third distance between the space point and the boundary of the field of view corresponding to the image, i.e., the boundary of the field of view of the depth sensor when it captured the image. Referring to Fig. 6, if the first distance is not greater than the third distance, the space point is not near the edge of the sensing range of the depth sensor and the above error cannot occur, which means the query result is complete; the apparatus can obtain the obstacle information of the space in which the space point is located according to the first distance. Referring to Fig. 7, if the first distance is greater than the third distance, the space point is near the edge of the sensing range of the depth sensor and the above error may occur, which means the query result is incomplete; the apparatus then obtains the set of three-dimensional points of the next image, queries the nearest-neighbor three-dimensional point of the space point in that set, and determines a first distance between the space point and the nearest-neighbor three-dimensional point in the set of the next image. In this embodiment, by comparing the first distance with the third distance, the nearest-neighbor three-dimensional point of the space point to be measured is detected accurately and the most complete nearest-neighbor query result is obtained; within the error tolerance of the depth sensor, when the resulting space point to be measured is used as a path point of the movable platform, obstacles can be avoided effectively.
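The completeness check in this paragraph could be organized as in the following sketch (the frustum-boundary distance helper and the per-frame data layout are hypothetical and serve only to make the control flow concrete):

```python
# Minimal sketch: walk the maintained frames from newest to oldest and accept a
# nearest-neighbor result only when the first distance does not exceed the distance
# to the frame's field-of-view boundary (the "third distance").
def query_obstacle_distance(q, frames):
    """frames: newest-first list of per-frame objects exposing in_fov(q),
    first_distance(q) and fov_boundary_distance(q) -- hypothetical helpers."""
    best = None
    for frame in frames:
        if not frame.in_fov(q):            # point not in this frame's field of view
            continue                       # -> check the next (older) image
        d1, nearest = frame.first_distance(q)       # distance to nearest 3D point
        d3 = frame.fov_boundary_distance(q)         # third distance, to the FOV boundary
        if d1 <= d3:                       # result cannot be cut off by the sensing range
            return d1, nearest             # complete query result
        if best is None or d1 < best[0]:
            best = (d1, nearest)           # remember best incomplete result as a fallback
    return best
```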
Correspondingly, referring to FIG. 8, an embodiment of the present application further provides an obstacle information acquisition device, including:

a memory 21 for storing executable instructions; and

one or more processors 22;

wherein, when the one or more processors 22 execute the executable instructions, they are individually or jointly configured to:

obtain a space point to be measured in a detection environment;

obtain a set of three-dimensional points related to obstacles in the detection environment, the set of three-dimensional points being determined according to a plurality of feature points and their depth information in an image collected from the detection environment;

determine a first distance between the space point to be measured and the nearest neighbor three-dimensional point in the set of three-dimensional points; and

obtain obstacle information of the space where the space point to be measured is located according to the first distance.
The memory 21 stores the executable instructions of the obstacle information acquisition method. The memory 21 may include at least one type of storage medium, including flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, an optical disc, and the like.

The device may include, but is not limited to, the processor 22 and the memory 21. Those skilled in the art can understand that FIG. 8 is only an example of the device and does not constitute a limitation on the device; the device may include more or fewer components than shown, or combine certain components, or use different components. For example, the device may also include input/output devices, network access devices, buses, and the like.
In some embodiments, the rate of change of the depth information corresponding to each of the plurality of feature points is greater than or equal to a preset change threshold, and the rate of change of the depth information corresponding to the pixels in the image other than the plurality of feature points is less than the preset change threshold.

In some embodiments, the processor 22 is further configured to: project the space point to be measured into the two-dimensional space of the image to obtain the target pixel corresponding to the space point to be measured in the image; and, if the target pixel belongs to one of the plurality of feature points, obtain the obstacle information of the space where the space point to be measured is located according to the first distance.

In some embodiments, the processor 22 is further configured to: if the target pixel does not belong to the plurality of feature points, determine the three-dimensional surface of the target pixel according to the pixels in the image other than the plurality of feature points, and obtain a second distance from the space point to be measured to the three-dimensional surface; and obtain the obstacle information of the space where the space point to be measured is located according to the first distance and the second distance.

In some embodiments, the obstacle information of the space where the space point to be measured is located is obtained according to the smaller of the first distance and the second distance.

In some embodiments, the second distance is the closest distance from the space point to be measured to the three-dimensional surface.

In some embodiments, the origin of the three-dimensional surface is determined according to the space point to be measured corresponding to the target pixel; the normal vector of the three-dimensional surface is determined according to the depth information of the target pixel and the depth information of its neighboring pixels; and/or, the size of the three-dimensional surface is determined according to the first distance.

In some embodiments, the processor 22 is further configured to: store the correspondence between the target pixel and its neighboring pixels and the three-dimensional surface.

In some embodiments, the processor 22 is further configured to: construct the three-dimensional surface of the target pixel when it is determined, according to the correspondence, that neither the target pixel nor the neighboring pixels of the target pixel corresponds to a three-dimensional surface; otherwise, obtain the three-dimensional surface corresponding to the target pixel or the neighboring pixels of the target pixel according to the correspondence.

In some embodiments, after the correspondence between the target pixel and its neighboring pixels and the three-dimensional surface is stored, the processor 22 is further configured to: discard the depth information of the neighboring pixels of the target pixel.

In some embodiments, each of the pixels in the image other than the plurality of feature points corresponds to only one three-dimensional surface.
In some embodiments, the nearest neighbor three-dimensional point is obtained by querying, based on the space point to be measured, a k-d tree or an R-tree built from the set of three-dimensional points.

In some embodiments, there are multiple images, and each of the images corresponds to a set of three-dimensional points.

In some embodiments, the relative pose between adjacent images is greater than a preset threshold, and/or the acquisition time interval between adjacent images is greater than a preset time threshold.

In some embodiments, the processor 22 is further configured to: if the space point to be measured is within the field of view corresponding to the image, obtain the set of three-dimensional points of the image; otherwise, detect whether the space point to be measured is within the field of view corresponding to the next image.

In some embodiments, after determining the first distance between the space point to be measured and the nearest neighbor three-dimensional point in the set of three-dimensional points, the processor 22 is further configured to: determine a third distance between the space point to be measured and the boundary of the field of view corresponding to the image; and, if the first distance is not greater than the third distance, obtain the obstacle information of the space where the space point to be measured is located according to the first distance.

In some embodiments, the processor 22 is further configured to: if the first distance is greater than the third distance, obtain the set of three-dimensional points of the next image.

In some embodiments, the plurality of feature points are obtained by performing edge detection or high-frequency extraction on the image.

In some embodiments, the obstacle information of the space where the space point to be measured is located is used at least for path planning or collision detection.

In some embodiments, the image is collected by a depth sensor mounted on a movable platform;

the depth sensor includes at least a laser radar or an RGBD camera; and

the image includes any one of the following: a grayscale image with corresponding depth information, a color image with corresponding depth information, or a depth image.
As for the device embodiments, since they basically correspond to the method embodiments, reference may be made to the relevant descriptions of the method embodiments. The device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the solution without creative effort.

The various implementations described herein may be implemented using a computer-readable medium such as computer software, hardware, or any combination thereof. For a hardware implementation, the implementations described herein may be realized by using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein. For a software implementation, implementations such as procedures or functions may be realized with a separate software module that allows at least one function or operation to be performed. The software code may be implemented by a software application (or program) written in any suitable programming language, and may be stored in a memory and executed by a controller.
By way of example, an embodiment of the present application further provides a movable platform, including:

a fuselage;

a power system installed in the fuselage and used to provide power for the movable platform;

a depth sensor for collecting images of the detection environment; and the obstacle information acquisition device described above.

In an embodiment, the movable platform includes, but is not limited to, an unmanned aerial vehicle, an unmanned vehicle, an unmanned ship, a mobile robot, or the like.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is further provided, for example, a memory including instructions, where the instructions are executable by the processor of the device to complete the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.

A non-transitory computer-readable storage medium, when the instructions in the storage medium are executed by the processor of a terminal, enables the terminal to execute the above method.
It should be noted that, in this document, relational terms such as first and second are only used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations. The terms "include", "comprise", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.

The method and device provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the method of the present application and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as a limitation on the present application.

Claims (42)

  1. A method for obtaining obstacle information, comprising:
    obtaining a space point to be measured in a detection environment;
    obtaining a set of three-dimensional points related to obstacles in the detection environment, wherein the set of three-dimensional points is determined according to a plurality of feature points and depth information thereof in an image collected from the detection environment;
    determining a first distance between the space point to be measured and a nearest neighbor three-dimensional point in the set of three-dimensional points; and
    obtaining obstacle information of the space where the space point to be measured is located according to the first distance.
  2. The method according to claim 1, wherein a rate of change of the depth information corresponding to each of the plurality of feature points is greater than or equal to a preset change threshold; and
    a rate of change of the depth information corresponding to pixels in the image other than the plurality of feature points is less than the preset change threshold.
  3. The method according to claim 1, further comprising:
    projecting the space point to be measured into the two-dimensional space of the image to obtain a target pixel corresponding to the space point to be measured in the image;
    wherein the obtaining obstacle information of the space where the space point to be measured is located according to the first distance comprises:
    if the target pixel belongs to one of the plurality of feature points, obtaining the obstacle information of the space where the space point to be measured is located according to the first distance.
  4. The method according to claim 3, further comprising:
    if the target pixel does not belong to the plurality of feature points, determining a three-dimensional surface of the target pixel according to pixels in the image other than the plurality of feature points, and obtaining a second distance from the space point to be measured to the three-dimensional surface;
    wherein the obtaining obstacle information of the space where the space point to be measured is located according to the first distance comprises:
    obtaining the obstacle information of the space where the space point to be measured is located according to the first distance and the second distance.
  5. The method according to claim 4, wherein the obstacle information of the space where the space point to be measured is located is obtained according to the smaller of the first distance and the second distance.
  6. The method according to claim 4, wherein the second distance is the closest distance from the space point to be measured to the three-dimensional surface.
  7. The method according to claim 4, wherein an origin of the three-dimensional surface is determined according to the space point to be measured corresponding to the target pixel;
    a normal vector of the three-dimensional surface is determined according to the depth information of the target pixel and depth information of neighboring pixels thereof;
    and/or, a size of the three-dimensional surface is determined according to the first distance.
  8. The method according to claim 7, further comprising:
    storing a correspondence between the target pixel and its neighboring pixels and the three-dimensional surface.
  9. The method according to claim 8, wherein the determining a three-dimensional surface of the target pixel according to pixels in the image other than the plurality of feature points comprises:
    constructing the three-dimensional surface of the target pixel when it is determined, according to the correspondence, that neither the target pixel nor the neighboring pixels of the target pixel corresponds to a three-dimensional surface;
    otherwise, obtaining the three-dimensional surface corresponding to the target pixel or the neighboring pixels of the target pixel according to the correspondence.
  10. The method according to claim 8, further comprising, after the storing the correspondence between the target pixel and its neighboring pixels and the three-dimensional surface:
    discarding the depth information of the neighboring pixels of the target pixel.
  11. The method according to any one of claims 4 to 10, wherein each of the pixels in the image other than the plurality of feature points corresponds to only one three-dimensional surface.
  12. The method according to claim 1, wherein the nearest neighbor three-dimensional point is obtained by querying, based on the space point to be measured, a k-d tree or an R-tree built from the set of three-dimensional points.
  13. The method according to any one of claims 1 to 12, wherein there are multiple images, and each of the images corresponds to a set of three-dimensional points.
  14. The method according to claim 13, wherein a relative pose between adjacent images is greater than a preset threshold, and/or an acquisition time interval between adjacent images is greater than a preset time threshold.
  15. The method according to claim 13, wherein the obtaining a set of three-dimensional points related to obstacles in the detection environment comprises:
    if the space point to be measured is within the field of view corresponding to the image, obtaining the set of three-dimensional points of the image; otherwise, detecting whether the space point to be measured is within the field of view corresponding to the next image.
  16. The method according to any one of claims 1 and 13 to 15, further comprising, after determining the first distance between the space point to be measured and the nearest neighbor three-dimensional point in the set of three-dimensional points:
    determining a third distance between the space point to be measured and a boundary of the field of view corresponding to the image;
    wherein the obtaining obstacle information of the space where the space point to be measured is located according to the first distance comprises:
    if the first distance is not greater than the third distance, obtaining the obstacle information of the space where the space point to be measured is located according to the first distance.
  17. The method according to claim 16, further comprising:
    if the first distance is greater than the third distance, obtaining the set of three-dimensional points of the next image.
  18. The method according to claim 1, wherein the plurality of feature points are obtained by performing edge detection or high-frequency extraction on the image.
  19. The method according to claim 1, wherein the obstacle information of the space where the space point to be measured is located is used at least for path planning or collision detection.
  20. The method according to claim 1, wherein the image is collected by a depth sensor mounted on a movable platform;
    the depth sensor comprises at least a laser radar or an RGBD camera; and
    the image comprises any one of the following: a grayscale image with corresponding depth information, a color image with corresponding depth information, or a depth image.
  21. An obstacle information acquisition device, comprising:
    a memory for storing executable instructions; and
    one or more processors;
    wherein, when the one or more processors execute the executable instructions, they are individually or jointly configured to:
    obtain a space point to be measured in a detection environment;
    obtain a set of three-dimensional points related to obstacles in the detection environment, wherein the set of three-dimensional points is determined according to a plurality of feature points and depth information thereof in an image collected from the detection environment;
    determine a first distance between the space point to be measured and a nearest neighbor three-dimensional point in the set of three-dimensional points; and
    obtain obstacle information of the space where the space point to be measured is located according to the first distance.
  22. The device according to claim 21, wherein a rate of change of the depth information corresponding to each of the plurality of feature points is greater than or equal to a preset change threshold; and
    a rate of change of the depth information corresponding to pixels in the image other than the plurality of feature points is less than the preset change threshold.
  23. The device according to claim 21, wherein the processor is further configured to:
    project the space point to be measured into the two-dimensional space of the image to obtain a target pixel corresponding to the space point to be measured in the image; and
    if the target pixel belongs to one of the plurality of feature points, obtain the obstacle information of the space where the space point to be measured is located according to the first distance.
  24. The device according to claim 23, wherein the processor is further configured to:
    if the target pixel does not belong to the plurality of feature points, determine a three-dimensional surface of the target pixel according to pixels in the image other than the plurality of feature points, and obtain a second distance from the space point to be measured to the three-dimensional surface; and
    obtain the obstacle information of the space where the space point to be measured is located according to the first distance and the second distance.
  25. The device according to claim 24, wherein the obstacle information of the space where the space point to be measured is located is obtained according to the smaller of the first distance and the second distance.
  26. The device according to claim 24, wherein the second distance is the closest distance from the space point to be measured to the three-dimensional surface.
  27. The device according to claim 24, wherein an origin of the three-dimensional surface is determined according to the space point to be measured corresponding to the target pixel;
    a normal vector of the three-dimensional surface is determined according to the depth information of the target pixel and depth information of neighboring pixels thereof;
    and/or, a size of the three-dimensional surface is determined according to the first distance.
  28. The device according to claim 27, wherein the processor is further configured to:
    store a correspondence between the target pixel and its neighboring pixels and the three-dimensional surface.
  29. The device according to claim 28, wherein the processor is further configured to:
    construct the three-dimensional surface of the target pixel when it is determined, according to the correspondence, that neither the target pixel nor the neighboring pixels of the target pixel corresponds to a three-dimensional surface;
    otherwise, obtain the three-dimensional surface corresponding to the target pixel or the neighboring pixels of the target pixel according to the correspondence.
  30. The device according to claim 28, wherein, after the correspondence between the target pixel and its neighboring pixels and the three-dimensional surface is stored, the processor is further configured to:
    discard the depth information of the neighboring pixels of the target pixel.
  31. The device according to any one of claims 24 to 30, wherein each of the pixels in the image other than the plurality of feature points corresponds to only one three-dimensional surface.
  32. The device according to claim 21, wherein the nearest neighbor three-dimensional point is obtained by querying, based on the space point to be measured, a k-d tree or an R-tree built from the set of three-dimensional points.
  33. The device according to any one of claims 21 to 32, wherein there are multiple images, and each of the images corresponds to a set of three-dimensional points.
  34. The device according to claim 33, wherein a relative pose between adjacent images is greater than a preset threshold, and/or an acquisition time interval between adjacent images is greater than a preset time threshold.
  35. The device according to claim 33, wherein the processor is further configured to: if the space point to be measured is within the field of view corresponding to the image, obtain the set of three-dimensional points of the image; otherwise, detect whether the space point to be measured is within the field of view corresponding to the next image.
  36. The device according to any one of claims 21 and 33 to 35, wherein, after determining the first distance between the space point to be measured and the nearest neighbor three-dimensional point in the set of three-dimensional points, the processor is further configured to:
    determine a third distance between the space point to be measured and a boundary of the field of view corresponding to the image; and, if the first distance is not greater than the third distance, obtain the obstacle information of the space where the space point to be measured is located according to the first distance.
  37. The device according to claim 36, wherein the processor is further configured to: if the first distance is greater than the third distance, obtain the set of three-dimensional points of the next image.
  38. The device according to claim 21, wherein the plurality of feature points are obtained by performing edge detection or high-frequency extraction on the image.
  39. The device according to claim 21, wherein the obstacle information of the space where the space point to be measured is located is used at least for path planning or collision detection.
  40. The device according to claim 21, wherein the image is collected by a depth sensor mounted on a movable platform;
    the depth sensor comprises at least a laser radar or an RGBD camera; and
    the image comprises any one of the following: a grayscale image with corresponding depth information, a color image with corresponding depth information, or a depth image.
  41. A movable platform, comprising:
    a fuselage;
    a power system installed in the fuselage and used to provide power for the movable platform;
    a depth sensor for collecting a depth image of a detection environment; and
    the obstacle information acquisition device according to any one of claims 21 to 40.
  42. A computer-readable storage medium, wherein the computer-readable storage medium stores executable instructions, and the executable instructions, when executed by a processor, implement the method according to any one of claims 1 to 20.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/097339 WO2022252036A1 (zh) 2021-05-31 2021-05-31 障碍物信息获取方法、装置、可移动平台及存储介质

Publications (1)

Publication Number Publication Date
WO2022252036A1 (zh)

Family

ID=84323835

Country Status (1)

Country Link
WO (1) WO2022252036A1 (zh)

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 21943413
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 21943413
    Country of ref document: EP
    Kind code of ref document: A1