WO2020024234A1 - Route navigation method, related device, and computer readable storage medium - Google Patents


Info

Publication number
WO2020024234A1
Authority
WO
WIPO (PCT)
Prior art keywords
path
obstacle information
coordinate system
point cloud
dimensional point
Prior art date
Application number
PCT/CN2018/098384
Other languages
French (fr)
Chinese (zh)
Inventor
林义闽
廉士国
李业
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司 filed Critical 深圳前海达闼云端智能科技有限公司
Priority to CN201880001094.0A priority Critical patent/CN109074668B/en
Priority to PCT/CN2018/098384 priority patent/WO2020024234A1/en
Publication of WO2020024234A1 publication Critical patent/WO2020024234A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Definitions

  • the present application relates to the field of computer vision technology, and in particular, to a path navigation method, a related device, and a computer-readable storage medium.
  • a technical problem to be solved in some embodiments of the present application is to provide a path navigation method, a related device, and a computer-readable storage medium to solve the above technical problems.
  • An embodiment of the present application provides a path navigation method, a related device, and a computer-readable storage medium, including: obtaining a depth map; constructing a three-dimensional point cloud in the world coordinate system according to the depth map, and performing road surface detection according to the three-dimensional point cloud in the world coordinate system to determine first path obstacle information; performing image learning according to the depth map to determine second path obstacle information; combining the first path obstacle information and the second path obstacle information to obtain third path obstacle information; and performing path navigation according to the third path obstacle information.
  • An embodiment of the present application further provides a path navigation device.
  • The path navigation device includes: an acquisition module, configured to acquire a depth map; a first determination module, configured to construct a three-dimensional point cloud in the world coordinate system according to the depth map and perform road surface detection according to the three-dimensional point cloud in the world coordinate system to determine first path obstacle information; a second determination module, configured to perform image learning according to the depth map to determine second path obstacle information; a merging module, configured to combine the first path obstacle information and the second path obstacle information to obtain third path obstacle information; and a navigation module, configured to perform path navigation according to the third path obstacle information.
  • An embodiment of the present application further provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the path navigation method involved in any method embodiment of the present application.
  • the embodiment of the present application further provides a computer-readable storage medium storing computer instructions, and the computer instructions are used to cause a computer to execute the path navigation method involved in any method embodiment of the present application.
  • Compared with the prior art, the embodiments of the present application combine the first path obstacle information, determined by road surface detection based on the acquired depth map, with the second path obstacle information, determined by image learning based on the acquired depth map, to determine third path obstacle information that more closely matches the actual road conditions, and perform path navigation based on this more realistic third path obstacle information, improving the accuracy of path navigation.
  • FIG. 1 is a flowchart of a path navigation method in a first embodiment of the present application
  • FIG. 2 is a relationship diagram between a pixel coordinate system and a camera coordinate system in the first embodiment of the present application
  • FIG. 3 is a relationship diagram between a camera coordinate system and a world coordinate system in the first embodiment of the present application
  • FIG. 4 is a flowchart of a path navigation method in a second embodiment of the present application.
  • FIG. 5 is a schematic block diagram of a path navigation device in a third embodiment of the present application.
  • FIG. 6 is a schematic block diagram of a path navigation device in a fourth embodiment of the present application.
  • FIG. 7 is a structural example diagram of an electronic device in a fifth embodiment of the present application.
  • the first embodiment of the present application relates to a route navigation method, which is applied to a terminal or a cloud.
  • the terminal can be a device such as a blind guide helmet, an unmanned vehicle, or an intelligent robot.
  • the cloud communicates with the terminal, providing the terminal with a map for positioning or providing positioning results directly to the terminal.
  • This embodiment uses a terminal as an example to describe the execution process of the path navigation method. For the process of executing the path navigation method in the cloud, reference may be made to the content of the embodiment of the present application.
  • step 101 a depth map is obtained.
  • the depth map includes a color map and a depth value corresponding to the color map.
  • The depth map may be obtained by the terminal through real-time shooting with a camera device during movement.
  • step 102 a three-dimensional point cloud in the world coordinate system is constructed according to the depth map, and road surface detection is performed according to the three-dimensional point cloud in the world coordinate system to determine the first path obstacle information.
  • a three-dimensional point cloud in the first camera coordinate system is constructed according to the depth map, and the pose information of the terminal is obtained.
  • A three-dimensional point cloud in the world coordinate system is constructed according to the three-dimensional point cloud in the first camera coordinate system and the pose information, and then road surface detection is performed according to the three-dimensional point cloud in the world coordinate system to determine the first path obstacle information.
  • formula (1) is used to construct a three-dimensional point cloud in the depth camera coordinate system.
  • Formula (1) is expressed as follows:
  • u and v are the position coordinates of the pixels in the depth map
  • f is the internal parameter of the depth camera
  • X d , Y d and Z d are the coordinate values of the 3D point cloud in the depth camera coordinate system
  • H is the width of the depth map, and W represents the height of the depth map
  • Z d is the depth value of the pixels in the depth map, which is a known quantity.
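The image of formula (1) is not reproduced in this text. Under the usual pinhole camera model, and consistent with the variable definitions above (u and v the pixel coordinates, f the camera internal parameter, H and W the width and height of the depth map, Z d the known depth), the back-projection would conventionally take the following form; this is a reconstruction, not the patent's verbatim formula:

```latex
X_d = \frac{\left(u - H/2\right) Z_d}{f}, \qquad
Y_d = \frac{\left(v - W/2\right) Z_d}{f}, \qquad
Z_d = Z_d
```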
  • Formula (2) is expressed as follows:
  • X d , Y d and Z d are the coordinate values of the 3D point cloud in the depth camera coordinate system
  • X c , Y c and Z c are the coordinate values of the 3D point cloud in the first camera coordinate system
  • [R, T] cd is a fixed transformation matrix between the depth camera and the first camera, and is a known quantity.
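The image of formula (2) is likewise not reproduced. Since [R, T] cd is described as a fixed transformation matrix between the depth camera and the first camera, the conversion would conventionally be the rigid transform below (a reconstruction under that assumption):

```latex
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
= [R, T]_{cd}
\begin{bmatrix} X_d \\ Y_d \\ Z_d \\ 1 \end{bmatrix}
```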
  • To obtain the pose information, feature extraction is performed according to the depth map to obtain feature corner points, and the descriptors corresponding to the feature corner points are determined. The descriptors of any two frames of the depth map are matched to obtain matching information, and a transformation matrix is obtained according to the matching information. The three-dimensional point cloud in the second camera coordinate system is determined according to the transformation matrix, and the transformation matrix is checked according to the three-dimensional point cloud in the first camera coordinate system and the three-dimensional point cloud in the second camera coordinate system.
  • The pose information P is obtained by optimizing the checked transformation matrix.
  • The three-dimensional point cloud in the depth camera coordinate system is constructed according to the depth map containing depth values; the three-dimensional point cloud in the first camera coordinate system is obtained from the three-dimensional point cloud in the depth camera coordinate system through the fixed conversion relationship between the depth camera and the color camera; and the three-dimensional point cloud in the second camera coordinate system is obtained through feature point matching according to the obtained color map. Theoretically, in the absence of matching error, the three-dimensional point cloud in the first camera coordinate system and the three-dimensional point cloud in the second camera coordinate system should be the same.
  • feature corner points are obtained by performing feature extraction on the color map in the depth map, so the three-dimensional point cloud in the second camera coordinate system is based on the feature points in the color map. Extraction and matching.
  • the 3D point cloud in the first camera coordinate system is obtained through the conversion of the 3D point cloud in the depth camera coordinate system.
  • The two expressions have the same meaning, so for any feature corner point, its coordinates in the three-dimensional point cloud of the first camera coordinate system should be the same as its coordinates in the three-dimensional point cloud of the second camera coordinate system.
  • In practice, however, the results may differ. Therefore, it is necessary to check the transformation matrix and optimize it, so that the obtained terminal pose information P is more accurate.
  • the specific way to obtain the pose information is not the focus of this application, so it will not be described in detail in the embodiments of this application.
  • a three-dimensional point cloud in the world coordinate system is constructed based on the acquired pose information P and a three-dimensional point cloud in the first camera coordinate system to implement three-dimensional reconstruction.
  • formula (3) is expressed as follows:
  • X w , Y w and Z w are the coordinate values of the three-dimensional point cloud in the world coordinate system
  • P is the acquired pose information
  • X c , Y c and Z c are the coordinate values of the three-dimensional point cloud in the first camera coordinate system.
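The image of formula (3) is not reproduced. With P the acquired pose information (a transform from the first camera coordinate system to the world coordinate system), the construction would conventionally read as follows (a reconstruction, not the verbatim formula):

```latex
\begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix}
= P
\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}
```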
  • The rectangular coordinate system o-uv in pixels, established with the upper left corner of the depth image as the origin, is called the pixel coordinate system.
  • the abscissa u and ordinate v of a pixel are the number of columns and the number of rows in its image array, respectively.
  • the origin o 1 of the image coordinate system o 1 -xy is defined as the intersection of the camera optical axis and the depth image plane, and the x-axis is parallel to the u-axis, and the y-axis is parallel to the v-axis.
  • The camera coordinate system O c -X c Y c Z c uses the camera optical center O c as the origin; the X c axis and Y c axis are parallel to the x and y axes of the image coordinate system, respectively, and the Z c axis is the optical axis of the camera, perpendicular to the image plane and intersecting it at point o 1 .
  • The origin O w of the world coordinate system O w -X w Y w Z w coincides with the origin O c of the camera coordinate system, both being the camera optical center. The horizontal rightward direction is selected as the positive direction of the X w axis
  • the vertically downward direction is the positive direction of the Y w axis
  • the direction perpendicular to the X w Y w plane and pointing straight ahead is the positive direction of the Z w axis
  • thus the world coordinate system is established.
  • Road surface detection is performed based on the three-dimensional point cloud in the world coordinate system to obtain road surface information, and the first path obstacle information is determined according to the acquired ground information.
  • pavement detection is performed according to a three-dimensional point cloud in the world coordinate system, and a ground area and a non-ground area in the three-dimensional point cloud are determined, and the obtained result is used as ground information of the road.
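The patent does not give a concrete ground-segmentation rule. As a minimal sketch, assuming the Y w axis points vertically downward (as the world coordinate system is defined in this embodiment) and a hypothetical fixed camera height above the road, ground points can be separated from non-ground points by a simple height threshold; the function name, camera height, and tolerance are illustrative assumptions, not from the patent:

```python
import numpy as np

def detect_ground(points_w, camera_height=1.2, tol=0.05):
    """Split a world-frame point cloud into ground and non-ground points.

    points_w: (N, 3) array of (Xw, Yw, Zw). With Yw positive downward and
    the origin at the camera optical center, ground points lie near
    Yw == camera_height. camera_height and tol are illustrative values.
    """
    y = points_w[:, 1]
    is_ground = np.abs(y - camera_height) < tol
    return points_w[is_ground], points_w[~is_ground]

# Toy cloud: two points on the road plane and one point 0.5 m above it.
cloud = np.array([[0.0, 1.20, 2.0],
                  [0.5, 1.21, 3.0],
                  [0.2, 0.70, 2.5]])
ground, non_ground = detect_ground(cloud)
```

The non-ground points would then contribute to the first path obstacle information; a production system would typically fit the ground plane rather than assume a fixed height.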
  • step 103 image learning is performed according to the depth map to determine the second path obstacle information.
  • An initial candidate region is generated according to the color map, and the initial candidate region is divided to obtain at least two first candidate regions; feature extraction is performed on each first candidate region, and the category of each first candidate region is determined, where the category includes obstacle and road surface; the second path obstacle information is determined according to the category of each first candidate region and the position of each first candidate region in the initial candidate region.
  • The initial candidate region is described by the position coordinates of its left vertex in the color image, and by w and h, which represent the width and height of the rectangular frame of the initial candidate region.
  • For the first candidate regions, i and j represent the serial numbers of each first candidate region in the horizontal direction and the vertical direction, respectively.
  • The category of each first candidate region, the position of each first candidate region in the initial candidate region, and the position of the initial candidate region in the color image together determine the obstacle and path information in the color image, so that each pixel in the color image corresponds to obstacle or road surface information.
  • The point in the world coordinate system corresponding to each pixel in the color image is then determined, thereby determining the second path obstacle information.
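The patent does not state how the initial candidate region is divided. A minimal sketch, assuming a uniform grid split indexed by (i, j) as described above; the function name, the 3x3 split, and all coordinate values are illustrative assumptions:

```python
def divide_candidate_region(x, y, w, h, cols=3, rows=3):
    """Divide the initial candidate region (left vertex (x, y), width w,
    height h) into a cols x rows grid of first candidate regions.

    Returns a dict mapping (i, j) -- the horizontal and vertical serial
    numbers -- to each sub-region's (x, y, w, h) in image coordinates.
    """
    sw, sh = w / cols, h / rows
    return {(i, j): (x + i * sw, y + j * sh, sw, sh)
            for i in range(cols) for j in range(rows)}

# Example: a 90x60 initial candidate region anchored at (10, 20).
regions = divide_candidate_region(10, 20, 90, 60)
```

Each sub-region would then be classified (obstacle vs. road surface) by the feature-extraction step described above.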
  • step 104 the first-path obstacle information and the second-path obstacle information are combined to obtain third-path obstacle information.
  • Combining the first path obstacle information and the second path obstacle information means that points determined to be obstacles or paths in both sets of information are retained, and points determined to be obstacles or paths in only one of the two sets are also retained, thereby obtaining the third path obstacle information. The third path obstacle information therefore contains the entire contents of both the first path obstacle information and the second path obstacle information. Combining the two aspects of localization and image learning to obtain path and obstacle information makes the result more comprehensive and avoids missing path or obstacle information.
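The merging rule described above amounts to a union: a point counts as an obstacle if either source flags it. A minimal sketch over toy boolean masks (the mask values are invented data, not from the patent):

```python
import numpy as np

# Boolean obstacle masks over the same set of world-frame points
# (True = obstacle). One mask comes from road surface detection, the
# other from image learning; the union keeps every point flagged by
# either method, matching the merging rule described above.
obstacles_from_road_detection = np.array([True, False, True, False])
obstacles_from_image_learning = np.array([True, True, False, False])

merged_obstacles = obstacles_from_road_detection | obstacles_from_image_learning
```

The same union applies symmetrically to points classified as path, so neither source's information is lost.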
  • step 105 route navigation is performed based on the third-path obstacle information.
  • The path navigation method provided in this embodiment combines the first path obstacle information, determined by road surface detection based on the acquired depth map, with the second path obstacle information, determined by image learning based on the depth map, to determine third path obstacle information that better matches the actual road conditions; navigation is then performed according to the third path obstacle information, improving the accuracy of path navigation.
  • The second embodiment of the present application relates to a route navigation method. This embodiment is a further improvement on the first embodiment; the specific improvement is that the step of performing path navigation according to the third path obstacle information is described in detail. The flow of the route navigation method in this embodiment is shown in FIG. 4.
  • Steps 201 to 208 are included, where steps 201 to 204 are substantially the same as steps 101 to 104 in the first embodiment, and are not repeated here.
  • The following mainly describes the differences, beginning with step 205.
  • step 205 a two-dimensional grid map is obtained according to the three-dimensional point cloud in the world coordinate system.
  • a three-dimensional point cloud in a world coordinate system is projected onto a road surface to form a two-dimensional grid map, and path navigation is performed based on the two-dimensional grid map.
  • step 206 the attributes of each grid in the two-dimensional grid map are determined according to the third path obstacle information.
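Steps 205 and 206 can be sketched together: project the world-frame points onto the road plane, rasterize them into a two-dimensional grid, and mark each cell's attribute from the third path obstacle information. The cell size, grid extent, function name, and example points are assumptions for illustration, not values from the patent:

```python
import numpy as np

def build_grid_map(points_w, is_obstacle, cell=0.5, size=4):
    """Project world-frame points onto the road plane (Xw, Zw) and build
    a size x size occupancy grid; a cell's attribute is 1 (obstacle) if
    any obstacle point falls in it, else 0 (free)."""
    grid = np.zeros((size, size), dtype=int)
    for (x, _, z), obs in zip(points_w, is_obstacle):
        i, j = int(z // cell), int(x // cell)  # drop the vertical axis Yw
        if 0 <= i < size and 0 <= j < size and obs:
            grid[i, j] = 1
    return grid

pts = np.array([[0.2, 1.2, 0.3],   # road surface point
                [1.2, 0.7, 1.7]])  # obstacle point
grid = build_grid_map(pts, [False, True])
```

The resulting grid is the two-dimensional grid map over which the path search of the following steps can run.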
  • step 207 the location of the destination and the current location are determined.
  • a current position may be determined by a Global Positioning System (GPS), and a position of a destination to be reached may be determined according to a received user instruction.
  • In step 208, the optimal path is determined according to the location of the destination, the current location, and the attributes of each grid.
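The patent leaves the planner for step 208 unspecified. As an illustrative stand-in (not the patent's method), a breadth-first search over the occupancy grid finds a shortest obstacle-free route from the current cell to the destination cell:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search over a 2-D occupancy grid (0 = free,
    1 = obstacle); returns the list of cells from start to goal
    using 4-connected moves, or None if no route exists."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                 # reconstruct the route
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = shortest_path(grid, (0, 0), (2, 0))
```

In practice A* with a distance heuristic, or a cost map weighting cells near obstacles, would be a common refinement of this sketch.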
  • a third embodiment of the present application relates to a route navigation device, and a specific structure thereof is shown in FIG. 5.
  • the path navigation device includes an acquisition module 301, a first determination module 302, a second determination module 303, a merge module 304, and a navigation module 305.
  • the obtaining module 301 is configured to obtain a depth map.
  • the first determining module 302 is configured to construct a three-dimensional point cloud in the world coordinate system according to the depth map, and perform road surface detection according to the three-dimensional point cloud in the world coordinate system to determine the first path obstacle information.
  • the second determining module 303 is configured to perform image learning according to the depth map to determine the second path obstacle information.
  • the merging module 304 is configured to merge the first-path obstacle information and the second-path obstacle information to obtain the third-path obstacle information.
  • the navigation module 305 is configured to perform route navigation according to the third path obstacle information.
  • this embodiment is a device example corresponding to the first embodiment, and this embodiment can be implemented in cooperation with the first embodiment. Relevant technical details mentioned in the first embodiment are still valid in this embodiment, and in order to reduce repetition, details are not repeated here. Accordingly, the related technical details mentioned in this embodiment can also be applied in the first embodiment.
  • the fourth embodiment of the present application relates to a route navigation device.
  • This embodiment is substantially the same as the third embodiment.
  • the specific structure is shown in FIG. 6.
  • the main improvement is that the fourth embodiment specifically describes the structure of the navigation module in the third embodiment.
  • the navigation module 305 includes an acquisition sub-module 3051, an attribute determination sub-module 3052, a position determination sub-module 3053, and a path determination sub-module 3054.
  • An obtaining sub-module 3051 is configured to obtain a two-dimensional grid map according to a three-dimensional point cloud in the world coordinate system.
  • the attribute determining sub-module 3052 is configured to determine an attribute of each grid in the two-dimensional grid map according to the obstacle information of the third path.
  • the position determining submodule 3053 is configured to determine the position of the destination and the current position.
  • the path determination sub-module 3054 is configured to determine an optimal path according to the destination position, the current position, and the attributes of each grid.
  • this embodiment is a device example corresponding to the second embodiment, and this embodiment can be implemented in cooperation with the second embodiment. Relevant technical details mentioned in the second embodiment are still valid in this embodiment, and in order to reduce repetition, details are not repeated here. Accordingly, the related technical details mentioned in this embodiment can also be applied in the second embodiment.
  • a fifth embodiment of the present application relates to an electronic device, and a specific structure thereof is shown in FIG. 7.
  • the memory 502 stores instructions executable by the at least one processor 501, and the instructions are executed by the at least one processor 501, so that the at least one processor 501 can execute a route navigation method.
  • The processor 501 is exemplified by a central processing unit (CPU), and the memory 502 is exemplified by a random access memory (RAM).
  • the processor 501 and the memory 502 may be connected through a bus or in other manners. In FIG. 7, the connection through the bus is taken as an example.
  • the memory 502 is a non-volatile computer-readable storage medium, and can be used to store non-volatile software programs, non-volatile computer executable programs, and modules.
  • The program implementing the path navigation method in the embodiment of the present application is stored in the memory 502.
  • the processor 501 executes various functional applications and data processing of the device by running the non-volatile software programs, instructions, and modules stored in the memory 502, that is, the above path navigation method is implemented.
  • the memory 502 may include a storage program area and a storage data area, where the storage program area may store an operating system and an application program required for at least one function; the storage data area may store a list of options and the like.
  • the memory may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
  • the memory 502 may optionally include a memory remotely set relative to the processor 501, and these remote memories may be connected to an external device through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • One or more program modules are stored in the memory 502, and when executed by one or more processors 501, execute the path navigation method in any of the above method embodiments.
  • A sixth embodiment of the present application relates to a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program that can implement a path navigation method involved in any method embodiment of the present application when the computer program is executed by a processor. .
  • The program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the method described in each embodiment of the present application.
  • The foregoing storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and other media that can store program code.

Abstract

A route navigation method, a related device, and a computer-readable storage medium, relating to the technical field of computer vision. The method is applied to a terminal or a cloud and comprises the following steps: obtaining a depth map (101); constructing a three-dimensional point cloud in a world coordinate system according to the depth map, and performing road surface detection according to the three-dimensional point cloud in the world coordinate system to determine first path obstacle information (102); performing image learning according to the depth map to determine second path obstacle information (103); combining the first path obstacle information and the second path obstacle information to obtain third path obstacle information (104); and performing route navigation according to the third path obstacle information (105). The first path obstacle information, determined by performing road surface detection on the obtained depth map, is combined with the second path obstacle information, determined by performing image learning according to the depth map, to determine third path obstacle information that better matches the actual road status; navigation is then performed according to the third path obstacle information, making the navigation result more accurate.

Description

Path navigation method, related device and computer-readable storage medium

Technical Field
The present application relates to the field of computer vision technology, and in particular, to a path navigation method, a related device, and a computer-readable storage medium.
Background Art
With the development of autonomous driving and navigation technology, current smart devices such as drones and autonomous robots can collect image information through visual depth sensors and analyze the acquired image information to achieve obstacle avoidance and navigation.
Technical Problem
During research on the prior art, the inventor found that when intelligent devices such as autonomous robots walk autonomously and achieve navigation by obtaining obstacle information, the path obstacle information is often determined from positioning results based on the collected visual information, and navigation is performed accordingly. However, due to viewing-angle limitations and defects of the device itself, this method alone may not obtain accurate path obstacle information, so navigation performed according to the obtained path obstacle information is less accurate.
Technical Solution
A technical problem to be solved in some embodiments of the present application is to provide a path navigation method, a related device, and a computer-readable storage medium to solve the above technical problem.
An embodiment of the present application provides a path navigation method, a related device, and a computer-readable storage medium, including: obtaining a depth map; constructing a three-dimensional point cloud in the world coordinate system according to the depth map, and performing road surface detection according to the three-dimensional point cloud in the world coordinate system to determine first path obstacle information; performing image learning according to the depth map to determine second path obstacle information; combining the first path obstacle information and the second path obstacle information to obtain third path obstacle information; and performing path navigation according to the third path obstacle information.
An embodiment of the present application further provides a path navigation device, including: an acquisition module, configured to acquire a depth map; a first determination module, configured to construct a three-dimensional point cloud in the world coordinate system according to the depth map and perform road surface detection according to the three-dimensional point cloud in the world coordinate system to determine first path obstacle information; a second determination module, configured to perform image learning according to the depth map to determine second path obstacle information; a merging module, configured to combine the first path obstacle information and the second path obstacle information to obtain third path obstacle information; and a navigation module, configured to perform path navigation according to the third path obstacle information.
An embodiment of the present application further provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the path navigation method involved in any method embodiment of the present application.
An embodiment of the present application further provides a computer-readable storage medium storing computer instructions, where the computer instructions are used to cause a computer to execute the path navigation method involved in any method embodiment of the present application.
Beneficial Effects
Compared with the prior art, the embodiments of the present application combine the first path obstacle information, determined by road surface detection based on the acquired depth map, with the second path obstacle information, determined by image learning based on the acquired depth map, to determine third path obstacle information that more closely matches the actual road conditions, and perform path navigation based on this more realistic third path obstacle information, improving the accuracy of path navigation.
附图说明BRIEF DESCRIPTION OF THE DRAWINGS
一个或多个实施例通过与之对应的附图中的图片进行示例性说明,这些示例性说明并不构成对实施例的限定,附图中具有相同参考数字标号的元件表示为类似的元件,除非有特别申明,附图中的图不构成比例限制。One or more embodiments are exemplified by the figures in the corresponding drawings. These exemplary descriptions do not constitute a limitation on the embodiments. Elements with the same reference numerals in the drawings denote similar elements. Unless otherwise stated, the figures in the drawings are not drawn to scale.
图1是本申请第一实施例中路径导航方法的流程图;FIG. 1 is a flowchart of a path navigation method in a first embodiment of the present application;
图2是本申请第一实施例中像素坐标系和相机坐标系的关系图;2 is a relationship diagram between a pixel coordinate system and a camera coordinate system in the first embodiment of the present application;
图3是本申请第一实施例中相机坐标系和世界坐标系的关系图;3 is a relationship diagram between a camera coordinate system and a world coordinate system in the first embodiment of the present application;
图4是本申请第二实施例中路径导航方法的流程图;4 is a flowchart of a path navigation method in a second embodiment of the present application;
图5是本申请第三实施例中路径导航装置的方框示意图;5 is a schematic block diagram of a path navigation device in a third embodiment of the present application;
图6是本申请第四实施例中路径导航装置的方框示意图;6 is a schematic block diagram of a path navigation device in a fourth embodiment of the present application;
图7是本申请第五实施例中电子设备的结构实例图。FIG. 7 is a structural example diagram of an electronic device in a fifth embodiment of the present application.
本发明的实施方式Embodiments of the invention
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请部分实施例进行进一步详细说明。应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。In order to make the purpose, technical solution, and advantages of the present application clearer, some embodiments of the present application will be further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the application, and are not used to limit the application.
本申请的第一实施例涉及一种路径导航方法,应用于终端或云端。终端可以是导盲头盔、无人驾驶车辆或智能机器人等设备。云端与终端通信连接,为终端提供用于定位的地图或直接为终端提供定位结果。本实施例以终端为例说明路径导航方法的执行过程,云端执行该路径导航方法的过程可以参考本申请实施例的内容。The first embodiment of the present application relates to a route navigation method, which is applied to a terminal or a cloud. The terminal can be a device such as a blind guide helmet, an unmanned vehicle, or an intelligent robot. The cloud communicates with the terminal, providing the terminal with a map for positioning or providing positioning results directly to the terminal. This embodiment uses a terminal as an example to describe the execution process of the path navigation method. For the process of executing the path navigation method in the cloud, reference may be made to the content of the embodiment of the present application.
该路径导航方法的具体流程如图1所示,包括以下步骤:The specific flow of the route navigation method is shown in Figure 1, and includes the following steps:
在步骤101中,获取深度图。In step 101, a depth map is obtained.
具体的说,在本实施例中,深度图包括彩色图和彩色图对应的深度值。获取深度图的方式可以是终端在移动过程中通过摄像装置实时拍摄所获得的。Specifically, in this embodiment, the depth map includes a color map and depth values corresponding to the color map. The depth map may be obtained by the terminal shooting in real time through a camera device while moving.
在步骤102中,根据深度图构建世界坐标系下的三维点云,并根据世界坐标系下的三维点云进行路面检测确定第一通路障碍物信息。In step 102, a three-dimensional point cloud in the world coordinate system is constructed according to the depth map, and road surface detection is performed according to the three-dimensional point cloud in the world coordinate system to determine the first path obstacle information.
具体的说,根据深度图构建第一相机坐标系下的三维点云,获取终端的位姿信息,根据第一相机坐标系下的三维点云和位姿信息构建世界坐标系下的三维点云,再根据世界坐标系下的三维点云进行路面检测确定第一通路障碍物信息。Specifically, a three-dimensional point cloud in the first camera coordinate system is constructed according to the depth map, the pose information of the terminal is obtained, a three-dimensional point cloud in the world coordinate system is constructed according to the three-dimensional point cloud in the first camera coordinate system and the pose information, and then road surface detection is performed according to the three-dimensional point cloud in the world coordinate system to determine the first path obstacle information.
在一个具体实现中,利用公式(1)构建深度相机坐标系下的三维点云,公式(1)表示如下:In a specific implementation, formula (1) is used to construct a three-dimensional point cloud in the depth camera coordinate system. Formula (1) is expressed as follows:
X d=(u-H/2)×Z d/f,Y d=(v-W/2)×Z d/f   (1)
其中,u和v是深度图中像素点的位置坐标,f是深度相机的内参数,X d、Y d和Z d是三维点云在深度相机坐标系下的坐标值,H表示深度图的宽,W表示深度图的高,并且Z d是深度图中像素点的深度值,为已知量。Among them, u and v are the position coordinates of a pixel in the depth map, f is an intrinsic parameter of the depth camera, X d , Y d and Z d are the coordinate values of the three-dimensional point cloud in the depth camera coordinate system, H represents the width of the depth map, W represents the height of the depth map, and Z d is the depth value of the pixel in the depth map, which is a known quantity.
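As an illustrative sketch only, the back-projection of formula (1) can be written out in code as follows. This assumes a pinhole model whose principal point sits at the image centre (H/2, W/2) with a single focal length f; the numeric values used below are placeholders rather than parameters taken from the application.

```python
def depth_to_camera_point(u, v, z_d, f, H, W):
    """Back-project pixel (u, v) with depth z_d into the depth camera frame.

    Assumes a pinhole model whose principal point sits at the image centre
    (H/2, W/2), with H the image width and W its height, matching the
    symbols used around formula (1).
    """
    x_d = (u - H / 2.0) * z_d / f
    y_d = (v - W / 2.0) * z_d / f
    return (x_d, y_d, z_d)

# A pixel at the principal point lands on the optical axis.
print(depth_to_camera_point(320, 240, 2.0, 500.0, 640, 480))  # (0.0, 0.0, 2.0)
```

For a real device, f and the principal point would come from the depth camera's calibrated intrinsic parameters.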
另外,需要根据深度相机和彩色相机的转换关系,获得三维点云在第一相机(彩色相机)坐标系下的三维点云,并且利用公式(2)构建第一相机坐标系下的三维点云,公式(2)表示如下:In addition, the three-dimensional point cloud in the first camera (color camera) coordinate system needs to be obtained according to the conversion relationship between the depth camera and the color camera, and formula (2) is used to construct the three-dimensional point cloud in the first camera coordinate system. Formula (2) is expressed as follows:
[X c Y c Z c]=[R,T] cd[X d Y d Z d]   (2) [X c Y c Z c ] = [R, T] cd [X d Y d Z d ] (2)
其中,X d、Y d和Z d是三维点云在深度相机坐标系下的坐标值,X c、Y c和Z c是三维点云在第一相机坐标系下的坐标值,[R,T] cd是深度相机和第一相机之间固定的转换矩阵,为已知量。 Among them, X d , Y d and Z d are the coordinate values of the 3D point cloud in the depth camera coordinate system, X c , Y c and Z c are the coordinate values of the 3D point cloud in the first camera coordinate system, [R, T] cd is a fixed transformation matrix between the depth camera and the first camera, and is a known quantity.
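The fixed transform [R,T] cd of formula (2) applies a rotation followed by a translation to each point. A minimal sketch, using a placeholder extrinsic (identity rotation and a hypothetical 5 cm baseline along X, not values from the application):

```python
def transform_point(R, T, p):
    """Apply the rigid transform of formula (2): p_c = R * p_d + T."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + T[i] for i in range(3))

# Placeholder extrinsic: identity rotation, hypothetical 5 cm baseline in X.
R_cd = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
T_cd = [0.05, 0.0, 0.0]
print(transform_point(R_cd, T_cd, (0.0, 0.1, 2.0)))  # (0.05, 0.1, 2.0)
```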
具体的说,在获取位姿信息时,具体为根据深度图进行特征提取获取特征角点,并确定特征角点对应的描述子,将深度图中的任意两帧图像的描述子进行匹配获得匹配信息,根据匹配信息获得转换矩阵;根据转换矩阵确定第二相机坐标系下的三维点云,根据第一相机坐标系下的三维点云和第二相机坐标系下的三维点云对转换矩阵进行校验;通过对校验后的转换矩阵进行优化获得位姿信息P。Specifically, when obtaining the pose information, feature extraction is performed according to the depth map to obtain feature corner points, and descriptors corresponding to the feature corner points are determined; the descriptors of any two frames of images in the depth map are matched to obtain matching information, and a transformation matrix is obtained according to the matching information; a three-dimensional point cloud in the second camera coordinate system is determined according to the transformation matrix, and the transformation matrix is verified according to the three-dimensional point cloud in the first camera coordinate system and the three-dimensional point cloud in the second camera coordinate system; the pose information P is obtained by optimizing the verified transformation matrix.
需要说明的是,在本申请实施方式中,深度相机坐标系下的三维点云是根据包含深度值的深度图所构建的;第一相机坐标系下的三维点云是根据深度相机和彩色相机之间的固定转换关系,对深度相机坐标系下的三维点云通过计算获得的;第二相机坐标系下的三维点云是根据获取的彩色图通过特征点匹配获得的,因此理论上来说,在不存在匹配误差的情况下,第一相机坐标系下的三维点云与第二坐标系下的三维点云应该相同。It should be noted that, in the embodiments of the present application, the three-dimensional point cloud in the depth camera coordinate system is constructed according to a depth map containing depth values; the three-dimensional point cloud in the first camera coordinate system is obtained by calculation from the three-dimensional point cloud in the depth camera coordinate system according to the fixed conversion relationship between the depth camera and the color camera; and the three-dimensional point cloud in the second camera coordinate system is obtained through feature point matching according to the acquired color map. Therefore, in theory, in the absence of matching errors, the three-dimensional point cloud in the first camera coordinate system and the three-dimensional point cloud in the second camera coordinate system should be the same.
需要说明的是,在本实施方式中,具体是对深度图中的彩色图进行特征提取获得特征角点的,因此第二相机坐标系下的三维点云是基于对彩色图中的特征点进行提取和匹配等过程所最终获得的。而第一相机坐标系下的三维点云是通过深度相机坐标系下的三维点云转换获得的。虽然获取方式不同,但两者表达含义相同,所以对于任意一个特征角点,其在第一相机坐标系下的三维点云中的坐标,应与其在第二相机坐标系下的三维点云坐标相同。但由于匹配误差的存在,实际可能并不相同,因此需要对转换矩阵进行校验,并对校验后的转换矩阵进行优化,以使获得的终端位姿信息P更准确。当然,对于获取位姿信息的具体方式并不是本申请的重点,所以本申请实施例不再对其进行赘述。It should be noted that, in this embodiment, the feature corner points are obtained by performing feature extraction on the color map in the depth map, so the three-dimensional point cloud in the second camera coordinate system is finally obtained through processes such as extracting and matching feature points in the color map, while the three-dimensional point cloud in the first camera coordinate system is obtained by converting the three-dimensional point cloud in the depth camera coordinate system. Although the acquisition methods differ, the two express the same content, so for any feature corner point, its coordinates in the three-dimensional point cloud of the first camera coordinate system should be the same as its coordinates in the three-dimensional point cloud of the second camera coordinate system. However, due to matching errors, they may actually differ, so the transformation matrix needs to be verified, and the verified transformation matrix needs to be optimized, so that the obtained terminal pose information P is more accurate. Of course, the specific way of obtaining the pose information is not the focus of the present application, so it is not described in detail in the embodiments of the present application.
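One simple way to realize the verification step described above is to compare, for each feature corner point, its coordinates in the first camera coordinate system with those in the second camera coordinate system, and accept the matching-derived transformation matrix only when the average discrepancy is small. The root-mean-square criterion and the tolerance value below are hypothetical choices, not taken from the application:

```python
import math

def transforms_consistent(cloud_first, cloud_second, tol=0.05):
    """Return True when the two reconstructions of the same feature corner
    points agree within `tol` metres on average (root-mean-square error),
    i.e. the matching-derived transformation matrix passes the check."""
    n = len(cloud_first)
    err = sum((a - b) ** 2
              for p, q in zip(cloud_first, cloud_second)
              for a, b in zip(p, q))
    return math.sqrt(err / (3 * n)) <= tol

print(transforms_consistent([(0.0, 0.0, 2.0)], [(0.0, 0.0, 2.01)]))  # True
print(transforms_consistent([(0.0, 0.0, 2.0)], [(0.0, 0.0, 2.5)]))   # False
```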
具体的说,根据获取的位姿信息P和第一相机坐标系下的三维点云构建世界坐标系下的三维点云,以实现三维重建。并且利用公式(3)构建世界坐标系下的三维点云,公式(3)表示如下:Specifically, a three-dimensional point cloud in the world coordinate system is constructed based on the acquired pose information P and a three-dimensional point cloud in the first camera coordinate system to implement three-dimensional reconstruction. And use formula (3) to construct a three-dimensional point cloud in the world coordinate system. Formula (3) is expressed as follows:
[X w Y w Z w]=P[X c Y c Z c]    (3) [X w Y w Z w ] = P [X c Y c Z c ] (3)
其中,X w、Y w和Z w是三维点云在世界坐标系下的坐标值,P是获取的位姿信息,X c、Y c和Z c是三维点云在第一相机坐标系下的坐标值。 Among them, X w , Y w and Z w are the coordinate values of the three-dimensional point cloud in the world coordinate system, P is the acquired pose information, and X c , Y c and Z c are the three-dimensional point cloud in the first camera coordinate system. Coordinate value.
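Chaining formulas (1) to (3), a single pixel can be lifted from the depth map all the way into the world coordinate system. In this sketch both the pose P and the extrinsic [R,T] cd are represented as a rotation matrix plus a translation vector, with identity placeholders used purely for illustration:

```python
def apply_rt(R, T, p):
    """p_out = R * p + T, with R a 3x3 matrix and T a length-3 vector."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + T[i] for i in range(3))

def pixel_to_world(u, v, z_d, f, H, W, rt_cd, pose):
    """Formulas (1)-(3): pixel -> depth camera -> first camera -> world."""
    p_d = ((u - H / 2.0) * z_d / f, (v - W / 2.0) * z_d / f, z_d)  # (1)
    p_c = apply_rt(*rt_cd, p_d)                                    # (2)
    return apply_rt(*pose, p_c)                                    # (3)

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
ZERO = [0.0, 0.0, 0.0]
# With identity extrinsics and identity pose the world point equals p_d.
print(pixel_to_world(320, 240, 2.0, 500.0, 640, 480, (I3, ZERO), (I3, ZERO)))
# (0.0, 0.0, 2.0)
```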
需要说明的是,在进行坐标系方向确定时,设定标准图像坐标系为o 1-xy,则相机坐标系和像素坐标系的关系如图2所示,相机坐标系和世界坐标的关系如图3所示。 It should be noted that when the coordinate system direction is determined, the standard image coordinate system is set to o 1 -xy, then the relationship between the camera coordinate system and the pixel coordinate system is shown in Figure 2, and the relationship between the camera coordinate system and the world coordinate is as follows Shown in Figure 3.
其中,如图2所示,以深度图像的左上角为原点建立的以像素为单位的直角坐标系o-uv称为像素坐标系。像素的横坐标u与纵坐标v分别是在其图像数组中所在的列数与所在行数。图像坐标系o 1-xy的原点o 1定义为相机光轴与深度图像平面的交点,且x轴与u轴平行,y轴与v轴平行。相机坐标系O c-X cY cZ c以相机光心O c为坐标原点,X c轴和Y c轴分别与图像坐标系的x轴和y轴平行,Z c轴为相机的光轴,和图像平面垂直并交于o 1点。 Among them, as shown in FIG. 2, the rectangular coordinate system o-uv in pixels, which is established with the upper left corner of the depth image as the origin, is called a pixel coordinate system. The abscissa u and ordinate v of a pixel are the number of columns and the number of rows in its image array, respectively. The origin o 1 of the image coordinate system o 1 -xy is defined as the intersection of the camera optical axis and the depth image plane, and the x-axis is parallel to the u-axis, and the y-axis is parallel to the v-axis. The camera coordinate system O c -X c Y c Z c uses the camera optical center O c as the origin, and the X c axis and Y c axis are parallel to the x and y axes of the image coordinate system, respectively, and the Z c axis is the optical axis of the camera , Perpendicular to the image plane and intersect at o 1 point.
其中,如图3所示,世界坐标系O w-X wY wZ w的原点O w与相机坐标系的原点O c重合,均为相机光心,选取水平向右为X w轴正方向,垂直向下为Y w轴正方向,垂直X wY w平面并指向正前方为Z w轴正方向,建立世界坐标系。 Among them, as shown in FIG. 3, the origin O w of the world coordinate system O w -X w Y w Z w coincides with the origin O c of the camera coordinate system, both of which are the camera optical centers, and the horizontal right is selected as the positive direction of the X w axis. , The vertical downward direction is the positive direction of the Y w axis, the vertical X w Y w plane and pointing directly forward is the positive direction of the Z w axis, and a world coordinate system is established.
具体的说,在根据世界坐标系下的三维点云进行路面检测确定第一通路障碍物信息时,是根据世界坐标系下的三维点云进行路面检测获得道路的地面信息,根据获取的地面信息确定第一通路障碍物信息。Specifically, when the road surface detection is performed according to the three-dimensional point cloud in the world coordinate system to determine the first path obstacle information, the road surface information is obtained based on the three-dimensional point cloud in the world coordinate system to obtain road surface information, and according to the acquired ground information Determine the first path obstacle information.
在一个具体实现中,根据世界坐标系下的三维点云进行路面检测,确定出三维点云中的地面区域和非地面区域,将获取的结果作为道路的地面信息。在确定出的地面区域进行障碍物检测确定出地面区域上的通路状况和障碍物状况,并用[x y z L]对世界坐标系下的三维点云进行标定,其中,x、y和z表示世界坐标系下的三维点云中任一点的坐标,L表示该点的属性,L=0表示通路,L=1表示障碍物,并将检测出的通路状况和障碍物状况的结果作为第一通路障碍物信息。In a specific implementation, road surface detection is performed according to the three-dimensional point cloud in the world coordinate system to determine the ground area and the non-ground area in the three-dimensional point cloud, and the obtained result is used as the ground information of the road. Obstacle detection is performed on the determined ground area to determine the path conditions and obstacle conditions on the ground area, and the three-dimensional point cloud in the world coordinate system is labelled with [x y z L], where x, y and z represent the coordinates of any point in the three-dimensional point cloud in the world coordinate system, L represents the attribute of that point, L=0 represents a path and L=1 represents an obstacle; the results of the detected path conditions and obstacle conditions are taken as the first path obstacle information.
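The application does not fix a particular ground-detection algorithm, so the labelling step can only be sketched under an assumption. The example below assumes the camera height above the ground is known and marks points near that height (recalling that Y w points downward) as path, everything else as obstacle; the height, tolerance and sample points are hypothetical:

```python
def label_points(points_w, camera_height, tol=0.05):
    """Attach the attribute L of the [x y z L] labelling: L=0 for a path
    point lying on the ground plane, L=1 for an obstacle point.  Y_w grows
    downward, so ground points sit near Y_w = camera_height and anything
    noticeably above the ground (smaller Y_w) is treated as an obstacle."""
    labelled = []
    for x, y, z in points_w:
        L = 0 if abs(y - camera_height) <= tol else 1
        labelled.append((x, y, z, L))
    return labelled

pts = [(0.0, 1.5, 2.0),   # on the ground, 1.5 m below the camera -> path
       (0.2, 0.4, 2.5)]   # well above the ground plane -> obstacle
print(label_points(pts, camera_height=1.5))
# [(0.0, 1.5, 2.0, 0), (0.2, 0.4, 2.5, 1)]
```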
在步骤103中,根据深度图进行图像学习确定第二通路障碍物信息。In step 103, image learning is performed according to the depth map to determine the second path obstacle information.
具体的说,根据彩色图生成初始候选区域,对初始候选区域进行划分获得至少两个第一候选区域;对每个第一候选区域进行特征提取,并确定每个第一候选区域的类别,其中,类别包括障碍物和路面;根据每个第一候选区域的类别和每个第一候选区域在初始候选区域的位置,确定第二通路障碍物信息。Specifically, an initial candidate region is generated according to the color map, and the initial candidate region is divided to obtain at least two first candidate regions; feature extraction is performed on each first candidate region, and the category of each first candidate region is determined, where the categories include obstacle and road surface; the second path obstacle information is determined according to the category of each first candidate region and the position of each first candidate region in the initial candidate region.
在一个具体实现中,在彩色图像上经过图像学习生成初始候选区域时,每一个初始候选区域为矩形框并用ROI=[a,b,w,h]表示,a,b表示矩形框初始候选区域的左顶点在彩色图像中的位置坐标,w,h表示初始候选区域矩形框的宽和高。将已经确定位置和大小的初始候选区域ROI平均分割成k×k个第一候选区域,用box=[a+i×k,b+j×k,w/k,h/k]表示每个第一候选区域,i和j分别表示每个第一候选区域在水平方向和竖直方向上的序号。对每个确定的第一候选区域box通过卷积计算进行特征提取,确定每个第一候选区域的类别,其中,类别用字母class表示。若经过特征提取确定该第一候选区域的类别class=1,则表示该第一候选区域为障碍物,若经过特征提取确定该第一候选区域的类别class=0,则表示该第一候选区域为通路。根据每个第一候选区域的类别和每个第一候选区域在初始候选区域的位置,以及初始候选区域在彩色图像中的位置,确定出彩色图像中的障碍物和通路的信息,相应的彩色图像中的每个像素相应的对应着一个障碍物和路面的信息,根据像素坐标系、相机坐标系以及世界坐标系的关系,确定出与彩色图像中像素对应的世界坐标系中对应的点,从而确定出第二通路和障碍物信息。In a specific implementation, when initial candidate regions are generated on the color image through image learning, each initial candidate region is a rectangular box denoted ROI=[a,b,w,h], where a and b are the position coordinates of the left vertex of the rectangular initial candidate region in the color image, and w and h are the width and height of the rectangular box of the initial candidate region. The initial candidate region ROI, whose position and size have been determined, is evenly divided into k×k first candidate regions, each denoted box=[a+i×k,b+j×k,w/k,h/k], where i and j are the indices of each first candidate region in the horizontal and vertical directions, respectively. Feature extraction is performed on each determined first candidate region box through convolution calculation to determine the category of each first candidate region, where the category is denoted by the letter class. If feature extraction determines that the category of a first candidate region is class=1, the first candidate region is an obstacle; if feature extraction determines that its category is class=0, the first candidate region is a path. According to the category of each first candidate region, the position of each first candidate region in the initial candidate region, and the position of the initial candidate region in the color image, the obstacle and path information in the color image is determined; each pixel in the color image correspondingly carries obstacle or road surface information, and according to the relationship among the pixel coordinate system, the camera coordinate system and the world coordinate system, the corresponding point in the world coordinate system for each pixel in the color image is determined, thereby determining the second path and obstacle information.
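The k×k division of the initial candidate region can be sketched as follows. The sub-region offsets are assumed here to be a + i·(w/k) and b + j·(h/k), i.e. a uniform tiling of the ROI, and the classification network itself is omitted:

```python
def split_roi(roi, k):
    """Split ROI = (a, b, w, h) into k*k first candidate regions.  The
    offsets are assumed to be a + i*(w/k) and b + j*(h/k): a uniform
    k-by-k tiling of the rectangle (integer cell sizes for simplicity)."""
    a, b, w, h = roi
    cw, ch = w // k, h // k
    return [(a + i * cw, b + j * ch, cw, ch)
            for j in range(k) for i in range(k)]

boxes = split_roi((100, 50, 40, 40), 2)
print(boxes)
# [(100, 50, 20, 20), (120, 50, 20, 20), (100, 70, 20, 20), (120, 70, 20, 20)]

# Each sub-region would then be classified (class = 1 obstacle, class = 0
# path) by a feature-extraction network, which is outside this sketch.
```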
在步骤104中,将第一通路障碍物信息和第二通路障碍物信息进行合并获得第三通路障碍物信息。In step 104, the first-path obstacle information and the second-path obstacle information are combined to obtain third-path obstacle information.
具体的说,将第一通路障碍物信息和第二通路障碍物信息进行合并,即将第一通路障碍物信息和第二通路障碍物信息中同时确定为障碍物和通路的点进行保留,将第一通路障碍物信息或第二通路障碍物信息中仅有一方确定为障碍物和通路点也进行保留,从而获得第三通路障碍物信息。因此在第三通路障碍物信息中既包含第一通路障碍物信息的全部内容,同时也包含第二通路障碍物信息的全部内容。结合定位和图像学习两方面来获得通路信息和障碍物信息,使得获得的结果更加全面,避免了通路信息和障碍物信息的缺失。Specifically, the first path obstacle information and the second path obstacle information are merged: the points determined as obstacles or paths in both the first path obstacle information and the second path obstacle information are retained, and the points determined as obstacles or paths in only one of the first path obstacle information and the second path obstacle information are also retained, thereby obtaining the third path obstacle information. Therefore, the third path obstacle information contains both the entire content of the first path obstacle information and the entire content of the second path obstacle information. Combining localization and image learning to obtain path information and obstacle information makes the obtained result more comprehensive and avoids missing path information and obstacle information.
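The merge described above amounts to a union of the two labellings. The sketch below additionally assumes that where the two sources disagree about the same point, the obstacle verdict (L=1) wins, since missing an obstacle is the more costly error; the application itself does not specify a tie-breaking rule:

```python
def merge_path_obstacles(first, second):
    """Union of two labellings mapping points to L (0 = path, 1 = obstacle).
    Every point present in either source is kept; where the sources
    disagree, the obstacle verdict is assumed to take precedence."""
    merged = dict(first)
    for point, L in second.items():
        merged[point] = max(merged.get(point, 0), L)
    return merged

first = {(0, 0): 0, (1, 0): 1}   # from road surface detection
second = {(1, 0): 0, (2, 0): 1}  # from image learning
print(merge_path_obstacles(first, second))
# {(0, 0): 0, (1, 0): 1, (2, 0): 1}
```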
在步骤105中,根据第三通路障碍物信息进行路径导航。In step 105, route navigation is performed based on the third-path obstacle information.
与现有技术相比,本实施方式提供的路径导航方法,根据获取的深度图进行路面检测确定的第一通路障碍物信息,结合根据深度图进行图像学习确定的第二通路障碍物信息,确定出与实际路况更加匹配的第三通路障碍物信息,并根据第三通路障碍物信息进行导航,从而提高了路径导航的准确性。Compared with the prior art, the path navigation method provided in this embodiment combines the first path obstacle information, determined by performing road surface detection based on the acquired depth map, with the second path obstacle information, determined by performing image learning based on the depth map, to determine third path obstacle information that better matches the actual road conditions, and performs navigation according to the third path obstacle information, thereby improving the accuracy of path navigation.
本申请的第二实施例涉及一种路径导航方法,本实施例在第一实施例的基础上做了进一步改进,具体改进之处为:对根据第三通路障碍物信息进行路径导航的方式进行了具体描述。本实施例中的路径导航方法的流程如图4所示。The second embodiment of the present application relates to a path navigation method. This embodiment is further improved on the basis of the first embodiment, the specific improvement being that the manner of performing path navigation according to the third path obstacle information is described in detail. The flow of the path navigation method in this embodiment is shown in FIG. 4.
具体的说,在本实施例中,包括步骤201至步骤208,其中步骤201至步骤204与第一实施方式中的步骤101至步骤104大致相同,此处不再赘述,下面主要介绍不同之处,未在本实施方式中详尽描述的技术细节,可参见第一实施例所提供的路径导航方法,此处不再赘述。Specifically, in this embodiment, steps 201 to 208 are included, where steps 201 to 204 are substantially the same as steps 101 to 104 in the first embodiment, and are not repeated here. The following mainly describes the differences. For technical details that are not described in detail in this embodiment mode, refer to the path navigation method provided in the first embodiment, and details are not described herein again.
在步骤201至步骤204之后,执行步骤205。After step 201 to step 204, step 205 is performed.
在步骤205中,根据世界坐标系下的三维点云获取二维格栅地图。In step 205, a two-dimensional grid map is obtained according to the three-dimensional point cloud in the world coordinate system.
具体的说,在本实施方式中,将世界坐标系下的三维点云投射到路面上,构成二维格栅地图,并在二维格栅地图的基础上进行路径导航。Specifically, in this embodiment, a three-dimensional point cloud in a world coordinate system is projected onto a road surface to form a two-dimensional grid map, and path navigation is performed based on the two-dimensional grid map.
在步骤206中,根据第三通路障碍物信息确定二维格栅地图中每个格子的属性。In step 206, the attributes of each grid in the two-dimensional grid map are determined according to the third path obstacle information.
具体的说,根据获得的第三通路障碍物信息确定二维格栅地图中每个格子的属性,其中,属性包括障碍物和通路。并可以用不同的颜色在二维格栅地图中对其属性进行标定。例如,用SG(p,q)=1表示在二维格栅地图中为障碍物的格子,并用白色进行标定;用SG(p,q)=0表示在二维格栅地图中为通路的格子,并用黑色进行标定。Specifically, the attribute of each grid cell in the two-dimensional grid map is determined according to the obtained third path obstacle information, where the attributes include obstacle and path, and the attributes can be marked in the two-dimensional grid map with different colors. For example, SG(p,q)=1 denotes a grid cell that is an obstacle in the two-dimensional grid map, marked in white, and SG(p,q)=0 denotes a grid cell that is a path, marked in black.
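Projecting the labelled three-dimensional points onto the road plane and assigning the grid attribute SG(p,q) can be sketched as follows; the mapping of world coordinates to grid indices via a fixed cell size is an assumed discretization, not one specified in the application:

```python
def build_grid(points_labelled, cell_size, width, height):
    """Project labelled world points (x, y, z, L) onto the ground plane:
    SG[p][q] = 1 (obstacle, shown white) if any obstacle point falls in
    cell (p, q), otherwise 0 (path, shown black)."""
    SG = [[0] * width for _ in range(height)]
    for x, _, z, L in points_labelled:
        p, q = int(z // cell_size), int(x // cell_size)
        if 0 <= p < height and 0 <= q < width:
            SG[p][q] = max(SG[p][q], L)
    return SG

pts = [(0.2, 1.5, 0.2, 0),   # path point  -> cell (0, 0)
       (1.2, 0.4, 0.2, 1)]   # obstacle    -> cell (0, 1)
print(build_grid(pts, cell_size=1.0, width=2, height=2))  # [[0, 1], [0, 0]]
```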
在步骤207中,确定目的地的位置和当前的位置。In step 207, the location of the destination and the current location are determined.
具体的说,在本实施方式中,可以通过全球定位系统(Global Positioning System,GPS)来确定当前的位置,可以根据接收的用户指令确定要到达的目的地的位置。Specifically, in this embodiment, a current position may be determined by a Global Positioning System (GPS), and a position of a destination to be reached may be determined according to a received user instruction.
在步骤208中,根据目的地的位置、当前的位置和每个格子的属性确定最优的路径。In step 208, the optimal path is determined according to the location of the destination, the current location, and the attributes of each grid.
在一个具体实现中,若在二维格栅地图中确定的目的地的位置为T(u 1,v 1),确定的当前的位置为G(u 0,v 0),则根据目的地的位置和当前的位置计算一条直线为L 1,确定L 1的直线方程为u=A 1v+B 1,其中A 1=(u 1-u 0)/(v 1-v 0),B 1=u 1-A 1×v 1。以当前的位置G(u 0,v 0)为圆心,任意给出一条直线L 2,L 2的直线方程为u=A 2v+B 2。直线L 1与直线L 2的夹角为θ,且满足tanθ=|(A 1-A 2)/(1+A 1×A 2)|,可以求得A 2,则B 2=u 0-A 2×v 0。从当前的位置G(u 0,v 0)开始,遍历直线L 2上的点,并且统计属性为SG(p,q)=0的格子的数量,当满足θ最小,并且属性为SG(p,q)=0的格子的总数量大于设定的预设阈值时,即确定θ为最优的前进方向,移动步长为属性为SG(p,q)=0的格子的总数量乘以m,其中m为每个格子对应的实际物理尺寸的大小,单位为米。 In a specific implementation, if the location of the destination determined in the two-dimensional grid map is T (u 1 , v 1 ), and the determined current location is G (u 0 , v 0 ), then according to the destination's Calculate a straight line for the position and the current position as L 1 , and determine the straight line equation of L 1 as u = A 1 v + B 1 , where A 1 = (u 1 -u 0 ) / (v 1 -v 0 ), B 1 = U 1 -A 1 × v 1 . Taking the current position G (u 0 , v 0 ) as the center of the circle, a straight line L 2 is given arbitrarily, and the straight line equation of L 2 is u = A 2 v + B 2 . The angle between the straight line L 1 and the straight line L 2 is θ, and satisfies tan θ = | (A 1 -A 2 ) / (1 + A 1 × A 2 ) |, A 2 can be obtained, and then B 2 = u 0- A 2 × v 0 . Starting from the current position G (u 0 , v 0 ), iterates through the points on the straight line L 2 and counts the number of grids with the attribute SG (p, q) = 0. When θ is satisfied and the attribute is SG (p , q) = 0 When the total number of grids is greater than the preset threshold, it is determined that θ is the optimal forward direction, and the moving step is the total number of grids with the attribute SG (p, q) = 0 multiplied by m, where m is the actual physical size corresponding to each grid, and the unit is meter.
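The direction search described above can be sketched as follows. Instead of sweeping the angle θ analytically, this illustration evaluates a small set of candidate headings around the line from the current position G to the destination T, counts the free cells (SG=0) along each heading, and keeps the admissible heading with the smallest deviation; the candidate set, free-cell threshold and step factor m are placeholder values:

```python
import math

def best_heading(SG, G, T, m=0.1, min_free=3,
                 offsets=(-0.6, -0.3, 0.0, 0.3, 0.6)):
    """Choose a forward direction on grid SG (0 = path, 1 = obstacle).
    Among candidate headings around the straight line G -> T whose run of
    free cells is at least `min_free`, keep the one deviating least from
    that line; return (heading, step) with step = free cells * m metres."""
    goal = math.atan2(T[1] - G[1], T[0] - G[0])
    best = None
    for d in offsets:
        ang = goal + d
        free, u, v = 0, float(G[0]), float(G[1])
        while True:
            u += math.cos(ang)
            v += math.sin(ang)
            p, q = int(round(u)), int(round(v))
            if not (0 <= p < len(SG) and 0 <= q < len(SG[0])) or SG[p][q]:
                break
            free += 1
        if free >= min_free and (best is None or abs(d) < abs(best[0])):
            best = (d, ang, free)
    return None if best is None else (best[1], best[2] * m)

SG = [[0] * 5 for _ in range(5)]         # a fully free 5x5 grid
print(best_heading(SG, (0, 0), (4, 0)))  # (0.0, 0.4)
```

On a fully free grid the chosen heading coincides with the straight line to the destination, and the step length equals the free-cell count times the per-cell physical size m.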
本申请的第三实施方式涉及一种路径导航装置,具体结构如图5所示。A third embodiment of the present application relates to a route navigation device, and a specific structure thereof is shown in FIG. 5.
如图5所示,路径导航装置包括获取模块301、第一确定模块302、第二确定模块303、合并模块304和导航模块305。As shown in FIG. 5, the path navigation device includes an acquisition module 301, a first determination module 302, a second determination module 303, a merge module 304, and a navigation module 305.
其中,获取模块301,用于获取深度图。The obtaining module 301 is configured to obtain a depth map.
第一确定模块302,用于根据深度图构建世界坐标系下的三维点云,并根据世界坐标系下的三维点云进行路面检测确定第一通路障碍物信息。The first determining module 302 is configured to construct a three-dimensional point cloud in the world coordinate system according to the depth map, and perform road surface detection according to the three-dimensional point cloud in the world coordinate system to determine the first path obstacle information.
第二确定模块303,用于根据深度图进行图像学习确定第二通 路障碍物信息。The second determining module 303 is configured to perform image learning according to the depth map to determine the second path obstacle information.
合并模块304,用于将第一通路障碍物信息和第二通路障碍物信息进行合并获得第三通路障碍物信息。The merging module 304 is configured to merge the first-path obstacle information and the second-path obstacle information to obtain the third-path obstacle information.
导航模块305,用于根据第三通路障碍物信息进行路径导航。The navigation module 305 is configured to perform route navigation according to the third path obstacle information.
不难发现,本实施方式为与第一实施方式相对应的装置实施例,本实施方式可与第一实施方式互相配合实施。第一实施方式中提到的相关技术细节在本实施方式中依然有效,为了减少重复,这里不再赘述。相应地,本实施方式中提到的相关技术细节也可应用在第一实施方式中。It is not difficult to find that this embodiment is a device example corresponding to the first embodiment, and this embodiment can be implemented in cooperation with the first embodiment. Relevant technical details mentioned in the first embodiment are still valid in this embodiment, and in order to reduce repetition, details are not repeated here. Accordingly, the related technical details mentioned in this embodiment can also be applied in the first embodiment.
本申请的第四实施例涉及一种路径导航装置,该实施方式与第三实施方式大致相同,具体结构如图6所示。其中,主要改进之处在于:第四实施方式对第三实施方式中的导航模块的结构进行了具体描述。The fourth embodiment of the present application relates to a route navigation device. This embodiment is substantially the same as the third embodiment. The specific structure is shown in FIG. 6. Among them, the main improvement is that the fourth embodiment specifically describes the structure of the navigation module in the third embodiment.
其中,导航模块305包括获取子模块3051、属性确定子模块3052、位置确定子模块3053和路径确定子模块3054。The navigation module 305 includes an acquisition sub-module 3051, an attribute determination sub-module 3052, a position determination sub-module 3053, and a path determination sub-module 3054.
获取子模块3051,用于根据世界坐标系下的三维点云获取二维格栅地图。An obtaining sub-module 3051 is configured to obtain a two-dimensional grid map according to a three-dimensional point cloud in the world coordinate system.
属性确定子模块3052,用于根据第三通路障碍物信息确定二维格栅地图中每个格子的属性。The attribute determining sub-module 3052 is configured to determine an attribute of each grid in the two-dimensional grid map according to the obstacle information of the third path.
位置确定子模块3053,用于确定目的地的位置和当前的位置。The position determining submodule 3053 is configured to determine the position of the destination and the current position.
路径确定子模块3054,用于根据目的地的位置、当前的位置和每个格子的属性确定最优的路径。The path determination sub-module 3054 is configured to determine an optimal path according to the destination position, the current position, and the attributes of each grid.
不难发现,本实施方式为与第二实施方式相对应的装置实施例,本实施方式可与第二实施方式互相配合实施。第二实施方式中提到的相关技术细节在本实施方式中依然有效,为了减少重复,这里不再赘述。相应地,本实施方式中提到的相关技术细节也可应用在第二实施方式中。It is not difficult to find that this embodiment is a device example corresponding to the second embodiment, and this embodiment can be implemented in cooperation with the second embodiment. Relevant technical details mentioned in the second embodiment are still valid in this embodiment, and in order to reduce repetition, details are not repeated here. Accordingly, the related technical details mentioned in this embodiment can also be applied in the second embodiment.
以上所描述的装置实施例仅仅是示意性的,并不对本申请的保护范围构成限定,在实际应用中,本领域的技术人员可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的,此处不做限制。The device embodiments described above are only schematic and do not limit the scope of protection of this application. In practical applications, those skilled in the art may select some or all of the modules to implement this embodiment according to actual needs. The purpose of the plan is not limited here.
本申请的第五实施例涉及一种电子设备,具体结构如图7所示。包括至少一个处理器501;以及,与至少一个处理器501通信连接的存储器502。其中,存储器502存储有可被至少一个处理器501执行的指令,指令被至少一个处理器501执行,以使至少一个处理器501能够执行路径导航方法。A fifth embodiment of the present application relates to an electronic device, and a specific structure thereof is shown in FIG. 7. Includes at least one processor 501; and a memory 502 communicatively connected to the at least one processor 501. The memory 502 stores instructions executable by the at least one processor 501, and the instructions are executed by the at least one processor 501, so that the at least one processor 501 can execute a route navigation method.
本实施例中,处理器501以中央处理器(Central Processing Unit,CPU)为例,存储器502以可读写存储器(Random Access Memory,RAM)为例。处理器501、存储器502可以通过总线或者其他方式连接,图7中以通过总线连接为例。存储器502作为一种非易失性计算机可读存储介质,可用于存储非易失性软件程序、非易失性计算机可执行程序以及模块,如本申请实施例中实现环境信息确定方法的程序就存储于存储器502中。处理器501通过运行存储在存储器502中的非易失性软件程序、指令以及模块,从而执行设备的各种功能应用以及数据处理,即实现上述路径导航方法。In this embodiment, the processor 501 takes a central processing unit (Central Processing Unit, CPU) as an example, and the memory 502 takes a readable and writable memory (Random Access Memory, RAM) as an example. The processor 501 and the memory 502 may be connected through a bus or in other manners; in FIG. 7, connection through a bus is taken as an example. As a non-volatile computer-readable storage medium, the memory 502 can be used to store non-volatile software programs, non-volatile computer-executable programs and modules; for example, the program implementing the environment information determination method in the embodiments of the present application is stored in the memory 502. The processor 501 executes the various functional applications and data processing of the device by running the non-volatile software programs, instructions and modules stored in the memory 502, that is, implements the above path navigation method.
存储器502可以包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需要的应用程序;存储数据区可存储选项列表等。此外,存储器可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。在一些实施例中,存储器502可选包括相对于处理器501远程设置的存储器,这些远程存储器可以通过网络连接至外接设备。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。The memory 502 may include a storage program area and a storage data area, where the storage program area may store an operating system and an application program required for at least one function; the storage data area may store a list of options and the like. In addition, the memory may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 502 may optionally include a memory remotely set relative to the processor 501, and these remote memories may be connected to an external device through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
一个或者多个程序模块存储在存储器502中,当被一个或者多个处理器501执行时,执行上述任意方法实施例中的路径导航方法。One or more program modules are stored in the memory 502, and when executed by one or more processors 501, execute the path navigation method in any of the above method embodiments.
上述产品可执行本申请实施例所提供的方法,具备执行方法相应的功能模块和有益效果,未在本实施例中详尽描述的技术细节,可参见本申请实施例所提供的方法。The above products can execute the method provided in the embodiment of the present application, and have the corresponding functional modules and beneficial effects of the execution method. For technical details not described in this embodiment, refer to the method provided in the embodiment of the present application.
本申请的第八实施例涉及一种计算机可读存储介质,该计算机可读存储介质中存储有计算机程序,该计算机程序被处理器执行时能够实现本申请任意方法实施例中涉及的路径导航方法。An eighth embodiment of the present application relates to a computer-readable storage medium. The computer-readable storage medium stores a computer program that can implement a path navigation method involved in any method embodiment of the present application when the computer program is executed by a processor. .
本领域技术人员可以理解,实现上述实施例方法中的全部或部 分步骤是可以通过程序来指令相关的硬件来完成,该程序存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。Those skilled in the art can understand that all or part of the steps in the method of the above embodiments can be completed by a program instructing related hardware. The program is stored in a storage medium and includes several instructions for making a device (which can be (Single chip microcomputer, chip, etc.) or a processor (processor) executes all or part of the steps of the method described in each embodiment of the present application. The foregoing storage media include: U disk, mobile hard disk, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), magnetic disks or optical disks and other media that can store program codes .
本领域的普通技术人员可以理解,上述各实施例是实现本申请的具体实施例,而在实际应用中,可以在形式上和细节上对其作各种改变,而不偏离本申请的精神和范围。Those of ordinary skill in the art can understand that the foregoing embodiments are specific embodiments for implementing the present application, and in practical applications, various changes can be made in form and details without departing from the spirit and range.
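The path navigation method summarized above fuses two independently derived sets of path obstacle information — one from road surface detection on the world-coordinate point cloud, one from image learning on the color map — into a third set used for navigation. A minimal sketch of that fusion step, assuming each source has been rasterized to a boolean obstacle mask over the same grid (the function and variable names here are illustrative, not taken from the patent):

```python
import numpy as np

def merge_obstacle_info(first_path: np.ndarray, second_path: np.ndarray) -> np.ndarray:
    """Element-wise union of two boolean obstacle masks over the same grid.

    A cell in the merged (third) mask is an obstacle if either source
    flagged it as one.
    """
    assert first_path.shape == second_path.shape
    return np.logical_or(first_path, second_path)

# Toy 2x2 grids standing in for rasterized detector outputs.
first = np.array([[0, 1], [0, 0]], dtype=bool)   # from road surface detection
second = np.array([[0, 0], [1, 0]], dtype=bool)  # from image learning
third = merge_obstacle_info(first, second)
print(third.astype(int))
```

Taking the element-wise union is one conservative merge policy — a cell is treated as blocked if either detector reports it; the embodiments do not prescribe a particular merge rule.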

Claims (10)

  1. A path navigation method, comprising:
    acquiring a depth map;
    constructing a three-dimensional point cloud in a world coordinate system according to the depth map, and performing road surface detection according to the three-dimensional point cloud in the world coordinate system to determine first path obstacle information;
    performing image learning according to the depth map to determine second path obstacle information;
    combining the first path obstacle information and the second path obstacle information to obtain third path obstacle information;
    performing path navigation according to the third path obstacle information.
  2. The path navigation method according to claim 1, wherein the depth map comprises a color map and depth values corresponding to the color map.
  3. The path navigation method according to claim 2, wherein constructing the three-dimensional point cloud in the world coordinate system according to the depth map and performing road surface detection according to the three-dimensional point cloud in the world coordinate system to determine the first path obstacle information specifically comprises:
    constructing a three-dimensional point cloud in a first camera coordinate system according to the depth map;
    acquiring pose information;
    constructing the three-dimensional point cloud in the world coordinate system according to the three-dimensional point cloud in the first camera coordinate system and the pose information;
    performing road surface detection according to the three-dimensional point cloud in the world coordinate system to determine the first path obstacle information.
  4. The path navigation method according to claim 3, wherein acquiring the pose information specifically comprises:
    performing feature extraction according to the depth map to obtain feature corner points, and determining descriptors corresponding to the feature corner points;
    matching the descriptors of any two frames of images in the depth map to obtain matching information;
    obtaining a transformation matrix according to the matching information;
    determining a three-dimensional point cloud in a second camera coordinate system according to the transformation matrix;
    verifying the transformation matrix according to the three-dimensional point cloud in the first camera coordinate system and the three-dimensional point cloud in the second camera coordinate system;
    obtaining the pose information by optimizing the verified transformation matrix.
  5. The path navigation method according to any one of claims 3 to 4, wherein performing road surface detection according to the three-dimensional point cloud in the world coordinate system to determine the first path obstacle information specifically comprises:
    performing ground detection according to the three-dimensional point cloud in the world coordinate system to obtain ground information of a road;
    determining the first path obstacle information according to the obtained ground information.
  6. The path navigation method according to claim 5, wherein performing image learning according to the depth map to determine the second path obstacle information specifically comprises:
    generating an initial candidate region according to the color map;
    dividing the initial candidate region to obtain at least two first candidate regions;
    performing feature extraction on each first candidate region, and determining a category of each first candidate region, wherein the categories include obstacle and road surface;
    determining the second path obstacle information according to the category of each first candidate region and the position of each first candidate region within the initial candidate region.
  7. The path navigation method according to claim 6, wherein performing path navigation according to the third path obstacle information specifically comprises:
    obtaining a two-dimensional grid map according to the three-dimensional point cloud in the world coordinate system;
    determining an attribute of each cell in the two-dimensional grid map according to the third path obstacle information, wherein the attributes include obstacle and path;
    determining the location of a destination and the current location;
    determining an optimal path according to the location of the destination, the current location, and the attribute of each cell.
  8. A path navigation apparatus, comprising:
    an acquisition module, configured to acquire a depth map;
    a first determination module, configured to construct a three-dimensional point cloud in a world coordinate system according to the depth map, and to perform road surface detection according to the three-dimensional point cloud in the world coordinate system to determine first path obstacle information;
    a second determination module, configured to perform image learning according to the depth map to determine second path obstacle information;
    a merging module, configured to combine the first path obstacle information and the second path obstacle information to obtain third path obstacle information;
    a navigation module, configured to perform path navigation according to the third path obstacle information.
  9. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the path navigation method according to any one of claims 1 to 7.
  10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the path navigation method according to any one of claims 1 to 7.
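The planning step of claim 7 — a two-dimensional grid map whose cells carry an obstacle-or-path attribute, a destination location, a current location, and an optimal path determined over them — can be sketched as follows. Breadth-first search is used purely as a simple stand-in planner (the claim names no specific algorithm), and all identifiers are illustrative:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Return a shortest obstacle-free 4-connected path from start to goal, or None.

    grid[r][c] is True for an obstacle cell and False for a path cell,
    mirroring the per-cell attribute of the two-dimensional grid map.
    """
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Reconstruct the path by walking parent links back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and not grid[nr][nc] and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # destination unreachable

# Toy grid: the middle column is blocked except at the bottom row.
grid = [[False, True,  False],
        [False, True,  False],
        [False, False, False]]
route = plan_path(grid, (0, 0), (0, 2))
print(route)  # [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)]
```

BFS returns a shortest path under uniform cell costs; a deployed planner might instead weight cells or use A*, which the claim language ("an optimal path") leaves open.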
PCT/CN2018/098384 2018-08-02 2018-08-02 Route navigation method, related device, and computer readable storage medium WO2020024234A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880001094.0A CN109074668B (en) 2018-08-02 2018-08-02 Path navigation method, related device and computer readable storage medium
PCT/CN2018/098384 WO2020024234A1 (en) 2018-08-02 2018-08-02 Route navigation method, related device, and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/098384 WO2020024234A1 (en) 2018-08-02 2018-08-02 Route navigation method, related device, and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2020024234A1 true WO2020024234A1 (en) 2020-02-06

Family

ID=64789223

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/098384 WO2020024234A1 (en) 2018-08-02 2018-08-02 Route navigation method, related device, and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN109074668B (en)
WO (1) WO2020024234A1 (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111400537A (en) * 2020-03-19 2020-07-10 北京百度网讯科技有限公司 Road element information acquisition method and device and electronic equipment
CN111753768A (en) * 2020-06-29 2020-10-09 北京百度网讯科技有限公司 Method, apparatus, electronic device and storage medium for representing shape of obstacle
CN112101209A (en) * 2020-09-15 2020-12-18 北京百度网讯科技有限公司 Method and apparatus for determining a world coordinate point cloud for roadside computing devices
CN112269386A (en) * 2020-10-28 2021-01-26 深圳拓邦股份有限公司 Method and device for repositioning symmetric environment and robot
CN112486172A (en) * 2020-11-30 2021-03-12 深圳市普渡科技有限公司 Road edge detection method and robot
CN112556727A (en) * 2020-12-15 2021-03-26 国科易讯(北京)科技有限公司 AR navigation positioning error calibration method, device, equipment and storage medium
CN112710318A (en) * 2020-12-14 2021-04-27 深圳市商汤科技有限公司 Map generation method, route planning method, electronic device, and storage medium
CN112927338A (en) * 2021-03-30 2021-06-08 深圳裹动智驾科技有限公司 Simulation method based on three-dimensional contour, storage medium and computer equipment
CN113313765A (en) * 2021-05-28 2021-08-27 上海高仙自动化科技发展有限公司 Positioning method, positioning device, electronic equipment and storage medium
CN113566807A (en) * 2020-04-28 2021-10-29 富华科精密工业(深圳)有限公司 Automatic navigation method, navigation device and storage medium
CN113610910A (en) * 2021-07-30 2021-11-05 合肥科大智能机器人技术有限公司 Obstacle avoidance method for mobile robot
CN113806455A (en) * 2020-06-12 2021-12-17 纳恩博(北京)科技有限公司 Map construction method, map construction equipment and storage medium
CN114474065A (en) * 2022-03-04 2022-05-13 美智纵横科技有限责任公司 Robot control method and device, robot and storage medium
CN114911223A (en) * 2021-02-09 2022-08-16 北京盈迪曼德科技有限公司 Robot navigation method and device, robot and storage medium

Families Citing this family (17)

Publication number Priority date Publication date Assignee Title
CN111735433B (en) * 2019-03-25 2022-05-20 杭州海康威视数字技术股份有限公司 Method and device for establishing two-dimensional map
CN110046563B (en) * 2019-04-02 2022-06-10 中国能源建设集团江苏省电力设计院有限公司 Power transmission line section elevation correction method based on unmanned aerial vehicle point cloud
CN110244710B (en) * 2019-05-16 2022-05-31 达闼机器人股份有限公司 Automatic tracing method, device, storage medium and electronic equipment
CN110262487B (en) * 2019-06-12 2022-09-23 达闼机器人股份有限公司 Obstacle detection method, terminal and computer readable storage medium
CN112179360B (en) * 2019-06-14 2022-12-02 北京京东乾石科技有限公司 Map generation method, apparatus, system and medium
CN110276801B (en) * 2019-06-24 2021-09-28 达闼机器人有限公司 Object positioning method and device and storage medium
CN110378246A (en) * 2019-06-26 2019-10-25 深圳前海达闼云端智能科技有限公司 Ground detection method, apparatus, computer readable storage medium and electronic equipment
CN110738183B (en) * 2019-10-21 2022-12-06 阿波罗智能技术(北京)有限公司 Road side camera obstacle detection method and device
CN111179413B (en) * 2019-12-19 2023-10-31 中建科技有限公司深圳分公司 Three-dimensional reconstruction method, device, terminal equipment and readable storage medium
CN111337948A (en) * 2020-02-25 2020-06-26 达闼科技成都有限公司 Obstacle detection method, radar data generation device, and storage medium
CN111381594A (en) * 2020-03-09 2020-07-07 兰剑智能科技股份有限公司 AGV space obstacle avoidance method and system based on 3D vision
CN113008247B (en) * 2020-03-24 2022-10-28 青岛慧拓智能机器有限公司 High-precision map construction method and device for mining area
CN112327851B (en) * 2020-11-09 2023-08-22 达闼机器人股份有限公司 Map calibration method and system based on point cloud, robot and cloud platform
CN112785704A (en) * 2021-01-13 2021-05-11 北京小马慧行科技有限公司 Semantic map construction method, computer readable storage medium and processor
CN113310493B (en) * 2021-05-28 2022-08-05 广东工业大学 Unmanned aerial vehicle real-time navigation method based on event trigger mechanism
CN113932825B (en) * 2021-09-30 2024-04-09 深圳市普渡科技有限公司 Robot navigation path width acquisition system, method, robot and storage medium
CN116755441B (en) * 2023-06-19 2024-03-12 国广顺能(上海)能源科技有限公司 Obstacle avoidance method, device, equipment and medium of mobile robot

Citations (5)

Publication number Priority date Publication date Assignee Title
US20140107842A1 (en) * 2012-10-16 2014-04-17 Electronics And Telecommunications Research Institute Human-tracking method and robot apparatus for performing the same
CN105652873A (en) * 2016-03-04 2016-06-08 中山大学 Mobile robot obstacle avoidance method based on Kinect
CN107491070A (en) * 2017-08-31 2017-12-19 成都通甲优博科技有限责任公司 A kind of method for planning path for mobile robot and device
CN107636680A (en) * 2016-12-30 2018-01-26 深圳前海达闼云端智能科技有限公司 A kind of obstacle detection method and device
CN108280401A (en) * 2017-12-27 2018-07-13 达闼科技(北京)有限公司 A kind of pavement detection method, apparatus, cloud server and computer program product

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
JP4406381B2 (en) * 2004-07-13 2010-01-27 株式会社東芝 Obstacle detection apparatus and method
AU2007347733B2 (en) * 2006-10-06 2012-05-03 Irobot Corporation Robotic vehicle
US8184159B2 (en) * 2007-03-26 2012-05-22 Trw Automotive U.S. Llc Forward looking sensor system
CN101419667B (en) * 2008-12-15 2010-12-08 东软集团股份有限公司 Method and apparatus for identifying obstacle in image
TWI610569B (en) * 2016-03-18 2018-01-01 晶睿通訊股份有限公司 Method for transmitting and displaying object tracking information and system thereof
CN106780576B (en) * 2016-11-23 2020-03-17 北京航空航天大学 RGBD data stream-oriented camera pose estimation method
CN107169418A (en) * 2017-04-18 2017-09-15 海信集团有限公司 A kind of obstacle detection method and device
CN107909009B (en) * 2017-10-27 2021-09-17 北京中科慧眼科技有限公司 Obstacle detection method and device based on road surface learning


Also Published As

Publication number Publication date
CN109074668B (en) 2022-05-20
CN109074668A (en) 2018-12-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18928859

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27.05.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18928859

Country of ref document: EP

Kind code of ref document: A1