WO2020006764A1 - Path detection method, related device, and computer readable storage medium - Google Patents

Path detection method, related device, and computer readable storage medium

Info

Publication number
WO2020006764A1
Authority
WO
WIPO (PCT)
Prior art keywords
road
ground
point cloud
early warning
dimensional point
Prior art date
Application number
PCT/CN2018/094905
Other languages
French (fr)
Chinese (zh)
Inventor
李业
廉士国
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司 filed Critical 深圳前海达闼云端智能科技有限公司
Priority to PCT/CN2018/094905 priority Critical patent/WO2020006764A1/en
Priority to CN201880001082.8A priority patent/CN109074490B/en
Publication of WO2020006764A1 publication Critical patent/WO2020006764A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/136: Segmentation; Edge detection involving thresholding
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras

Definitions

  • the present application relates to the field of computer vision technology, and in particular, to a path detection method, a related device, and a computer-readable storage medium.
  • Path detection is an extremely important technology in the fields of blindness guidance, robotics, and autonomous driving. Existing path detection is based on vision to detect the road that a vehicle or robot is traveling to improve the safety of the vehicle or robot.
  • the inventors found, in studying the prior art, that traditional path detection usually sets a two-dimensional detection area in the image and determines whether the area is passable by judging whether there is an obstacle in it.
  • however, because of the perspective projection of the image, if the image detection area is a rectangular detection area, it corresponds in the real world to a fan-shaped area in front of the camera, so objects on both sides of the passable width are treated as obstacles when the acquired image is detected, producing false alarms.
  • if the image detection area is a trapezoidal detection area, the influence of the fan-shaped area can be corrected to a certain extent in the real world, but setting the size and position of the trapezoidal detection area is extremely inconvenient, and the settings must change with the lens focal length, camera attitude, and so on.
  • in addition, traditional path detection usually only gives a rough warning about the obstacle directly ahead and cannot provide more detailed road condition information, which is inconvenient for subsequent decision-making and degrades the user experience.
  • a technical problem to be solved in some embodiments of the present application is to provide a path detection method, a related device, and a computer-readable storage medium to solve the above technical problems.
  • An embodiment of the present application provides a path detection method, including: establishing a three-dimensional point cloud of a road according to acquired image information; detecting ground information of the road in the three-dimensional point cloud; determining an early warning area according to the ground information of the road; and detecting the traffic condition in the early warning area and determining a path detection result of the road according to the traffic condition.
  • An embodiment of the present application further provides a path detection device, including: an establishment module, a first detection module, a determination module, and a second detection module;
  • the establishment module is used to establish a three-dimensional point cloud of a road according to the acquired image information, and the first detection module is used to detect ground information of the road in the three-dimensional point cloud;
  • the determination module is used to determine an early warning area according to the ground information of the road, and the second detection module is used to detect the traffic condition in the early warning area and determine the path detection result of the road according to the traffic condition.
  • An embodiment of the present application further provides an electronic device, including: at least one processor; and,
  • Memory in communication with at least one processor; wherein,
  • the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the foregoing path detection method.
  • An embodiment of the present application further provides a computer-readable storage medium storing a computer program, and the computer program is executed by a processor to implement the foregoing path detection method.
  • Compared with the prior art, establishing a three-dimensional point cloud of the road and determining the early warning area based on it avoids the inaccurate road detection caused by an unreasonably set early warning area in a two-dimensional image; detecting the ground information of the road in the three-dimensional point cloud, and then determining the early warning area and the traffic condition within it, ensures the reliability of the path detection result, makes the method applicable to path detection in complex environments, and improves the user experience, while the three-dimensional point cloud can provide more road condition information.
  • FIG. 1 is a flowchart of a path detection method in a first embodiment of the present application
  • FIG. 2 is a relationship diagram between a pixel coordinate system and a camera coordinate system in the first embodiment of the present application
  • FIG. 3 is a relationship diagram between a camera coordinate system and world coordinates in the first embodiment of the present application.
  • FIG. 4 is a flowchart of a path detection method in a second embodiment of the present application.
  • FIG. 5 is a flowchart of another path detection method in the second embodiment of the present application.
  • FIG. 6 is a structural diagram of a path detection device in a third embodiment of the present application.
  • FIG. 7 is a structural diagram of an electronic device in a fourth embodiment of the present application.
  • the first embodiment of the present application relates to a path detection method. As shown in FIG. 1, the method includes the following steps:
  • Step 101 Establish a three-dimensional point cloud of the road according to the acquired image information.
  • the three-dimensional point cloud is a huge collection of points on the surface of the target object.
  • the establishment of the three-dimensional point cloud can determine road information in space.
  • in practice, multiple methods can be used to establish the three-dimensional point cloud, and this embodiment does not limit the specific implementation used to establish it.
  • a three-dimensional point cloud can be established by using a depth map.
  • the specific process of establishing the three-dimensional point cloud includes: obtaining a depth map and the attitude angle of the camera, where the attitude angle is the attitude angle of the camera when the depth map was taken; calculating a scale normalization factor according to the depth map and a preset normalization scale; calculating the scale-normalized depth map according to the depth map and the scale normalization factor; constructing a three-dimensional point cloud in the camera coordinate system based on the scale-normalized depth map; and constructing a three-dimensional point cloud in the world coordinate system according to the three-dimensional point cloud in the camera coordinate system and the attitude angle of the camera.
  • it should be noted that there are many methods for obtaining a depth map, including but not limited to lidar depth imaging, computer stereo vision, the coordinate measuring machine method, the moire fringe method, and the structured light method; the method for obtaining the depth map is not limited here.
  • specifically, the scale normalization factor is calculated using formula 1, expressed as follows: S = Norm / max(W, H)   (1)
  • where S represents the scale normalization factor, W represents the width of the depth map, H represents the height of the depth map, and Norm represents a preset normalization scale. Norm is a preset known quantity; in specific applications, if depth maps of consecutive frames need to be processed to establish the three-dimensional point cloud, the normalization scale used for each frame remains unchanged.
  • the scale-normalized depth map is calculated using formula 2, expressed as follows: W_S = S · W,  H_S = S · H   (2)
  • where W_S represents the width and H_S the height of the depth map after scale normalization; the scale-normalized depth map can be determined from W_S and H_S.
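  • As an illustrative aid (not part of the original disclosure), the following sketch shows one way formulas 1 and 2 could be applied to obtain the scale normalization factor and a scale-normalized depth map; the function name, the default Norm value, and the nearest-neighbour resampling are assumptions made for illustration.

```python
import numpy as np

def scale_normalize(depth_map: np.ndarray, norm: float = 640.0):
    """Apply formulas 1 and 2: S = Norm / max(W, H), then resize to (W_S, H_S).

    Only the image size is rescaled; the depth values themselves are unchanged.
    """
    h, w = depth_map.shape
    s = norm / max(w, h)                                  # formula 1
    w_s, h_s = int(round(s * w)), int(round(s * h))       # formula 2
    # nearest-neighbour resampling of the depth map to the normalized size
    rows = (np.arange(h_s) / s).astype(int).clip(0, h - 1)
    cols = (np.arange(w_s) / s).astype(int).clip(0, w - 1)
    return s, depth_map[np.ix_(rows, cols)]
```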
  • specifically, a three-dimensional point cloud in the camera coordinate system, denoted P(X_c, Y_c, Z_c), is constructed from the scale-normalized depth map according to formula 3. Since each pixel in the depth map contains the distance from the camera to the photographed object, the pixel coordinates in the depth map are converted into coordinates in the camera coordinate system using formula 3, and together they form the three-dimensional point cloud in the camera coordinate system. Formula 3 (the pinhole projection relation) is expressed as follows: Z_c · [u, v, 1]^T = M_{3×4} · [X_c, Y_c, Z_c, 1]^T   (3)
  • where u and v are the coordinate values of any point P in the scale-normalized depth map, X_c, Y_c and Z_c are the coordinate values of the point P in the camera coordinate system, M_{3×4} is the intrinsic (internal parameter) matrix of the camera, and Z_c is the depth value of the point P in the scale-normalized depth map, that is, the distance from the camera to the photographed object, which is a known quantity.
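  • To make the back-projection of formula 3 concrete, here is a minimal sketch (an assumption made for illustration, using a simple 3x3 intrinsic parameterization with focal lengths and principal point rather than the 3x4 matrix form) that converts every pixel of a normalized depth map into a point in the camera coordinate system.

```python
import numpy as np

def depth_to_camera_points(depth: np.ndarray, fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project a depth map into an N x 3 point cloud in the camera frame.

    Implements the pinhole relation behind formula 3: X_c = (u - cx) * Z_c / fx,
    Y_c = (v - cy) * Z_c / fy, with Z_c taken directly from the depth map.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates (u: column, v: row)
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                        # drop pixels without a valid depth
```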
  • according to the coordinate transformation between the camera coordinate system and the world coordinate system, the three-dimensional point cloud P(X_c, Y_c, Z_c) in the camera coordinate system is converted into the three-dimensional point cloud P(X_w, Y_w, Z_w) in the world coordinate system; the conversion relation is expressed by formula 4, in which the world coordinates are obtained by applying to the camera coordinates the rotation determined by the camera attitude angles: [X_w, Y_w, Z_w]^T = R(α, β, γ) · [X_c, Y_c, Z_c]^T   (4)
  • where X_w, Y_w and Z_w are the coordinate values of any point P of the three-dimensional point cloud in the world coordinate system, X_c, Y_c and Z_c are the coordinate values of the point P in the camera coordinate system, α is the angle between the camera and the X_w axis of the world coordinate system, β is the angle between the camera and the Y_w axis of the world coordinate system, and γ is the angle between the camera and the Z_w axis of the world coordinate system.
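  • The exact rotation matrix of formula 4 is not reproduced in this text, so the following sketch only illustrates the general idea: the camera-frame cloud is rotated into the world frame by a rotation built from the camera attitude angles. The composition order (X, then Y, then Z) is an assumption made for illustration, not the patent's definition.

```python
import numpy as np

def camera_to_world(points_cam: np.ndarray, alpha: float, beta: float, gamma: float) -> np.ndarray:
    """Rotate an N x 3 camera-frame point cloud into the world frame (formula 4, schematic).

    alpha, beta, gamma are the camera attitude angles in radians about the
    X_w, Y_w and Z_w axes; the composition order is an illustrative assumption.
    """
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    r = rz @ ry @ rx                     # R(alpha, beta, gamma)
    return points_cam @ r.T              # world-frame coordinates
```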
  • as shown in FIG. 2, a rectangular coordinate system o-uv in units of pixels, established with the upper left corner of the depth map as the origin, is used as the pixel coordinate system; the abscissa u represents the pixel column in which a pixel is located, and the ordinate v represents the pixel row in which the pixel is located.
  • the intersection of the camera optical axis and the depth map plane is defined as the origin o_1 of the image coordinate system o_1-xy, with the x-axis parallel to the u-axis and the y-axis parallel to the v-axis.
  • the camera coordinate system O_c-X_cY_cZ_c takes the camera optical center O_c as the origin; the X_c and Y_c axes are parallel to the x and y axes of the image coordinate system, respectively, and the Z_c axis is the optical axis of the camera, perpendicular to the image plane and intersecting it at the point o_1.
  • as shown in FIG. 3, the origin O_w of the world coordinate system O_w-X_wY_wZ_w coincides with the origin O_c of the camera coordinate system, both being the camera optical center; the horizontal direction to the right is taken as the positive X_w direction, vertically downward as the positive Y_w direction, and the direction perpendicular to the X_wO_wY_w plane pointing straight ahead as the positive Z_w direction, thereby establishing the world coordinate system.
  • it is worth mentioning that constructing the three-dimensional point cloud from image information is not limited to construction from a depth map; for example, laser point cloud data can also be obtained directly by a lidar and the three-dimensional point cloud constructed from that point cloud data.
  • construction from a depth map is an exemplary description, and this embodiment does not limit the specific method used to construct the three-dimensional point cloud.
  • Step 102 Detect the ground information of the road in the three-dimensional point cloud.
  • the specific implementation process of this step is: detecting the ground height in the three-dimensional point cloud; determining obstacle information on the ground height; and using the ground height and the obstacle information as ground information.
  • determining the ground height and detecting obstacle information on the ground height in a three-dimensional point cloud makes it possible to determine the specific conditions of the road and provide a possibility for ensuring the accuracy of the detection results.
  • pothole detection can be performed on the road to determine the pothole status of the road, and the pothole status of the road is taken as a part of the ground information.
  • in practice, other road-related detections may also be performed, such as detecting the type of road, including tactile paving (blind tracks), sidewalks, pedestrian zebra crossings, and so on; for example, if the method is applied to a guide cane for the blind, it is necessary to determine the specific category of road on which the cane user is currently walking. Therefore, more ground information can be detected as needed in practice, which is not limited here.
  • Step 103 Determine an early warning area according to the ground information of the road.
  • the space coordinates of the early warning area are constructed; the height position of the early warning area in space coordinates is determined according to the ground height; the width and distance of the early warning area in space coordinates are determined according to the obstacle information, thereby determining the early warning area.
  • the determination of the early warning area is based on the three-dimensional point cloud in the world coordinate system.
  • specifically, taking the Y_wO_wZ_w plane of the world coordinate system as the plane of symmetry, a three-dimensional spatial region is constructed along the positive direction of the Z_w axis; this three-dimensional region is the early warning area, represented as vBox(x, y, z), where x, y and z respectively denote the width, height and distance of the early warning area.
  • the distance of the early warning area is determined by the speed of the user, its width and height are determined according to the build of the user, and the early warning area is not smaller than the minimum space through which the user can pass.
  • for example, for a user who is 1.5 m tall, weighs 90 kg and moves slowly, the early warning area may be set to vBox(100, 170, 150) in cm; for another user who is 1.9 m tall, weighs 55 kg and moves quickly, the early warning area may be set to vBox(60, 210, 250) in cm.
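  • Purely as an illustration of the sizing rule described above (the container class, helper name, margins and speed rule are assumptions, not values taken from the patent), the early warning box could be parameterized as follows:

```python
from dataclasses import dataclass

@dataclass
class VBox:
    """Early warning area vBox(x, y, z): width, height and distance, in centimetres."""
    x: float  # width
    y: float  # height
    z: float  # distance ahead

def choose_vbox(user_height_cm: float, user_width_cm: float, speed_m_s: float) -> VBox:
    # Width and height come from the user's build plus a clearance margin;
    # the look-ahead distance grows with the user's speed (illustrative rule only).
    return VBox(x=user_width_cm + 30.0,
                y=user_height_cm + 20.0,
                z=max(150.0, speed_m_s * 100.0 * 1.5))
```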
  • it is worth mentioning that, in this embodiment, path detection can be performed on image information of consecutive frames; if depth maps are used, the coordinate system conversion has to be performed for each frame of the depth map, but the coordinate values of the early warning area can remain unchanged, and it is only necessary to determine the position of the early warning area according to the three-dimensional point cloud corresponding to each frame.
  • in addition, the road is not always a flat surface, and the ground information includes the ground height.
  • the position of the early warning area needs to be adjusted according to the ground height.
  • roads can be divided into uphill sections, downhill sections and flat sections with different ground heights. The location of the early warning area is adjusted based on the ground height in the ground information, and the traffic conditions of the road need to be detected based on the obstacle information and the size of the early warning area.
  • in a specific implementation, the real-time ground height is determined by an adaptive ground detection method, or from the point cloud data indicating road information in the three-dimensional point cloud, and the position of the early warning area is dynamically adjusted according to changes in the ground height; after the adjustment, the early warning area is guaranteed to lie directly above the ground, which not only effectively avoids interference from the ground but also ensures that low obstacles are not missed.
  • specifically, the adjusted early warning area can be determined by formula 5, expressed as follows: vBox_1 = vBox(x, H + y + σ, z)   (5)
  • where H represents the real-time ground height, σ represents a dynamic adjustment margin, vBox_1 represents the adjusted early warning area, and x, y and z respectively represent the width, height and distance of the early warning area.
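  • A minimal sketch of the dynamic adjustment of formula 5, assuming a simple VBox container like the one sketched earlier and a real-time ground height estimated per frame (the default margin value is an assumption):

```python
def adjust_vbox(vbox: VBox, ground_height_cm: float, margin_cm: float = 5.0) -> VBox:
    """Formula 5: vBox_1 = vBox(x, H + y + sigma, z).

    The height extent of the warning area is shifted so that the box sits
    directly above the current ground height H, plus a dynamic margin sigma.
    """
    return VBox(x=vbox.x, y=ground_height_cm + vbox.y + margin_cm, z=vbox.z)
```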
  • Step 104 Detect the traffic conditions in the early warning area, and determine the road detection result according to the traffic conditions.
  • specifically, the traffic condition in the early warning area can be detected based on the obstacle information of the road, and the traffic condition can indicate information such as the position of the passable area and its width and height.
  • after the traffic condition in the early warning area is detected, it is judged whether the traffic condition indicates that the road is passable; if so, the passing route planned in the early warning area is determined and the path detection result of the road is determined according to the passing route; otherwise, the path detection result of the road is determined to be impassable.
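  • The passability decision can be pictured with the sketch below. The data layout is an assumption made for illustration: an N x 3 world-frame point cloud (X = width, Y = height, Z = distance) from which ground points have already been removed, with the warning volume given as explicit bounds in the same units.

```python
import numpy as np

def is_passable(points_world: np.ndarray,
                x_half_width: float, y_low: float, y_high: float, z_max: float,
                min_obstacle_points: int = 50) -> bool:
    """Return True if the early warning volume in front of the user is free of obstacles.

    The box spans |X| <= x_half_width, y_low <= Y <= y_high, 0 < Z <= z_max;
    a small point count is tolerated as sensor noise (threshold is illustrative).
    """
    x, y, z = points_world[:, 0], points_world[:, 1], points_world[:, 2]
    inside = (np.abs(x) <= x_half_width) & (y >= y_low) & (y <= y_high) & (z > 0) & (z <= z_max)
    return int(inside.sum()) < min_obstacle_points
```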
  • early warning information is sent according to the path detection result.
  • the early warning information includes, but is not limited to, obstacle information, traffic conditions, and ground height.
  • the early warning information may be one or a combination of sound information, image information, or light information.
  • for example, if the method is applied to an intelligent robot, the path detection result, once obtained, can be converted into machine language so that the intelligent robot can determine the path condition in the current frame.
  • it should be noted that the path detection result may also be presented to the user in other forms, or prompted to the user after appropriate information conversion, which is not specifically limited here.
  • the second embodiment of the present application relates to a path detection method.
  • This embodiment is substantially the same as the first embodiment.
  • the main difference is that this embodiment specifically describes the specific implementation of determining the ground height in a three-dimensional point cloud.
  • the specific implementation of the path detection method is shown in FIG. 4 and includes the following steps:
  • step 201 is the same as step 101 in the first embodiment, and steps 209 and 210 are the same as step 103 and step 104 in the first embodiment, respectively, and the same steps are not described herein again.
  • Step 202 Perform automatic threshold segmentation in the height direction on the three-dimensional point cloud to obtain a first ground area.
  • Step 203 Perform fixed threshold segmentation on the distance direction of the three-dimensional point cloud to obtain a second ground area.
  • Step 204 Determine an initial ground area according to the first ground area and the second ground area.
  • Step 205 Calculate the inclination of the initial ground area.
  • Step 206 Determine the ground height of the ground area according to the inclination.
  • Step 207 Determine obstacle information at the ground height.
  • Step 208 Use the ground height and obstacle information as ground information.
  • Steps 207 and 208 have been described in the first embodiment, and are not repeated here.
  • specifically, in this embodiment the segmentation is performed on the three-dimensional point cloud in the world coordinate system in the height direction and in the horizontal (distance) direction. In the world coordinate system, Y_w is defined as the set of coordinates in the height direction, Z_w as the set of coordinates in the distance direction, and X_w as the set of coordinates in the width direction; step 202 therefore performs segmentation along the direction indicated by the Y_w axis, and step 203 performs segmentation along the direction indicated by the Z_w axis.
  • in a specific implementation, the first ground area is obtained as follows: a first segmentation threshold is calculated from a height-direction region of interest (ROI) selected by the user in the three-dimensional point cloud in the world coordinate system; a second segmentation threshold is calculated from the ground height of the depth map of the frame preceding the current depth map; and automatic threshold segmentation in the height direction is performed on the three-dimensional point cloud in the world coordinate system according to the first and second segmentation thresholds. The segmentation can be expressed by formula 6: Y_mask = a · ThdY_roi + b · ThdY_pre   (6)
  • where Y_mask represents the first ground area, ThdY_roi is the first segmentation threshold, ThdY_pre is the second segmentation threshold, and a and b are weighting coefficients whose specific values are set by the user according to actual needs.
  • it should be noted that when obtaining the first and second segmentation thresholds, automatic threshold segmentation algorithms such as the mean method, the Gaussian method or the Otsu method can be used; since these algorithms are mature, they are not described in detail in this embodiment.
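  • As an illustration of the height-direction segmentation (not part of the original disclosure): the sketch below uses a simple mean over the ROI heights as ThdY_roi, one of the threshold choices named above, and combines it with the previous frame's ground height as in formula 6. The exact rule for turning the combined threshold into a mask is not spelled out in the text, so the tolerance band here is an assumption.

```python
import numpy as np

def height_threshold_segmentation(points_world: np.ndarray,
                                  roi_heights: np.ndarray,
                                  prev_ground_height: float,
                                  a: float = 0.5, b: float = 0.5) -> np.ndarray:
    """Formula 6 sketch: combine an ROI-based threshold with the previous frame's ground height.

    Returns a boolean mask Y_mask over the N x 3 world-frame points whose height
    coordinate Y_w lies near the combined threshold.
    """
    thd_roi = float(np.mean(roi_heights))        # first segmentation threshold (mean method)
    thd_pre = float(prev_ground_height)          # second segmentation threshold
    thd = a * thd_roi + b * thd_pre              # weighted combination, formula 6
    y = points_world[:, 1]                       # height coordinates Y_w
    tolerance = 0.05 * max(abs(thd), 1.0)        # illustrative band around the threshold
    return np.abs(y - thd) <= tolerance
```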
  • specifically, the second ground area is obtained as follows: the minimum coordinate value in the distance direction selected by the user in the three-dimensional point cloud in the world coordinate system is taken as the third segmentation threshold and denoted Z_min; the maximum coordinate value in the distance direction selected by the user is taken as the fourth segmentation threshold and denoted Z_max; and according to the third and fourth segmentation thresholds, fixed threshold segmentation in the distance direction is performed on the three-dimensional point cloud in the world coordinate system to obtain the second ground area, denoted Z_mask, that is, the region obtained by retaining points whose Z_w value lies between Z_min and Z_max is the second ground area.
  • after the first ground area and the second ground area are obtained, the initial ground area can be determined from them; specifically, it can be determined through formula 7, which takes the intersection of the two areas: Gnd_0 = Y_mask ∩ Z_mask   (7)
  • where Gnd_0 is the initial ground area, Y_mask is the first ground area, and Z_mask is the second ground area.
  • the physical meaning of the formula is that the first ground area determines the suspected ground region in the height direction, and the second ground area further limits its extent in the distance direction, thereby ensuring the accuracy of the finally obtained initial ground area.
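  • A compact sketch of the distance-direction segmentation and the combination of formula 7 (the intersection is expressed as a logical AND of the two boolean masks; function and variable names are assumptions):

```python
import numpy as np

def initial_ground_mask(points_world: np.ndarray,
                        y_mask: np.ndarray,
                        z_min: float, z_max: float) -> np.ndarray:
    """Formula 7 sketch: Gnd_0 = Y_mask AND Z_mask.

    Z_mask keeps points whose distance coordinate Z_w lies between the fixed
    thresholds Z_min and Z_max; the initial ground area is the intersection
    of the height-direction and distance-direction masks.
    """
    z = points_world[:, 2]
    z_mask = (z >= z_min) & (z <= z_max)   # fixed threshold segmentation in the distance direction
    return y_mask & z_mask                 # Gnd_0
```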
  • to calculate the inclination of the initial ground area, the plane in which the initial ground area lies must be determined first, that is, plane fitting is performed on the initial ground area; the angle between the fitted plane and the coordinate axes is the inclination of the initial ground area.
  • specifically, the points of the initial ground area are used as known quantities, and the least squares method or the random sample consensus (RANSAC) algorithm is used to fit a plane to the initial ground area, obtaining the general equation of the plane in which it lies.
  • it should be noted that other fitting methods may also be used to perform the plane fitting on the initial ground area; the specific plane fitting method is not limited in the embodiments of the present application.
  • after the general equation of the plane is obtained, the normal vector of the plane can be determined, and the inclination angle of the initial ground area can then be determined from the normal vector; specifically, the angle between the normal vector of the fitted plane and the vertically upward unit vector is taken as the tilt angle θ of the initial ground with respect to the horizontal, and θ is calculated by formula 8: θ = arccos( (n · v) / (|n| · |v|) )   (8), where n is the normal vector of the fitted plane and v is the vertically upward unit vector.
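  • A least-squares version of the plane fitting and the tilt-angle computation of formula 8 could look like the sketch below (RANSAC, mentioned as an alternative above, is not shown; the plane parameterization and the ground-height read-out from the intercept are illustrative choices, not the patent's definition):

```python
import numpy as np

def ground_plane_and_tilt(ground_points: np.ndarray):
    """Least-squares plane fit of the initial ground area and formula 8 tilt angle.

    ground_points: N x 3 world-frame points (X_w = width, Y_w = height, Z_w = distance).
    The plane is fitted as Y = a*X + b*Z + c, so its normal is n = (a, -1, b);
    theta = arccos(|n . v| / |n|) with v the unit vector along the vertical (Y_w) axis.
    Returns (a, b, c, theta); c approximates the ground height directly below the camera.
    """
    x, y, z = ground_points[:, 0], ground_points[:, 1], ground_points[:, 2]
    design = np.column_stack([x, z, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(design, y, rcond=None)   # least squares plane fit
    normal = np.array([a, -1.0, b])
    vertical = np.array([0.0, 1.0, 0.0])                      # Y_w axis (vertical direction)
    cos_theta = abs(normal @ vertical) / np.linalg.norm(normal)
    theta = float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))   # formula 8
    return float(a), float(b), float(c), theta
```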
  • the ground height of the ground area can be determined according to the inclination of the initial ground area, which can be the ground height of a point on the ground area, or the real-time ground height.
  • the path detection method in this embodiment is based on the detection of image data of continuous frames.
  • the specific implementation process of path detection of image data of continuous frames is shown in FIG. 5 and includes the following implementation steps:
  • Step 301 Initialize the system.
  • Step 302 Establish a three-dimensional point cloud of the road according to the acquired image information.
  • Step 303 Detect the ground information of the road in the three-dimensional point cloud.
  • Step 304 Determine an early warning area according to the ground information of the road.
  • Step 305 Detect the traffic condition in the early warning area, and determine whether it is passable. If yes, go to step 306; otherwise, go to step 307.
  • Step 306 Determine the passing route planned in the early warning area and determine the detection result of the road according to the passing route.
  • Step 307 Determine that the detection result of the road is impassable.
  • Step 308 Send a warning message according to the detection result of the path.
  • Step 309 Determine whether there is image information of the next frame. If yes, go to step 302; otherwise, end the path detection.
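  • The per-frame flow of FIG. 5 can be summarized by a loop of the following shape. This is a schematic sketch only; the helper functions are placeholders standing in for the steps described above, not an API defined by the patent.

```python
def run_path_detection(frames, warn):
    """Schematic main loop over consecutive frames (steps 301-309).

    `frames` yields per-frame image information (e.g. depth maps) and `warn` is a
    callback that emits the warning information; both are assumed interfaces.
    """
    for image_info in frames:                                 # step 309 loops while frames remain
        cloud = build_point_cloud(image_info)                 # step 302
        ground = detect_ground_info(cloud)                    # step 303
        area = determine_warning_area(ground)                 # step 304
        passable, route = check_traffic(cloud, area, ground)  # step 305
        if passable:
            result = {"passable": True, "route": route}       # step 306
        else:
            result = {"passable": False}                      # step 307
        warn(result, ground)                                  # step 308
```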
  • the third embodiment of the present application relates to a path detection device.
  • the specific structure is shown in FIG. 6 and includes: an establishing module 601, a first detection module 602, a determination module 603, and a second detection module 604.
  • the establishing module 601 is configured to establish a three-dimensional point cloud of a road according to the acquired image information.
  • the first detection module 602 is configured to detect ground information of a road in a three-dimensional point cloud.
  • a determining module 603 is configured to determine an early warning area according to ground information of a road.
  • the second detection module 604 is configured to detect a traffic condition in the early warning area, and determine a road detection result of the road according to the traffic condition.
  • this embodiment is a device embodiment corresponding to the first or second embodiment, and this embodiment can be implemented in cooperation with the first or second embodiment.
  • the related technical details mentioned in the first or second embodiment are still valid in this embodiment. In order to reduce repetition, details are not repeated here.
  • the fourth embodiment of the present application relates to an electronic device.
  • the specific structure is shown in FIG. 7 and includes: at least one processor 701; and a memory 702 communicatively connected to the at least one processor 701, where the memory 702 stores instructions executable by the at least one processor 701, and the instructions are executed by the at least one processor 701, so that the at least one processor 701 can execute the path detection method in the first or second embodiment.
  • the memory and the processor are connected in a bus manner.
  • the bus may include any number of interconnected buses and bridges.
  • the bus links one or more processors and various circuits of the memory together.
  • the bus can also link various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art, so they are not described further herein.
  • the processor is responsible for managing the bus and general processing, and can also provide various functions, including timing, peripheral interfaces, voltage regulation, power management, and other control functions.
  • the memory can be used to store data used by the processor when performing operations.
  • a fifth embodiment of the present application relates to a computer-readable storage medium.
  • the computer-readable storage medium stores computer instructions that enable a computer to execute the path detection method involved in the first or second method embodiment.
  • the methods in the above embodiments may be implemented by a program instructing the related hardware.
  • the program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The present application relates to the technical field of computer vision, and relates in particular to a path detection method, a related device, and a computer readable storage medium. The path detection method comprises: establishing a three-dimensional point cloud of a road according to acquired image information; detecting ground information of the road in the three-dimensional point cloud; determining an early warning area according to the ground information of the road; detecting traffic conditions in the early warning area, and determining a path detection result of the road according to the traffic conditions. The described method may be applied to the detection of paths in complex environments, improves user experience, and is simultaneously capable of providing more road condition information by means of a three-dimensional point cloud.

Description

Path detection method, related device, and computer-readable storage medium

Technical Field

The present application relates to the field of computer vision technology, and in particular, to a path detection method, a related device, and a computer-readable storage medium.

Background Art

Path detection is an extremely important technology in fields such as guidance for the blind, robotics, and autonomous driving. Existing path detection is based on vision and detects the road on which a vehicle or robot is traveling, so as to improve the safety of the vehicle or robot.

Technical Problem

In studying the prior art, the inventors found that traditional path detection usually sets a two-dimensional detection area in the image and determines whether the area is passable by judging whether there is an obstacle in it. However, because of the perspective projection of the image, if the image detection area is rectangular, it corresponds in the real world to a fan-shaped area in front of the camera, so objects on both sides of the passable width are treated as obstacles when the acquired image is detected, producing false alarms; if the image detection area is trapezoidal, the influence of the fan-shaped area can be corrected to a certain extent in the real world, but setting the size and position of the trapezoidal detection area is extremely inconvenient and must change with the lens focal length, camera attitude, and so on. In addition, traditional path detection usually only gives a rough warning about the obstacle directly ahead and cannot provide more detailed road condition information, which is inconvenient for subsequent decision-making and degrades the user experience.
Technical Solution

A technical problem to be solved by some embodiments of the present application is to provide a path detection method, a related device, and a computer-readable storage medium, so as to solve the above technical problems.

An embodiment of the present application provides a path detection method, including:

establishing a three-dimensional point cloud of a road according to acquired image information;

detecting ground information of the road in the three-dimensional point cloud;

determining an early warning area according to the ground information of the road;

detecting the traffic condition in the early warning area, and determining a path detection result of the road according to the traffic condition.

An embodiment of the present application further provides a path detection device, including: an establishment module, a first detection module, a determination module, and a second detection module;

the establishment module is used to establish a three-dimensional point cloud of a road according to the acquired image information;

the first detection module is used to detect ground information of the road in the three-dimensional point cloud;

the determination module is used to determine an early warning area according to the ground information of the road;

the second detection module is used to detect the traffic condition in the early warning area, and determine the path detection result of the road according to the traffic condition.

An embodiment of the present application further provides an electronic device, including: at least one processor; and

a memory communicatively connected to the at least one processor; wherein

the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the foregoing path detection method.

An embodiment of the present application further provides a computer-readable storage medium storing a computer program, and the computer program, when executed by a processor, implements the foregoing path detection method.
Beneficial Effects

Compared with the prior art, establishing a three-dimensional point cloud of the road and determining the early warning area based on the three-dimensional point cloud avoids the inaccurate road detection caused by an unreasonably set early warning area in a two-dimensional image. Detecting the ground information of the road in the three-dimensional point cloud, and then determining the early warning area and the traffic condition within it, ensures the reliability of the path detection result, makes the method applicable to path detection in complex environments, and improves the user experience; at the same time, the three-dimensional point cloud can provide more road condition information.
Brief Description of the Drawings

One or more embodiments are exemplarily described with reference to the corresponding figures in the accompanying drawings, and these exemplary descriptions do not constitute a limitation on the embodiments. Elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures are not drawn to scale.

FIG. 1 is a flowchart of a path detection method in a first embodiment of the present application;

FIG. 2 is a diagram of the relationship between the pixel coordinate system and the camera coordinate system in the first embodiment of the present application;

FIG. 3 is a diagram of the relationship between the camera coordinate system and the world coordinate system in the first embodiment of the present application;

FIG. 4 is a flowchart of a path detection method in a second embodiment of the present application;

FIG. 5 is a flowchart of another path detection method in the second embodiment of the present application;

FIG. 6 is a structural diagram of a path detection device in a third embodiment of the present application;

FIG. 7 is a structural diagram of an electronic device in a fourth embodiment of the present application.
Embodiments of the Invention

In order to make the objectives, technical solutions, and advantages of the present application clearer, some embodiments of the present application are described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the application and are not used to limit it. A person of ordinary skill in the art can understand that many technical details are provided in the embodiments of the present application to help the reader better understand the application; however, the technical solutions claimed in this application can be implemented even without these technical details and with various changes and modifications based on the following embodiments.
The first embodiment of the present application relates to a path detection method. As shown in FIG. 1, the method includes the following steps:

Step 101: Establish a three-dimensional point cloud of the road according to the acquired image information.

Specifically, the three-dimensional point cloud is a huge collection of points on the surface of the target object, and establishing it makes it possible to determine the road information in space. In practice, multiple methods can be used to establish the three-dimensional point cloud; this embodiment does not limit the specific implementation used.

In a specific implementation, the three-dimensional point cloud can be established from a depth map. The specific process includes: obtaining a depth map and the attitude angle of the camera, where the attitude angle is the attitude angle of the camera when the depth map was taken; calculating a scale normalization factor according to the depth map and a preset normalization scale; calculating the scale-normalized depth map according to the depth map and the scale normalization factor; constructing a three-dimensional point cloud in the camera coordinate system based on the scale-normalized depth map; and constructing a three-dimensional point cloud in the world coordinate system according to the three-dimensional point cloud in the camera coordinate system and the attitude angle of the camera.

It should be noted that there are many methods for obtaining a depth map, including but not limited to lidar depth imaging, computer stereo vision, the coordinate measuring machine method, the moire fringe method, and the structured light method; the method for obtaining the depth map is not limited here.

Specifically, the scale normalization factor is calculated using formula 1, expressed as follows:

S = Norm / max(W, H)   (1)

where S represents the scale normalization factor, W represents the width of the depth map, H represents the height of the depth map, and Norm represents a preset normalization scale. Norm is a preset known quantity; in specific applications, if depth maps of consecutive frames need to be processed to establish the three-dimensional point cloud, the normalization scale used for each frame remains unchanged.
The scale-normalized depth map is calculated using formula 2, expressed as follows:

W_S = S · W,  H_S = S · H   (2)

where W_S represents the width and H_S the height of the scale-normalized depth map; the scale-normalized depth map can be determined from W_S and H_S.
Specifically, a three-dimensional point cloud in the camera coordinate system, denoted P(X_c, Y_c, Z_c), is constructed from the scale-normalized depth map according to formula 3. Since each pixel in the depth map contains the distance from the camera to the photographed object, the pixel coordinates in the depth map are converted into coordinates in the camera coordinate system using formula 3, forming the three-dimensional point cloud in the camera coordinate system. Formula 3 (the pinhole projection relation) is expressed as follows:

Z_c · [u, v, 1]^T = M_{3×4} · [X_c, Y_c, Z_c, 1]^T   (3)

where u and v are the coordinate values of any point P in the scale-normalized depth map, X_c, Y_c and Z_c are the coordinate values of the point P in the camera coordinate system, M_{3×4} is the intrinsic (internal parameter) matrix of the camera, and Z_c is the depth value of the point P in the scale-normalized depth map, that is, the distance from the camera to the photographed object, which is a known quantity.
According to the coordinate transformation between the camera coordinate system and the world coordinate system, the three-dimensional point cloud P(X_c, Y_c, Z_c) in the camera coordinate system is converted into the three-dimensional point cloud P(X_w, Y_w, Z_w) in the world coordinate system; the conversion relation is expressed by formula 4, in which the world coordinates are obtained by applying to the camera coordinates the rotation determined by the camera attitude angles:

[X_w, Y_w, Z_w]^T = R(α, β, γ) · [X_c, Y_c, Z_c]^T   (4)

where X_w, Y_w and Z_w are the coordinate values of any point P of the three-dimensional point cloud in the world coordinate system, X_c, Y_c and Z_c are the coordinate values of the point P in the camera coordinate system, α is the angle between the camera and the X_w axis of the world coordinate system, β is the angle between the camera and the Y_w axis of the world coordinate system, and γ is the angle between the camera and the Z_w axis of the world coordinate system.
Assuming the image coordinate system is o_1-xy, the relationship between the camera coordinate system O_c-X_cY_cZ_c and the pixel coordinate system o-uv is shown in FIG. 2, and the relationship between the camera coordinate system O_c-X_cY_cZ_c and the world coordinate system O_w-X_wY_wZ_w is shown in FIG. 3.

As shown in FIG. 2, a rectangular coordinate system o-uv in units of pixels, established with the upper left corner of the depth map as the origin, is used as the pixel coordinate system; the abscissa u represents the pixel column in which a pixel is located, and the ordinate v represents the pixel row in which the pixel is located. The intersection of the camera optical axis and the depth map plane is defined as the origin o_1 of the image coordinate system o_1-xy, with the x-axis parallel to the u-axis and the y-axis parallel to the v-axis. The camera coordinate system O_c-X_cY_cZ_c takes the camera optical center O_c as the origin; the X_c and Y_c axes are parallel to the x and y axes of the image coordinate system, respectively, and the Z_c axis is the optical axis of the camera, perpendicular to the image plane and intersecting it at the point o_1.

As shown in FIG. 3, the origin O_w of the world coordinate system O_w-X_wY_wZ_w coincides with the origin O_c of the camera coordinate system, both being the camera optical center. The horizontal direction to the right is taken as the positive X_w direction, vertically downward as the positive Y_w direction, and the direction perpendicular to the X_wO_wY_w plane pointing straight ahead as the positive Z_w direction, thereby establishing the world coordinate system.
It is worth mentioning that constructing the three-dimensional point cloud from image information is not limited to construction from a depth map; for example, laser point cloud data can also be obtained directly by a lidar and the three-dimensional point cloud constructed from that point cloud data. Construction from a depth map is an exemplary description, and this embodiment does not limit the specific method used to construct the three-dimensional point cloud.

Step 102: Detect the ground information of the road in the three-dimensional point cloud.

In a specific implementation, this step is carried out as follows: detecting the ground height in the three-dimensional point cloud; determining obstacle information at the ground height; and using the ground height and the obstacle information as the ground information.

It should be noted that determining the ground height in the three-dimensional point cloud and detecting the obstacle information at that height make it possible to determine the specific condition of the road and help ensure the accuracy of the detection result.

It is worth mentioning that, after the ground height is determined, pothole detection can also be performed on the road to determine its pothole condition, which is then taken as part of the ground information. In practice, other road-related detections can also be performed, such as detecting the type of road, including tactile paving (blind tracks), sidewalks, pedestrian zebra crossings, and so on; for example, if the method is applied to a guide cane for the blind, it is necessary to determine the specific category of road on which the cane user is currently walking. Therefore, more ground information can be detected as needed in practice, which is not limited here.
Step 103: Determine an early warning area according to the ground information of the road.

Specifically, the spatial coordinates of the early warning area are constructed; the height position of the early warning area in those coordinates is determined according to the ground height; and the width and distance of the early warning area in those coordinates are determined according to the obstacle information, thereby determining the early warning area.

It should be noted that the early warning area is determined based on the three-dimensional point cloud in the world coordinate system. Specifically, taking the Y_wO_wZ_w plane of the world coordinate system as the plane of symmetry, a three-dimensional spatial region is constructed along the positive direction of the Z_w axis; this three-dimensional region is the early warning area, represented as vBox(x, y, z), where x, y and z respectively denote the width, height and distance of the early warning area. The distance of the early warning area is determined by the speed of the user, its width and height are determined according to the build of the user, and the early warning area is not smaller than the minimum space through which the user can pass. For example, for a user who is 1.5 m tall, weighs 90 kg and moves slowly, the early warning area may be set to vBox(100, 170, 150) in cm; for another user who is 1.9 m tall, weighs 55 kg and moves quickly, the early warning area may be set to vBox(60, 210, 250) in cm.

It is worth mentioning that in this embodiment path detection can be performed on image information of consecutive frames; if depth maps are used, the coordinate system conversion has to be performed for each frame of the depth map, but the coordinate values of the early warning area can remain unchanged, and it is only necessary to determine the position of the early warning area according to the three-dimensional point cloud corresponding to each frame.

In addition, the road is not always flat, and the ground information includes the ground height, so after the early warning area is determined its position also needs to be adjusted according to the ground height. For example, according to ground height, roads can be divided into uphill sections, downhill sections and flat sections; the position of the early warning area is adjusted based on the ground height in the ground information, and the traffic condition of the road is detected based on the obstacle information and the size of the early warning area.

In a specific implementation, the real-time ground height is determined by an adaptive ground detection method, or from the point cloud data indicating road information in the three-dimensional point cloud, and the position of the early warning area is dynamically adjusted according to changes in the ground height. After the adjustment, the early warning area is guaranteed to lie directly above the ground, which not only effectively avoids interference from the ground but also ensures that low obstacles are not missed. Specifically, the adjusted early warning area can be determined by formula 5, expressed as follows:

vBox_1 = vBox(x, H + y + σ, z)   (5)

where H represents the real-time ground height, σ represents a dynamic adjustment margin, vBox_1 represents the adjusted early warning area, and x, y and z respectively represent the width, height and distance of the early warning area.
Step 104: Detect the traffic condition in the early warning area, and determine the path detection result of the road according to the traffic condition.

Specifically, the traffic condition in the early warning area can be detected based on the obstacle information of the road; the traffic condition can indicate information such as the position of the passable area and its width and height. After the traffic condition in the early warning area is detected, it is judged whether the traffic condition indicates that the road is passable; if so, the passing route planned in the early warning area is determined and the path detection result of the road is determined according to the passing route; otherwise, the path detection result of the road is determined to be impassable.

Specifically, after the path detection result is determined, warning information is issued according to the path detection result; the warning information includes, but is not limited to, obstacle information, the traffic condition, and the ground height.

The warning information may be one or a combination of sound information, image information, or light information. For example, if the method is applied to an intelligent robot, the path detection result, once obtained, can be converted into machine language so that the intelligent robot can determine the path condition in the current frame.

It should be noted that the path detection result may also be presented to the user in other forms, or prompted to the user after appropriate information conversion, which is not specifically limited here.

Compared with the prior art, establishing a three-dimensional point cloud of the road and determining the early warning area based on it avoids the inaccurate road detection caused by an unreasonably set early warning area in a two-dimensional image. Detecting the ground information of the road in the three-dimensional point cloud, and then determining the early warning area and the traffic condition within it, ensures the reliability of the path detection result, makes the method applicable to path detection in complex environments, and improves the user experience; at the same time, the three-dimensional point cloud can provide more road condition information.
The second embodiment of the present application relates to a path detection method. This embodiment is substantially the same as the first embodiment; the main difference is that this embodiment describes in detail how the ground height is determined in the three-dimensional point cloud. The specific implementation of the path detection method is shown in FIG. 4 and includes the following steps:

It should be noted that step 201 is the same as step 101 in the first embodiment, and steps 209 and 210 are the same as steps 103 and 104 in the first embodiment, respectively; the same steps are not described again here.

Step 202: Perform automatic threshold segmentation in the height direction on the three-dimensional point cloud to obtain a first ground area.

Step 203: Perform fixed threshold segmentation in the distance direction on the three-dimensional point cloud to obtain a second ground area.

Step 204: Determine an initial ground area according to the first ground area and the second ground area.

Step 205: Calculate the inclination of the initial ground area.

Step 206: Determine the ground height of the ground area according to the inclination.

Step 207: Determine obstacle information at the ground height.

Step 208: Use the ground height and the obstacle information as the ground information.

Steps 207 and 208 have been described in the first embodiment and are not repeated here.
Specifically, in this embodiment the three-dimensional point cloud in the world coordinate system is segmented in the height direction and in the distance direction. In the world coordinate system, Y_w is defined as the coordinate set in the height direction, Z_w as the coordinate set in the distance direction, and X_w as the coordinate set in the width direction. Step 202 therefore segments along the direction indicated by the Y_w axis, and step 203 segments along the direction indicated by the Z_w axis.
In a specific implementation, the first ground area is obtained as follows: a first segmentation threshold is calculated from a region of interest (ROI) in the height direction selected by the user in the three-dimensional point cloud in the world coordinate system; a second segmentation threshold is calculated from the ground height of the depth map of the frame preceding the current depth map; and automatic threshold segmentation in the height direction is performed on the three-dimensional point cloud in the world coordinate system according to the first and second segmentation thresholds. The segmentation can be expressed by Equation 6:
Y_mask = a * ThdY_roi + b * ThdY_pre    (6)
where Y_mask represents the first ground area, ThdY_roi is the first segmentation threshold, ThdY_pre is the second segmentation threshold, and a and b are weighting coefficients whose values are set by the user according to actual needs.
It should be noted that, when obtaining the first and second segmentation thresholds, the automatic threshold segmentation algorithm used may be the mean method, the Gaussian method, or the Otsu method, among others; since automatic threshold segmentation algorithms are well established, they are not described further in this embodiment.
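A minimal sketch of the height-direction segmentation of Equation 6 might look as follows; it assumes the points are stored as an N×3 array with Y_w in the second column, that the "mean method" is used for the ROI threshold, and that points within a tolerance band of the combined threshold are kept as the mask. The function names and the tolerance value are assumptions, not taken from the patent.

```python
import numpy as np

def height_ground_mask(points_w, roi_mask, prev_ground_y, a=0.5, b=0.5, tol=0.15):
    """Equation 6 style height-direction segmentation (illustrative sketch).

    points_w      : (N, 3) array of world-coordinate points [X_w, Y_w, Z_w]
    roi_mask      : (N,) boolean mask of the user-selected ROI
    prev_ground_y : ground height estimated from the previous frame's depth map
    a, b          : user-chosen weighting coefficients
    tol           : assumed tolerance band (metres) around the combined threshold
    """
    y = points_w[:, 1]                       # Y_w is the height coordinate set
    thd_y_roi = float(np.mean(y[roi_mask]))  # first threshold: "mean method" over the ROI
    thd_y_pre = float(prev_ground_y)         # second threshold: previous-frame ground height
    y_thd = a * thd_y_roi + b * thd_y_pre    # Equation 6
    # Keep points whose height lies near the combined threshold as the first
    # (suspected) ground area.
    return np.abs(y - y_thd) <= tol
```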
Specifically, the second ground area is obtained as follows: the minimum coordinate value in the distance direction selected by the user in the three-dimensional point cloud in the world coordinate system is taken as the third segmentation threshold, denoted Z_min; the maximum coordinate value in the distance direction selected by the user is taken as the fourth segmentation threshold, denoted Z_max; fixed threshold segmentation in the distance direction is then performed on the three-dimensional point cloud in the world coordinate system according to the third and fourth segmentation thresholds, giving the second ground area, denoted Z_mask. That is, the region obtained by keeping the points whose Z_w values lie between Z_min and Z_max is the second ground area.
Specifically, once the first ground area and the second ground area have been obtained, the initial ground area can be determined by combining them; it can be computed through Equation 7:
Gnd_0 = Y_mask ∩ Z_mask    (7)
where Gnd_0 is the initial ground area, Y_mask is the first ground area, and Z_mask is the second ground area. The physical meaning of the formula is that the first ground area identifies the suspected ground region in the height direction, while the second ground area further restricts that region in the distance direction, which ensures the accuracy of the initial ground area finally obtained.
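Continuing the same kind of sketch, the fixed-threshold distance mask and the intersection of Equation 7 could be written as below; Z_min and Z_max are the user-selected limits, and every name remains illustrative.

```python
import numpy as np

def distance_ground_mask(points_w, z_min, z_max):
    """Fixed threshold segmentation along the distance axis Z_w."""
    z = points_w[:, 2]
    return (z >= z_min) & (z <= z_max)

def initial_ground_mask(y_mask, z_mask):
    """Equation 7: Gnd_0 = Y_mask ∩ Z_mask, i.e. the element-wise AND of both masks."""
    return np.asarray(y_mask) & np.asarray(z_mask)
```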
Specifically, to calculate the inclination of the initial ground area, the plane containing the initial ground area must first be determined, that is, a plane is fitted to the initial ground area; the inclination of the fitted plane relative to the coordinate axis is the inclination of the initial ground area.
It should be noted that, during plane fitting, the points of the initial ground area are taken as the known quantities, and the least squares method or a random sample consensus (RANSAC) algorithm is applied to fit a plane to the initial ground area, yielding the general equation of the plane containing it. Of course, other fitting methods may also be used for the plane fitting of the initial ground area; the embodiments of the present application do not limit the specific plane fitting method.
In a specific implementation, fitting the initial ground area yields the general equation of its plane, AX + BY + CZ = D, from which the normal vector of the plane, n = (A, B, C), can be determined. The inclination of the initial ground area is then obtained from this normal vector: the angle between the normal vector of the fitted plane and the vertically upward unit vector v is taken as the horizontal inclination θ of the initial ground, calculated by Equation 8:
θ = arccos( (n · v) / (|n| * |v|) )    (8)
where θ is the inclination of the initial ground area, n is the normal vector of the fitted plane, v is the vertically upward unit vector, |n| is the magnitude of n, and |v| is the magnitude of v.
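A hedged sketch of the plane fit and of Equation 8 follows. It assumes the least-squares (SVD-based) variant of the fit and that the vertically upward unit vector is (0, 1, 0) in the world frame, since Y_w is the height axis; none of the function names come from the patent.

```python
import numpy as np

def fit_ground_plane(points):
    """Least-squares plane fit AX + BY + CZ = D to the initial ground points.

    points : (N, 3) array of [X_w, Y_w, Z_w] belonging to Gnd_0.
    Returns (normal, d) with a unit-length normal (A, B, C).
    """
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value of the centred
    # points is the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    d = float(normal @ centroid)   # plane written as normal . p = d
    return normal, d

def ground_inclination(normal, up=(0.0, 1.0, 0.0)):
    """Equation 8: angle between the fitted plane's normal and the vertical unit vector."""
    up = np.asarray(up)
    cos_theta = abs(normal @ up) / (np.linalg.norm(normal) * np.linalg.norm(up))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
```

The abs(...) keeps the computed angle within [0°, 90°] regardless of the sign of the fitted normal, a small practical deviation from a literal reading of Equation 8.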
The ground height of the ground area can then be determined from the inclination of the initial ground area; this may be the ground height at a single point of the ground area or a ground height computed in real time.
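One plausible reading, which is an assumption on my part rather than something spelled out in the patent, is to evaluate the fitted plane's height directly below a reference location, for example under the camera origin:

```python
def ground_height_at(normal, d, x_w=0.0, z_w=0.0):
    """Height Y_w of the fitted plane A*x + B*y + C*z = d at a given (X_w, Z_w) location.

    Assumes the B component of the normal is not close to zero,
    i.e. the fitted ground is not vertical.
    """
    a, b, c = normal
    return (d - a * x_w - c * z_w) / b
```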
It is worth mentioning that the path detection method in this embodiment performs detection on image data of consecutive frames. The specific implementation flow of path detection on consecutive frames is shown in FIG. 5 and includes the following steps (a sketch of this loop is given after the step list):
Step 301: Initialize the system.
Step 302: Build a three-dimensional point cloud of the road from the acquired image information.
Step 303: Detect the ground information of the road in the three-dimensional point cloud.
Step 304: Determine the early warning area from the ground information of the road.
Step 305: Detect the traffic condition in the early warning area and judge whether it is passable. If so, execute step 306; otherwise, execute step 307.
Step 306: Determine the passing route planned in the early warning area and determine the detection result of the road from the passing route.
Step 307: Determine that the detection result of the road is impassable.
Step 308: Issue early warning information according to the path detection result.
Step 309: Judge whether image information of a next frame exists. If so, return to step 302; otherwise, end the path detection.
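As promised above, a minimal sketch of the per-frame loop (steps 302 to 309) is given here. Every callable is a placeholder supplied by the caller, and step 301, the system initialization, is assumed to happen before the loop; none of these names are defined by the patent.

```python
def run_path_detection(frames, build_cloud, detect_ground, make_zone,
                       assess_traffic, plan_route, warn):
    """Illustrative loop over consecutive frames; all callables are caller-supplied stubs."""
    for depth_map, camera_pose in frames:                   # step 309: continue while frames remain
        cloud = build_cloud(depth_map, camera_pose)         # step 302
        ground = detect_ground(cloud)                       # step 303
        zone = make_zone(ground)                            # step 304
        passable, condition = assess_traffic(zone, ground)  # step 305
        if passable:
            result = plan_route(zone, condition)            # step 306
        else:
            result = "impassable"                           # step 307
        warn(result)                                        # step 308
```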
The division of the above methods into steps is only for clarity of description. In implementation, steps may be merged into a single step, or a step may be split into multiple steps; as long as the same logical relationship is preserved, such variants fall within the protection scope of this patent. Adding insignificant modifications to the algorithm or flow, or introducing insignificant designs, without changing the core design of the algorithm and flow, also falls within the protection scope of this patent.
The third embodiment of the present application relates to a path detection device. Its structure is shown in FIG. 6 and includes: an establishing module 601, a first detection module 602, a determination module 603, and a second detection module 604.
The establishing module 601 is configured to build a three-dimensional point cloud of a road from the acquired image information. The first detection module 602 is configured to detect the ground information of the road in the three-dimensional point cloud. The determination module 603 is configured to determine an early warning area from the ground information of the road. The second detection module 604 is configured to detect the traffic condition in the early warning area and to determine the path detection result of the road from the traffic condition.
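A bare-bones skeleton of how the four modules of FIG. 6 might map onto code is sketched below; the class and method names are purely illustrative assumptions.

```python
class PathDetector:
    """Illustrative grouping of the modules 601-604 described above."""

    def build_point_cloud(self, image_info):        # establishing module 601
        raise NotImplementedError

    def detect_ground_info(self, cloud):            # first detection module 602
        raise NotImplementedError

    def determine_warning_zone(self, ground_info):  # determination module 603
        raise NotImplementedError

    def detect_traffic_condition(self, zone):       # second detection module 604
        raise NotImplementedError
```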
It is easy to see that this embodiment is a device embodiment corresponding to the first or second embodiment, and it can be implemented in cooperation with the first or second embodiment. The related technical details mentioned in the first or second embodiment remain valid in this embodiment and, to reduce repetition, are not repeated here.
The fourth embodiment of the present application relates to an electronic device. Its structure is shown in FIG. 7 and includes: at least one processor 701; and a memory 702 communicatively connected to the at least one processor 701. The memory 702 stores instructions executable by the at least one processor 701, and the instructions are executed by the at least one processor 701 so that the at least one processor 701 can execute the path detection method of the first or second embodiment.
The memory and the processor are connected by a bus. The bus may include any number of interconnected buses and bridges, and it links the various circuits of the one or more processors and the memory together. The bus may also link various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore not described further here.
The processor is responsible for managing the bus and for general processing, and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory may be used to store data used by the processor when performing operations.
The fifth embodiment of the present application relates to a computer-readable storage medium. The computer-readable storage medium stores computer instructions that enable a computer to execute the path detection method involved in the first or second method embodiment of the present application.
It should be noted that, as those skilled in the art can understand, the methods in the above embodiments are implemented by a program instructing the related hardware. The program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Those of ordinary skill in the art can understand that the above embodiments are specific embodiments for implementing the present application, and that in practical applications various changes in form and detail can be made to them without departing from the spirit and scope of the present application.

Claims (14)

  1. A path detection method, comprising:
    building a three-dimensional point cloud of a road according to acquired image information;
    detecting ground information of the road in the three-dimensional point cloud;
    determining an early warning area according to the ground information of the road;
    detecting a traffic condition in the early warning area, and determining a path detection result of the road according to the traffic condition.
  2. The path detection method according to claim 1, wherein the detecting ground information of the road in the three-dimensional point cloud specifically comprises:
    detecting a ground height in the three-dimensional point cloud;
    determining obstacle information at the ground height;
    using the ground height and the obstacle information as the ground information.
  3. The path detection method according to claim 2, wherein the determining an early warning area according to the ground information of the road specifically comprises:
    constructing spatial coordinates of the early warning area;
    determining a height position of the early warning area in the spatial coordinates according to the ground height;
    determining a width and a distance of the early warning area in the spatial coordinates according to the obstacle information.
  4. The path detection method according to claim 2 or 3, wherein, after the determining an early warning area according to the ground information of the road and before the detecting a traffic condition in the early warning area, the path detection method further comprises:
    adjusting a position of the early warning area according to the ground height.
  5. The path detection method according to any one of claims 1-4, wherein the determining a path detection result of the road according to the traffic condition specifically comprises:
    judging whether the traffic condition indicates that the road is passable;
    if so, determining a passing route planned in the early warning area, and determining a detection result of the road according to the passing route;
    otherwise, determining that the detection result of the road is impassable.
  6. The path detection method according to claim 2 or 3, wherein the detecting a ground height in the three-dimensional point cloud specifically comprises:
    performing automatic threshold segmentation of the three-dimensional point cloud in a height direction to obtain a first ground area;
    performing fixed threshold segmentation of the three-dimensional point cloud in a distance direction to obtain a second ground area;
    determining an initial ground area according to the first ground area and the second ground area;
    calculating an inclination of the initial ground area;
    determining the ground height of the ground area according to the inclination.
  7. The path detection method according to any one of claims 1-6, wherein the image information comprises a depth map and an attitude angle of a camera.
  8. The path detection method according to claim 7, wherein the building a three-dimensional point cloud of a road according to acquired image information specifically comprises:
    calculating a scale normalization factor according to the depth map and a preset normalization scale;
    calculating a scale-normalized depth map according to the depth map and the scale normalization factor;
    constructing a three-dimensional point cloud in a camera coordinate system according to the scale-normalized depth map;
    constructing a three-dimensional point cloud in a world coordinate system according to the three-dimensional point cloud in the camera coordinate system and the attitude angle of the camera.
  9. The path detection method according to claim 2 or 3, wherein the determining obstacle information at the ground height specifically comprises:
    determining a ground position of the road according to the ground height;
    performing pothole detection on the ground position of the road to obtain a pothole detection result;
    generating the obstacle information at the ground height according to the ground height and the pothole detection result.
  10. The path detection method according to any one of claims 1-9, wherein the path detection result comprises at least one of: a position of an obstacle on the road, a type of the obstacle on the road, and a decision suggestion.
  11. The path detection method according to any one of claims 1-9, wherein, after the determining a path detection result of the road according to the traffic condition, the path detection method further comprises:
    issuing an early warning according to the path detection result of the road.
  12. A path detection device, comprising an establishing module, a first detection module, a determination module, and a second detection module, wherein:
    the establishing module is configured to build a three-dimensional point cloud of a road according to acquired image information;
    the first detection module is configured to detect ground information of the road in the three-dimensional point cloud;
    the determination module is configured to determine an early warning area according to the ground information of the road;
    the second detection module is configured to detect a traffic condition in the early warning area, and to determine a path detection result of the road according to the traffic condition.
  13. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor is able to execute the path detection method according to any one of claims 1-11.
  14. A computer-readable storage medium storing a computer program, wherein, when the computer program is executed by a processor, the path detection method according to any one of claims 1-11 is implemented.
PCT/CN2018/094905 2018-07-06 2018-07-06 Path detection method, related device, and computer readable storage medium WO2020006764A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/094905 WO2020006764A1 (en) 2018-07-06 2018-07-06 Path detection method, related device, and computer readable storage medium
CN201880001082.8A CN109074490B (en) 2018-07-06 2018-07-06 Path detection method, related device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/094905 WO2020006764A1 (en) 2018-07-06 2018-07-06 Path detection method, related device, and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2020006764A1 true WO2020006764A1 (en) 2020-01-09

Family

ID=64789261

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/094905 WO2020006764A1 (en) 2018-07-06 2018-07-06 Path detection method, related device, and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN109074490B (en)
WO (1) WO2020006764A1 (en)

Cited By (5)

Publication number Priority date Publication date Assignee Title
CN113376614A (en) * 2021-06-10 2021-09-10 浙江大学 Laser radar point cloud-based field seedling zone leading line detection method
CN114029953A (en) * 2021-11-18 2022-02-11 上海擎朗智能科技有限公司 Method for determining ground plane based on depth sensor, robot and robot system
CN114333199A (en) * 2020-09-30 2022-04-12 中国电子科技集团公司第五十四研究所 Alarm method, equipment, system and chip
CN114491739A (en) * 2021-12-30 2022-05-13 深圳市优必选科技股份有限公司 Construction method and device of road traffic system, terminal equipment and storage medium
CN118172423A (en) * 2024-05-14 2024-06-11 整数智能信息技术(杭州)有限责任公司 Sequential point cloud data pavement element labeling method and device and electronic equipment

Families Citing this family (8)

Publication number Priority date Publication date Assignee Title
CN110222557B (en) * 2019-04-22 2021-09-21 北京旷视科技有限公司 Real-time road condition detection method, device and system and storage medium
CN110399807B (en) * 2019-07-04 2021-07-16 达闼机器人有限公司 Method and device for detecting ground obstacle, readable storage medium and electronic equipment
CN110738183B (en) * 2019-10-21 2022-12-06 阿波罗智能技术(北京)有限公司 Road side camera obstacle detection method and device
CN111123278B (en) * 2019-12-30 2022-07-12 科沃斯机器人股份有限公司 Partitioning method, partitioning equipment and storage medium
CN111208533A (en) * 2020-01-09 2020-05-29 上海工程技术大学 Real-time ground detection method based on laser radar
WO2021146971A1 (en) * 2020-01-21 2021-07-29 深圳市大疆创新科技有限公司 Flight control method and apparatus based on determination of passable airspace, and device
CN111609851B (en) * 2020-05-28 2021-09-24 北京理工大学 Mobile blind guiding robot system and blind guiding method
CN115511938A (en) * 2022-11-02 2022-12-23 清智汽车科技(苏州)有限公司 Height determining method and device based on monocular camera

Citations (5)

Publication number Priority date Publication date Assignee Title
US8886387B1 (en) * 2014-01-07 2014-11-11 Google Inc. Estimating multi-vehicle motion characteristics by finding stable reference points
CN106162144A (en) * 2016-07-21 2016-11-23 触景无限科技(北京)有限公司 A kind of visual pattern processing equipment, system and intelligent machine for overnight sight
CN106197452A (en) * 2016-07-21 2016-12-07 触景无限科技(北京)有限公司 A kind of visual pattern processing equipment and system
CN107169986A (en) * 2017-05-23 2017-09-15 北京理工大学 A kind of obstacle detection method and system
CN108007436A (en) * 2016-10-19 2018-05-08 德州仪器公司 Collision time estimation in computer vision system

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US8699754B2 (en) * 2008-04-24 2014-04-15 GM Global Technology Operations LLC Clear path detection through road modeling
CN101975951B (en) * 2010-06-09 2013-03-20 北京理工大学 Field environment barrier detection method fusing distance and image information
CN103198302B (en) * 2013-04-10 2015-12-02 浙江大学 A kind of Approach for road detection based on bimodal data fusion
CN103903479A (en) * 2014-04-23 2014-07-02 奇瑞汽车股份有限公司 Vehicle safety driving pre-warning method and system and vehicle terminal device
CN106530380B (en) * 2016-09-20 2019-02-26 长安大学 A kind of ground point cloud dividing method based on three-dimensional laser radar
CN107179768B (en) * 2017-05-15 2020-01-17 上海木木机器人技术有限公司 Obstacle identification method and device
JP6955783B2 (en) * 2018-01-10 2021-10-27 達闥機器人有限公司Cloudminds (Shanghai) Robotics Co., Ltd. Information processing methods, equipment, cloud processing devices and computer program products

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
US8886387B1 (en) * 2014-01-07 2014-11-11 Google Inc. Estimating multi-vehicle motion characteristics by finding stable reference points
CN106162144A (en) * 2016-07-21 2016-11-23 触景无限科技(北京)有限公司 A kind of visual pattern processing equipment, system and intelligent machine for overnight sight
CN106197452A (en) * 2016-07-21 2016-12-07 触景无限科技(北京)有限公司 A kind of visual pattern processing equipment and system
CN108007436A (en) * 2016-10-19 2018-05-08 德州仪器公司 Collision time estimation in computer vision system
CN107169986A (en) * 2017-05-23 2017-09-15 北京理工大学 A kind of obstacle detection method and system

Cited By (8)

Publication number Priority date Publication date Assignee Title
CN114333199A (en) * 2020-09-30 2022-04-12 中国电子科技集团公司第五十四研究所 Alarm method, equipment, system and chip
CN114333199B (en) * 2020-09-30 2024-03-26 中国电子科技集团公司第五十四研究所 Alarm method, equipment, system and chip
CN113376614A (en) * 2021-06-10 2021-09-10 浙江大学 Laser radar point cloud-based field seedling zone leading line detection method
CN113376614B (en) * 2021-06-10 2022-07-15 浙江大学 Laser radar point cloud-based field seedling zone leading line detection method
CN114029953A (en) * 2021-11-18 2022-02-11 上海擎朗智能科技有限公司 Method for determining ground plane based on depth sensor, robot and robot system
CN114029953B (en) * 2021-11-18 2022-12-20 上海擎朗智能科技有限公司 Method for determining ground plane based on depth sensor, robot and robot system
CN114491739A (en) * 2021-12-30 2022-05-13 深圳市优必选科技股份有限公司 Construction method and device of road traffic system, terminal equipment and storage medium
CN118172423A (en) * 2024-05-14 2024-06-11 整数智能信息技术(杭州)有限责任公司 Sequential point cloud data pavement element labeling method and device and electronic equipment

Also Published As

Publication number Publication date
CN109074490A (en) 2018-12-21
CN109074490B (en) 2023-01-31

Similar Documents

Publication Publication Date Title
WO2020006764A1 (en) Path detection method, related device, and computer readable storage medium
WO2020007189A1 (en) Obstacle avoidance notification method and apparatus, electronic device, and readable storage medium
CN108885791B (en) Ground detection method, related device and computer readable storage medium
EP4141737A1 (en) Target detection method and device
US11338807B2 (en) Dynamic distance estimation output generation based on monocular video
CN106156723B (en) A kind of crossing fine positioning method of view-based access control model
WO2021098079A1 (en) Method for using binocular stereo camera to construct grid map
JP3729095B2 (en) Traveling path detection device
WO2018120040A1 (en) Obstacle detection method and device
WO2020154990A1 (en) Target object motion state detection method and device, and storage medium
KR20200046437A (en) Localization method based on images and map data and apparatus thereof
CN112967345B (en) External parameter calibration method, device and system of fish-eye camera
WO2021253245A1 (en) Method and device for identifying vehicle lane changing tendency
CN113240734B (en) Vehicle cross-position judging method, device, equipment and medium based on aerial view
CN116993817B (en) Pose determining method and device of target vehicle, computer equipment and storage medium
KR102373492B1 (en) Method for correcting misalignment of camera by selectively using information generated by itself and information generated by other entities and device using the same
CN111046719A (en) Apparatus and method for converting image
WO2023092870A1 (en) Method and system for detecting retaining wall suitable for automatic driving vehicle
CN114943941A (en) Target detection method and device
CN103679121A (en) Method and system for detecting roadside using visual difference image
CN112509054A (en) Dynamic calibration method for external parameters of camera
CN113111707A (en) Preceding vehicle detection and distance measurement method based on convolutional neural network
US20220219679A1 (en) Spatial parking place detection method and device, storage medium, and program product
CN109895697B (en) Driving auxiliary prompting system and method
CN115328153A (en) Sensor data processing method, system and readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18925365

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 15-04-2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18925365

Country of ref document: EP

Kind code of ref document: A1