WO2020006764A1 - Path detection method, related device and computer-readable storage medium - Google Patents
Path detection method, related device and computer-readable storage medium
- Publication number
- WO2020006764A1 (PCT/CN2018/094905)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- road
- ground
- point cloud
- early warning
- dimensional point
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
Definitions
- The present application relates to the field of computer vision technology, and in particular to a path detection method, a related device, and a computer-readable storage medium.
- Path detection is an extremely important technology in fields such as guidance for the blind, robotics, and autonomous driving. Existing path detection uses vision to detect the road on which a vehicle or robot is traveling, in order to improve its safety.
- Traditional path detection usually sets a two-dimensional detection area in the image and determines whether the area is passable by judging whether it contains an obstacle.
- If the image detection area is rectangular, it corresponds in the real world to a fan-shaped area in front of the camera, so objects on both sides of the passable width will be regarded as obstacles.
- If the image detection area is trapezoidal, the influence of the fan-shaped area can be corrected to some extent in the real world, but the size and position of the trapezoidal detection area are extremely inconvenient to set and must change with the lens focal length, camera attitude, and so on.
- In addition, traditional path detection usually provides only a rough warning of the obstacle ahead and cannot provide more detailed road condition information, which makes subsequent decision-making inconvenient and degrades the user experience.
- A technical problem to be solved by some embodiments of the present application is therefore to provide a path detection method, a related device, and a computer-readable storage medium that address the above technical problems.
- An embodiment of the present application provides a path detection method, including: establishing a three-dimensional point cloud of a road according to acquired image information; detecting ground information of the road in the three-dimensional point cloud; determining an early warning area according to the ground information of the road; and detecting traffic conditions in the early warning area and determining a path detection result of the road according to the traffic conditions.
- An embodiment of the present application further provides a path detection device, including: an establishment module, a first detection module, a determination module, and a second detection module.
- The establishment module is used to establish a three-dimensional point cloud of a road according to acquired image information; the first detection module is used to detect ground information of the road in the three-dimensional point cloud; the determination module is used to determine an early warning area according to the ground information of the road;
- and the second detection module is used to detect the traffic conditions in the early warning area, and determine a path detection result of the road according to the traffic conditions.
- An embodiment of the present application further provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor;
- wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the foregoing path detection method.
- An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the foregoing path detection method.
- Compared with two-dimensional detection, the three-dimensional point cloud can provide more road condition information.
- FIG. 1 is a flowchart of a path detection method in a first embodiment of the present application
- FIG. 2 is a relationship diagram between a pixel coordinate system and a camera coordinate system in the first embodiment of the present application
- FIG. 3 is a relationship diagram between the camera coordinate system and the world coordinate system in the first embodiment of the present application.
- FIG. 4 is a flowchart of a path detection method in a second embodiment of the present application.
- FIG. 5 is a flowchart of path detection on image data of consecutive frames in the second embodiment of the present application.
- FIG. 6 is a structural diagram of a path detection device in a third embodiment of the present application.
- FIG. 7 is a structural diagram of an electronic device in a fourth embodiment of the present application.
- the first embodiment of the present application relates to a path detection method. As shown in FIG. 1, the method includes the following steps:
- Step 101 Establish a three-dimensional point cloud of the road according to the acquired image information.
- The three-dimensional point cloud is a huge collection of points on the surface of a target object.
- Establishing the three-dimensional point cloud makes it possible to determine road information in space.
- The present application can use multiple methods to establish the three-dimensional point cloud; this embodiment does not limit the specific implementation used to establish it.
- For example, a three-dimensional point cloud can be established by using a depth map.
- In this example, the specific process of establishing the three-dimensional point cloud includes: obtaining a depth map and the attitude angle of the camera, where the attitude angle is the attitude angle of the camera when the depth map was taken; calculating a scale normalization factor according to the depth map and a preset normalization scale; calculating a scale-normalized depth map according to the depth map and the scale normalization factor; constructing a three-dimensional point cloud in the camera coordinate system based on the scale-normalized depth map; and constructing a three-dimensional point cloud in the world coordinate system according to the three-dimensional point cloud in the camera coordinate system and the attitude angle of the camera.
- There are many methods for obtaining depth maps, including but not limited to: lidar depth imaging, computer stereo vision imaging, the coordinate measuring machine method, the moiré fringe method, and the structured light method; how the depth map is obtained is not limited here.
- The scale normalization factor is calculated using Formula 1, in which:
- S represents the scale normalization factor
- W represents the width of the depth map
- H represents the height of the depth map
- Norm represents a preset normalization scale and is a known quantity. In specific applications, if depth maps of consecutive frames are processed to establish the three-dimensional point cloud, the normalization scale used when processing each frame's depth map remains unchanged.
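- Formula 1 itself is not reproduced in the available text. A plausible reconstruction, assuming the factor maps the larger image dimension onto the preset scale, is: S = Norm / max(W, H) (1).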
- The scale-normalized depth map is calculated using Equation 2, in which:
- W_S represents the width of the depth map after normalization;
- H_S represents the height of the depth map after normalization;
- the scale-normalized depth map can be determined according to W_S and H_S.
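- Equation 2 is likewise not reproduced; under the same assumption the normalized dimensions would follow as: W_S = S · W and H_S = S · H (2).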
- A three-dimensional point cloud in the camera coordinate system is constructed from the scale-normalized depth map according to Equation 3; a point of this cloud is represented as P(X_c, Y_c, Z_c).
- Each pixel in the depth map contains the distance between the camera and the photographed object.
- The pixel coordinates in the depth map are converted into coordinates in the camera coordinate system using Equation 3, forming the three-dimensional point cloud in the camera coordinate system. In Equation 3:
- u and v are the coordinate values of any point P in the scale-normalized depth map;
- X_c, Y_c, and Z_c are the coordinate values of point P in the camera coordinate system;
- M_{3×4} is the camera's intrinsic parameter matrix;
- Z_c is the depth value of point P in the scale-normalized depth map, that is, the distance from the camera to the photographed object, which is a known quantity.
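- Equation 3 is not reproduced in the available text, but the conversion it describes is the standard pinhole camera relationship, which in homogeneous form reads: Z_c · [u, v, 1]^T = M_{3×4} · [X_c, Y_c, Z_c, 1]^T (3). Equivalently, writing f_x, f_y for the focal lengths and (c_x, c_y) for the principal point taken from the intrinsic matrix, the back-projection is X_c = (u - c_x) · Z_c / f_x and Y_c = (v - c_y) · Z_c / f_y.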
- The three-dimensional point cloud P(X_c, Y_c, Z_c) in the camera coordinate system is converted into a three-dimensional point cloud P(X_w, Y_w, Z_w) in the world coordinate system; the conversion relationship is expressed by Equation 4, in which:
- X_w, Y_w, and Z_w are the coordinate values of any point P of the three-dimensional point cloud in the world coordinate system;
- X_c, Y_c, and Z_c are the coordinate values of point P in the camera coordinate system;
- α is the angle between the camera and the X_w axis of the world coordinate system;
- β is the angle between the camera and the Y_w axis of the world coordinate system;
- γ is the angle between the camera and the Z_w axis of the world coordinate system.
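- Equation 4 is not reproduced in the available text; by the definitions above it is a rotation of the camera-frame point cloud into the world frame, of the general form [X_w, Y_w, Z_w]^T = R(α, β, γ) · [X_c, Y_c, Z_c]^T (4), where R(α, β, γ) is the rotation matrix built from the three attitude angles. The composition order, for example R_x(α) · R_y(β) · R_z(γ), is an assumption.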
- A rectangular coordinate system o-uv in units of pixels, established with the upper left corner of the depth map as the origin, is used as the pixel coordinate system.
- The abscissa u represents the pixel column in which a pixel is located, and the ordinate v represents the pixel row.
- The intersection of the camera optical axis and the depth map plane is defined as the origin o_1 of the image coordinate system o_1-xy, with the x axis parallel to the u axis and the y axis parallel to the v axis.
- The camera coordinate system O_c-X_cY_cZ_c uses the camera optical center O_c as the origin; the X_c and Y_c axes are respectively parallel to the x and y axes of the image coordinate system, and the Z_c axis is the camera's optical axis, which is perpendicular to the image plane and intersects it at point o_1.
- The origin O_w of the world coordinate system O_w-X_wY_wZ_w coincides with the origin O_c of the camera coordinate system, both being the camera optical center; the horizontal direction to the right is the positive direction of the X_w axis, vertically downward is the positive direction of the Y_w axis, and the direction perpendicular to the X_wO_wY_w plane and pointing straight ahead is the positive direction of the Z_w axis, thereby establishing the world coordinate system.
- The construction of a three-dimensional point cloud from image information is not limited to depth maps; for example, laser point cloud data can also be obtained directly by lidar, and the three-dimensional point cloud constructed from that data.
- The construction described above is an exemplary description, and this embodiment does not limit the specific method adopted for constructing the three-dimensional point cloud.
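- For illustration only, a minimal Python sketch of the depth-map-based construction described above is given below. The reconstructed forms of Formulas 1 and 2, the rotation order, and all function and parameter names (depth_to_world_cloud, norm_scale, and so on) are assumptions rather than the patent's own definitions:

```python
import numpy as np

def depth_to_world_cloud(depth, K, alpha, beta, gamma, norm_scale=400):
    """Sketch: depth map -> world-frame point cloud (all conventions assumed)."""
    h, w = depth.shape
    # Formula 1 (assumed form): scale factor from the preset normalization scale.
    s = norm_scale / max(w, h)
    # Equation 2 (assumed form): nearest-neighbor resample to the normalized size.
    hs, ws = int(h * s), int(w * s)
    rows = np.minimum((np.arange(hs) / s).astype(int), h - 1)
    cols = np.minimum((np.arange(ws) / s).astype(int), w - 1)
    depth_s = depth[np.ix_(rows, cols)]
    # Equation 3: pinhole back-projection; the intrinsics scale with the resize.
    fx, fy, cx, cy = K[0, 0] * s, K[1, 1] * s, K[0, 2] * s, K[1, 2] * s
    u, v = np.meshgrid(np.arange(ws), np.arange(hs))
    zc = depth_s
    xc = (u - cx) * zc / fx
    yc = (v - cy) * zc / fy
    pts_cam = np.stack([xc, yc, zc], axis=-1).reshape(-1, 3)
    # Equation 4: rotate into the world frame using the camera attitude angles.
    # The composition order Rx @ Ry @ Rz is an assumption.
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return pts_cam @ (Rx @ Ry @ Rz).T
```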
- Step 102 Detect the ground information of the road in the three-dimensional point cloud.
- The specific implementation of this step is: detecting the ground height in the three-dimensional point cloud; determining obstacle information at the ground height; and using the ground height and the obstacle information as the ground information.
- Determining the ground height in the three-dimensional point cloud and detecting obstacle information at that height make it possible to determine the specific condition of the road and help ensure the accuracy of the detection results.
- In addition, pothole detection can be performed on the road to determine its pothole status, which is then taken as part of the ground information.
- Other road-related detections can also be performed, such as detecting the type of road, including tactile paving, sidewalks, pedestrian zebra crossings, and so on. For example, if this method is applied to a guide cane for the blind, it is necessary to determine the specific category of road on which the cane user is currently walking. In practice, more ground information can be detected as needed, which is not limited here.
- Step 103 Determine an early warning area according to the ground information of the road.
- Specifically, the spatial coordinates of the early warning area are constructed; the height position of the early warning area in those coordinates is determined according to the ground height; and the width and distance of the early warning area are determined according to the obstacle information, thereby determining the early warning area.
- The determination of the early warning area is based on the three-dimensional point cloud in the world coordinate system.
- The Y_wO_wZ_w plane of the world coordinate system serves as a plane of symmetry, and the early warning area is constructed in the positive direction of the Z_w axis.
- The resulting three-dimensional spatial region is the early warning area, represented as vBox(x, y, z), where x, y, and z respectively represent the width, height, and distance of the early warning area.
- The distance is determined by the user's speed, and the width and height of the early warning area are determined according to the user's build; the early warning area must not be smaller than the minimum space through which the user can pass.
- For example, for a user who is 1.5 m tall, weighs 90 kg, and moves slowly, the early warning area can be set to vBox(100, 170, 150) in cm (centimeters); for another user who is 1.9 m tall, weighs 55 kg, and moves quickly, the early warning area can be set to vBox(60, 210, 250), also in cm.
- Path detection can be performed on the image information of consecutive frames.
- The coordinate-system conversion must be performed for each frame's depth map, but the coordinate values of the early warning area can remain unchanged; the position of the early warning area is determined according to the three-dimensional point cloud corresponding to each frame of image.
- When the road is not flat, the ground information includes the ground height, and the position of the early warning area needs to be adjusted according to it.
- Roads can be divided into uphill sections, downhill sections, and flat sections with different ground heights.
- The position of the early warning area is adjusted based on the ground height in the ground information, and the traffic conditions of the road are then detected based on the obstacle information and the size of the early warning area.
- Specifically, the real-time ground height is determined by an adaptive ground detection method, or from the point cloud data indicating road information in the three-dimensional point cloud, and the position of the early warning area is dynamically adjusted as the ground height changes. After adjustment, the early warning area is guaranteed to sit directly above the ground; in this way, ground interference is effectively avoided while low obstacles are not missed.
- The adjusted early warning area can be determined by Formula 5, expressed as follows:
- vBox_1 = vBox(x, H + y + δ, z) (5)
- H represents the real-time ground height;
- δ represents the dynamic adjustment margin;
- vBox_1 represents the adjusted early warning area;
- x, y, and z represent the width, height, and distance of the early warning area, respectively.
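- As a sketch of this dynamic adjustment, assuming the early warning area is an axis-aligned box in world coordinates and using Formula 5 above (the VBox class and function names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class VBox:
    width: float     # x, in cm
    height: float    # y, in cm
    distance: float  # z, in cm

def adjust_vbox(box: VBox, ground_h: float, delta: float) -> VBox:
    """Formula 5: vBox_1 = vBox(x, H + y + delta, z).

    ground_h is the real-time ground height H and delta the dynamic
    adjustment margin, so the warning area tracks the detected ground.
    """
    return VBox(box.width, ground_h + box.height + delta, box.distance)
```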
- Step 104 Detect the traffic conditions in the early warning area, and determine the road detection result according to the traffic conditions.
- The traffic conditions in the early warning area can be detected based on the obstacle information of the road; the traffic conditions can specifically indicate information such as the position of the passable area and its width and height.
- After the traffic conditions in the early warning area are detected, it is determined whether they indicate that the road is passable; if so, a passing route is planned in the early warning area and the path detection result is determined based on that route; otherwise, the path detection result is determined to be impassable.
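- The text does not specify how the traffic condition test is carried out. One simple passability check, assuming obstacles appear as point cloud points inside the adjusted warning box (Y_w positive downward, box symmetric about the Y_wO_wZ_w plane) and reusing the VBox sketch above, could be:

```python
import numpy as np

def is_passable(points_w: np.ndarray, box: VBox, ground_h: float,
                ground_margin: float = 5.0, max_hits: int = 50) -> bool:
    """Rough passability test: count cloud points inside the warning box.

    points_w: (N, 3) array of (X_w, Y_w, Z_w) in cm; Y_w is positive
    downward, so the ground lies near Y_w == ground_h and "up" means
    smaller Y_w. ground_margin and max_hits are tuning assumptions.
    """
    x, y, z = points_w[:, 0], points_w[:, 1], points_w[:, 2]
    inside = (
        (np.abs(x) <= box.width / 2.0)        # symmetric about the Y_wO_wZ_w plane
        & (y > ground_h - box.height)         # below the top of the warning box
        & (y < ground_h - ground_margin)      # above the ground surface itself
        & (z > 0.0) & (z <= box.distance)     # in front of the camera, within range
    )
    return int(inside.sum()) < max_hits
```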
- Early warning information is sent according to the path detection result.
- The early warning information includes, but is not limited to, obstacle information, traffic conditions, and ground height.
- The early warning information may be one or a combination of sound information, image information, or light information.
- If the method is applied to an intelligent robot, the path detection result, once obtained, may be converted into machine language so that the intelligent robot can determine the path condition in the current frame.
- The path detection result may also be presented to the user in other forms, or prompted to the user after appropriate information conversion, which is not specifically limited here.
- The second embodiment of the present application relates to a path detection method.
- This embodiment is substantially the same as the first embodiment.
- The main difference is that this embodiment describes in detail how the ground height is determined in the three-dimensional point cloud.
- The specific implementation of the path detection method is shown in FIG. 4 and includes the following steps:
- Step 201 is the same as step 101 in the first embodiment, and steps 209 and 210 are the same as steps 103 and 104 in the first embodiment, respectively; the same steps are not described here again.
- Step 202 Perform automatic threshold segmentation in the height direction on the three-dimensional point cloud to obtain a first ground area.
- Step 203 Perform fixed threshold segmentation on the distance direction of the three-dimensional point cloud to obtain a second ground area.
- Step 204 Determine an initial ground area according to the first ground area and the second ground area.
- Step 205 Calculate the inclination of the initial ground area.
- Step 206 Determine the ground height of the ground area according to the inclination.
- Step 207 Determine obstacle information at the ground height.
- Step 208 Use the ground height and obstacle information as ground information.
- Steps 207 and 208 have been described in the first embodiment, and are not repeated here.
- Specifically, the three-dimensional point cloud in the world coordinate system is segmented in the height direction and in the distance direction.
- In the three-dimensional point cloud in the world coordinate system, Y_w is the coordinate in the height direction,
- Z_w is the coordinate in the distance direction,
- and X_w is the coordinate in the width direction.
- Step 202 performs segmentation in the direction indicated by the Y_w axis,
- and step 203 performs segmentation in the direction indicated by the Z_w axis.
- The specific process of obtaining the first ground area is: calculating a first segmentation threshold according to the height of the region of interest (ROI) selected by the user in the three-dimensional point cloud in the world coordinate system; calculating a second segmentation threshold based on the ground height of the depth map preceding the current depth map; and performing automatic threshold segmentation in the height direction of the three-dimensional point cloud in the world coordinate system based on the first and second segmentation thresholds.
- The specific segmentation process can be expressed by Equation 6, in which:
- Y_mask represents the first ground area;
- ThdY_roi is the first segmentation threshold;
- ThdY_pre is the second segmentation threshold;
- a and b are weighting coefficients;
- the specific values of a and b are set by the user according to actual needs.
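- Equation 6 itself is not reproduced in the available text. A plausible form, assuming the two thresholds are blended linearly and that ground points lie below the blended height (Y_w is positive downward), is: Y_mask = { P | Y_w(P) > a · ThdY_roi + b · ThdY_pre } (6).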
- Usable automatic threshold segmentation algorithms include the mean method, the Gaussian method, and the Otsu method. Since automatic threshold segmentation algorithms are relatively mature, they are not detailed further in this embodiment.
- The second ground area is obtained as follows: the minimum coordinate value in the distance direction, selected by the user in the three-dimensional point cloud in the world coordinate system, is set as the third segmentation threshold Z_min; the maximum coordinate value in the distance direction, selected in the same point cloud, is set as the fourth segmentation threshold Z_max; and, according to the third and fourth segmentation thresholds, fixed threshold segmentation is performed on the three-dimensional point cloud in the world coordinate system in the distance direction.
- This yields the second ground area, denoted Z_mask; that is, the region obtained by retaining points whose Z_w value lies between Z_min and Z_max is the second ground area.
- After the first ground area and the second ground area are obtained, the initial ground area can be determined from them.
- Specifically, the initial ground area can be determined through Formula 7, in which:
- Gnd_0 is the initial ground area;
- Y_mask is the first ground area;
- Z_mask is the second ground area.
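- Formula 7 is not reproduced in the available text; consistent with the physical meaning described next, it is presumably the intersection of the two masks: Gnd_0 = Y_mask ∩ Z_mask (7).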
- The physical meaning of the formula is that the suspected ground area in the height direction is determined through the first ground area, and its range in the distance direction is further limited through the second ground area, thereby ensuring the accuracy of the finally obtained initial ground area.
- To calculate the inclination of the initial ground area, the plane in which it lies must be determined first; that is, plane fitting is performed on the initial ground area.
- The angle between the plane determined by the fitting and the coordinate axes is the inclination of the initial ground area.
- Specifically, the points of the initial ground area are used as known quantities, and the least squares method or the random sample consensus (RANSAC) algorithm is used to fit a plane to the initial ground area, yielding the general equation of the plane in which it lies.
- Other fitting methods may also be used to perform the plane fitting; the specific method of plane fitting is not limited in the embodiments of the present application.
- From the general equation of the plane, its normal vector can be determined; the inclination angle of the initial ground area can then be determined from the normal vector. Specifically, the angle between the normal vector of the fitted plane and the vertically upward unit vector is the tilt angle θ of the initial ground relative to the horizontal, and θ is calculated by Equation 8:
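- Equation 8 is not reproduced in the available text; for a fitted-plane normal vector n and the vertically upward unit vector e, the included angle described above is presumably θ = arccos(|n · e| / (‖n‖ · ‖e‖)) (8), with the absolute value ensuring the angle is measured against the upward direction regardless of the normal's sign.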
- The ground height of the ground area can then be determined according to the inclination of the initial ground area; this can be the ground height at a point of the ground area or the real-time ground height.
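- An illustrative sketch of the fitting and tilt computation, using the least squares option mentioned above (RANSAC would be a drop-in alternative) and assuming the ground is fitted as Y_w = a·X_w + b·Z_w + c with Y_w positive downward:

```python
import numpy as np

def fit_ground_plane(points: np.ndarray):
    """Least-squares fit of Y_w = a*X_w + b*Z_w + c over the initial ground.

    points: (N, 3) array of (X_w, Y_w, Z_w) with Y_w positive downward.
    Returns the plane normal and the tilt angle theta (cf. Equation 8).
    """
    X, Y, Z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([X, Z, np.ones_like(X)])
    (a, b, c), *_ = np.linalg.lstsq(A, Y, rcond=None)
    normal = np.array([a, -1.0, b])   # normal of the plane a*X - Y + b*Z + c = 0
    up = np.array([0.0, -1.0, 0.0])   # vertically upward unit vector (Y_w points down)
    cos_t = abs(normal @ up) / np.linalg.norm(normal)
    theta = np.arccos(np.clip(cos_t, 0.0, 1.0))   # Equation 8
    return normal, theta
```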
- The path detection method in this embodiment is based on the detection of image data of consecutive frames.
- The specific implementation process of path detection on image data of consecutive frames is shown in FIG. 5 and includes the following steps:
- Step 301 Initialize the system.
- Step 302 Establish a three-dimensional point cloud of the road according to the acquired image information.
- Step 303 Detect the ground information of the road in the three-dimensional point cloud.
- Step 304 Determine an early warning area according to the ground information of the road.
- Step 305 Detect the traffic condition in the early warning area, and determine whether it is passable. If yes, go to step 306; otherwise, go to step 307.
- Step 306 Determine the passing route planned in the early warning area and determine the detection result of the road according to the passing route.
- Step 307 Determine that the detection result of the road is impassable.
- Step 308 Send a warning message according to the detection result of the path.
- Step 309 Determine whether there is image information of the next frame. If yes, go to step 302; otherwise, end the path detection.
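- For illustration, the per-frame loop of steps 301 to 309 might be sketched as follows; the detector object and all of its method names are hypothetical placeholders for the operations described above:

```python
def run_path_detection(detector, frame_source):
    """Continuous-frame loop mirroring steps 301-309 (names are placeholders)."""
    detector.initialize()                                 # step 301
    for image_info in frame_source:                       # step 309: next frame?
        cloud = detector.build_point_cloud(image_info)    # step 302
        ground = detector.detect_ground_info(cloud)       # step 303
        area = detector.determine_warning_area(ground)    # step 304
        passable = detector.detect_traffic(area, ground)  # step 305
        if passable:
            result = detector.plan_route(area)            # step 306
        else:
            result = "impassable"                         # step 307
        detector.send_warning(result)                     # step 308
```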
- The third embodiment of the present application relates to a path detection device.
- The specific structure is shown in FIG. 6 and includes: an establishment module 601, a first detection module 602, a determination module 603, and a second detection module 604.
- The establishment module 601 is configured to establish a three-dimensional point cloud of a road according to the acquired image information.
- The first detection module 602 is configured to detect ground information of the road in the three-dimensional point cloud.
- The determination module 603 is configured to determine an early warning area according to the ground information of the road.
- The second detection module 604 is configured to detect the traffic conditions in the early warning area, and determine a path detection result of the road according to the traffic conditions.
- This embodiment is a device embodiment corresponding to the first or second embodiment, and can be implemented in cooperation with the first or second embodiment.
- The related technical details mentioned in the first or second embodiment remain valid in this embodiment; to reduce repetition, they are not repeated here.
- The fourth embodiment of the present application relates to an electronic device.
- The specific structure is shown in FIG. 7 and includes: at least one processor 701; and a memory 702 communicatively connected to the at least one processor 701. The memory 702 stores instructions executable by the at least one processor 701, and the instructions are executed by the at least one processor 701, so that the at least one processor 701 can execute the path detection method in the first or second embodiment.
- The memory and the processor are connected by a bus.
- The bus may include any number of interconnected buses and bridges.
- The bus links one or more processors and various circuits of the memory together.
- The bus can also link various other circuits such as peripherals, voltage regulators, and power management circuits; these are well known in the art and are therefore not described further herein.
- The processor is responsible for managing the bus and general processing, and can also provide various functions, including timing, peripheral interfaces, voltage regulation, power management, and other control functions.
- The memory can be used to store data used by the processor when performing operations.
- A fifth embodiment of the present application relates to a computer-readable storage medium.
- The computer-readable storage medium stores computer instructions that enable a computer to execute the path detection method involved in the first or second method embodiment.
- The methods in the above embodiments can be implemented by a program instructing related hardware.
- The program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application.
- The aforementioned storage media include: USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and other media that can store program code.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
- Image Processing (AREA)
Abstract
The present application relates to the technical field of computer vision, and in particular to a path detection method, a related device, and a computer-readable storage medium. The path detection method includes the steps of: establishing a three-dimensional point cloud of a road according to acquired image information; detecting ground information of the road in the three-dimensional point cloud; determining an early warning area according to the ground information of the road; and detecting traffic conditions in the early warning area, and determining a path detection result of the road according to the traffic conditions. The described method can be applied to path detection in complex environments, improves the user experience, and is at the same time capable of providing more road condition information by means of a three-dimensional point cloud.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2018/094905 WO2020006764A1 (fr) | 2018-07-06 | 2018-07-06 | Path detection method, related device and computer-readable storage medium |
CN201880001082.8A CN109074490B (zh) | 2018-07-06 | 2018-07-06 | Path detection method, related device, and computer-readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2018/094905 WO2020006764A1 (fr) | 2018-07-06 | 2018-07-06 | Path detection method, related device and computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020006764A1 (fr) | 2020-01-09 |
Family
ID=64789261
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/094905 WO2020006764A1 (fr) | 2018-07-06 | 2018-07-06 | Procédé de détection de trajet, dispositif apparenté et support de stockage lisible par ordinateur |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109074490B (fr) |
WO (1) | WO2020006764A1 (fr) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113376614A (zh) * | 2021-06-10 | 2021-09-10 | 浙江大学 | Field seedling-row navigation line detection method based on lidar point cloud |
CN114029953A (zh) * | 2021-11-18 | 2022-02-11 | 上海擎朗智能科技有限公司 | Method for determining the ground plane based on a depth sensor, robot, and robot system |
CN114333199A (zh) * | 2020-09-30 | 2022-04-12 | 中国电子科技集团公司第五十四研究所 | Alarm method, device, system, and chip |
CN114491739A (zh) * | 2021-12-30 | 2022-05-13 | 深圳市优必选科技股份有限公司 | Road traffic system construction method and apparatus, terminal device, and storage medium |
CN118172423A (zh) * | 2024-05-14 | 2024-06-11 | 整数智能信息技术(杭州)有限责任公司 | Method and apparatus for labeling road surface elements in time-series point cloud data, and electronic device |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110222557B (zh) * | 2019-04-22 | 2021-09-21 | 北京旷视科技有限公司 | Real-time road condition detection method, apparatus, system, and storage medium |
CN110399807B (zh) * | 2019-07-04 | 2021-07-16 | 达闼机器人有限公司 | Method and apparatus for detecting ground obstacles, readable storage medium, and electronic device |
CN110738183B (zh) * | 2019-10-21 | 2022-12-06 | 阿波罗智能技术(北京)有限公司 | Roadside camera obstacle detection method and apparatus |
CN111123278B (zh) * | 2019-12-30 | 2022-07-12 | 科沃斯机器人股份有限公司 | Partitioning method, device, and storage medium |
CN111208533A (zh) * | 2020-01-09 | 2020-05-29 | 上海工程技术大学 | Real-time ground detection method based on lidar |
WO2021146971A1 (fr) * | 2020-01-21 | 2021-07-29 | 深圳市大疆创新科技有限公司 | Flight control method and apparatus based on passable airspace determination, and device |
CN111609851B (zh) * | 2020-05-28 | 2021-09-24 | 北京理工大学 | Mobile blind-guiding robot system and blind-guiding method |
CN115511938A (zh) * | 2022-11-02 | 2022-12-23 | 清智汽车科技(苏州)有限公司 | Height determination method and apparatus based on a monocular camera |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8886387B1 (en) * | 2014-01-07 | 2014-11-11 | Google Inc. | Estimating multi-vehicle motion characteristics by finding stable reference points |
CN106162144A (zh) * | 2016-07-21 | 2016-11-23 | 触景无限科技(北京)有限公司 | Visual image processing device, system, and intelligent machine for night-vision environments |
CN106197452A (zh) * | 2016-07-21 | 2016-12-07 | 触景无限科技(北京)有限公司 | Visual image processing device and system |
CN107169986A (zh) * | 2017-05-23 | 2017-09-15 | 北京理工大学 | Obstacle detection method and system |
CN108007436A (zh) * | 2016-10-19 | 2018-05-08 | 德州仪器公司 | Time-to-collision estimation in a computer vision system |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8699754B2 (en) * | 2008-04-24 | 2014-04-15 | GM Global Technology Operations LLC | Clear path detection through road modeling |
CN101975951B (zh) * | 2010-06-09 | 2013-03-20 | 北京理工大学 | Field environment obstacle detection method fusing distance and image information |
CN103198302B (zh) * | 2013-04-10 | 2015-12-02 | 浙江大学 | Road detection method based on dual-modality data fusion |
CN103903479A (zh) * | 2014-04-23 | 2014-07-02 | 奇瑞汽车股份有限公司 | Vehicle safe-driving early warning method and system, and vehicle terminal device |
CN106530380B (zh) * | 2016-09-20 | 2019-02-26 | 长安大学 | Ground point cloud segmentation method based on three-dimensional lidar |
CN107179768B (zh) * | 2017-05-15 | 2020-01-17 | 上海木木机器人技术有限公司 | Obstacle recognition method and apparatus |
JP6955783B2 (ja) * | 2018-01-10 | 2021-10-27 | 達闥機器人有限公司Cloudminds (Shanghai) Robotics Co., Ltd. | Information processing method, apparatus, cloud processing device, and computer program product |
-
2018
- 2018-07-06 WO PCT/CN2018/094905 patent/WO2020006764A1/fr active Application Filing
- 2018-07-06 CN CN201880001082.8A patent/CN109074490B/zh active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8886387B1 (en) * | 2014-01-07 | 2014-11-11 | Google Inc. | Estimating multi-vehicle motion characteristics by finding stable reference points |
CN106162144A (zh) * | 2016-07-21 | 2016-11-23 | 触景无限科技(北京)有限公司 | Visual image processing device, system, and intelligent machine for night-vision environments |
CN106197452A (zh) * | 2016-07-21 | 2016-12-07 | 触景无限科技(北京)有限公司 | Visual image processing device and system |
CN108007436A (zh) * | 2016-10-19 | 2018-05-08 | 德州仪器公司 | Time-to-collision estimation in a computer vision system |
CN107169986A (zh) * | 2017-05-23 | 2017-09-15 | 北京理工大学 | Obstacle detection method and system |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114333199A (zh) * | 2020-09-30 | 2022-04-12 | 中国电子科技集团公司第五十四研究所 | Alarm method, device, system, and chip |
CN114333199B (zh) * | 2020-09-30 | 2024-03-26 | 中国电子科技集团公司第五十四研究所 | Alarm method, device, system, and chip |
CN113376614A (zh) * | 2021-06-10 | 2021-09-10 | 浙江大学 | Field seedling-row navigation line detection method based on lidar point cloud |
CN113376614B (zh) * | 2021-06-10 | 2022-07-15 | 浙江大学 | Field seedling-row navigation line detection method based on lidar point cloud |
CN114029953A (zh) * | 2021-11-18 | 2022-02-11 | 上海擎朗智能科技有限公司 | Method for determining the ground plane based on a depth sensor, robot, and robot system |
CN114029953B (zh) * | 2021-11-18 | 2022-12-20 | 上海擎朗智能科技有限公司 | Method for determining the ground plane based on a depth sensor, robot, and robot system |
CN114491739A (zh) * | 2021-12-30 | 2022-05-13 | 深圳市优必选科技股份有限公司 | Road traffic system construction method and apparatus, terminal device, and storage medium |
CN118172423A (zh) * | 2024-05-14 | 2024-06-11 | 整数智能信息技术(杭州)有限责任公司 | Method and apparatus for labeling road surface elements in time-series point cloud data, and electronic device |
Also Published As
Publication number | Publication date |
---|---|
CN109074490A (zh) | 2018-12-21 |
CN109074490B (zh) | 2023-01-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020006764A1 (fr) | Path detection method, related device and computer-readable storage medium | |
WO2020007189A1 (fr) | Obstacle avoidance notification method and apparatus, electronic device, and readable storage medium | |
CN108885791B (zh) | Ground detection method, related device, and computer-readable storage medium | |
EP4141737A1 (fr) | Target detection method and device | |
US11338807B2 (en) | Dynamic distance estimation output generation based on monocular video | |
CN106156723B (zh) | Vision-based intersection precise positioning method | |
WO2021098079A1 (fr) | Method for constructing a grid map using a binocular stereo camera | |
JP3729095B2 (ja) | Traveling path detection device | |
WO2018120040A1 (fr) | Obstacle detection method and device | |
WO2020154990A1 (fr) | Method and device for detecting the motion state of a target object, and storage medium | |
KR20200046437A (ko) | Localization method and device based on image and map data | |
CN112967345B (zh) | Fisheye camera extrinsic parameter calibration method, apparatus, and system | |
WO2021253245A1 (fr) | Method and device for identifying a vehicle lane-change tendency | |
CN113240734B (zh) | Bird's-eye-view-based vehicle position-straddling judgment method, apparatus, device, and medium | |
CN116993817B (zh) | Pose determination method and apparatus for a target vehicle, computer device, and storage medium | |
KR102373492B1 (ко) | Method for correcting camera misalignment by selectively using information generated by itself and information generated by other entities, and device using the same | |
CN111046719A (zh) | Apparatus and method for converting an image | |
WO2023092870A1 (fr) | Retaining wall detection method and system suitable for self-driving vehicles | |
CN114943941A (zh) | Target detection method and apparatus | |
CN103679121A (zh) | Method and system for detecting road edges using disparity images | |
CN112509054A (zh) | Dynamic camera extrinsic parameter calibration method | |
CN113111707A (zh) | Preceding vehicle detection and ranging method based on a convolutional neural network | |
US20220219679A1 (en) | Spatial parking place detection method and device, storage medium, and program product | |
CN109895697B (zh) | Driving assistance prompt system and method | |
CN115328153A (zh) | Sensor data processing method and system, and readable storage medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18925365 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 15-04-2021) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18925365 Country of ref document: EP Kind code of ref document: A1 |