CN111860321A - Obstacle identification method and system


Info

Publication number
CN111860321A
Authority
CN
China
Prior art keywords
point cloud
depth image
ground
obstacle
pixel points
Prior art date
Legal status
Granted
Application number
CN202010701313.5A
Other languages
Chinese (zh)
Other versions
CN111860321B (en)
Inventor
黄泽仕
余小欢
门阳
陈嵩
白云峰
Current Assignee
Zhejiang Guangpo Intelligent Technology Co ltd
Original Assignee
Zhejiang Guangpo Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Guangpo Intelligent Technology Co ltd
Priority to CN202010701313.5A
Publication of CN111860321A
Application granted
Publication of CN111860321B
Active legal status (current)
Anticipated expiration of legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1666 Avoiding collision or forbidden zones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Abstract

The invention discloses an obstacle identification method, which comprises the following steps: acquiring a first depth image of the surrounding environment in the driving route; cutting the first depth image to obtain a second depth image corresponding to the driving lane; converting the second depth image into point cloud information, performing ground projection according to the point cloud information to obtain the point cloud information of targets and of the ground, and deleting the point cloud information of the ground to obtain a projection image without the ground part; and performing cluster analysis on the projection image based on a variable-neighborhood region growing algorithm to obtain a plurality of obstacle cluster point sets and determine the position information of each obstacle. Correspondingly, the invention also discloses an obstacle identification system. The method and system optimize the obstacle avoidance algorithm so that obstacles can be monitored in real time more reliably.

Description

Obstacle identification method and system
Technical Field
The invention relates to the technical field of computer vision, in particular to a method and a system for identifying obstacles.
Background
With the development of science and technology, robots and unmanned delivery vehicles are being applied ever more widely, and obstacle identification and avoidance have become important embodiments of robot intelligence. As robots of all kinds continue to be researched and developed, the requirements on obstacle avoidance keep growing: real environments are often complex and change in real time, so obstacles must be identified accurately and their distance from the robot must be obtainable. The obstacle identification method of patent application CN2019111710204 comprises the following steps: S1, acquiring a first depth image of the surrounding environment in the driving route; S2, cutting the first depth image to obtain a second depth image corresponding to the driving lane; S3, performing ground fitting based on the second depth image, determining a ground straight line, removing the ground part according to the ground straight line, and acquiring a third depth image without the ground part; and S4, performing cluster analysis on the third depth image to obtain a plurality of obstacle cluster point sets and determining the position information of each obstacle. Because steps S3 and S4 project the ground laterally, the ground cannot be detected well when it is inclined to the left or right or contains potholes; incomplete depth data also causes obstacle segmentation faults, for example one obstacle may be segmented into two. The present invention therefore provides an improved obstacle detection solution that effectively solves the above technical problems.
Disclosure of Invention
Based on this, the invention aims to provide an obstacle identification method and system that monitor obstacles in real time during the movement of a robot, so that the obstacles can be avoided more reliably.
In order to achieve the above object, the present invention provides an obstacle identification method, including:
S1, acquiring a first depth image of the surrounding environment in the driving route;
S2, cutting the first depth image to obtain a second depth image corresponding to the driving lane;
S3, converting the second depth image into point cloud information, performing ground projection according to the point cloud information to obtain the point cloud information of targets and of the ground, and deleting the point cloud information of the ground to obtain a projection image without the ground part;
S4, performing cluster analysis on the projection image based on a variable-neighborhood region growing algorithm to obtain a plurality of obstacle cluster point sets, and determining the position information of each obstacle.
Preferably, the step S2 includes:
converting the first depth image into a point cloud image based on a depth-image-to-point-cloud calculation method;
cutting the point cloud image to the x-axis driving lane range, the y-axis driving height range and the z-axis detection range respectively to obtain a cut point cloud image;
and acquiring the second depth image corresponding to the driving lane according to the cut point cloud image.
Preferably, the step S3 includes:
S301, performing point cloud coordinate conversion on the second depth image to obtain the corresponding point cloud information;
S302, performing coordinate conversion on the point cloud information so that the (x, z) coordinate plane of the point cloud information becomes parallel to the ground;
S303, setting a plurality of grids under the (x, z) coordinate system, and projecting all pixel points of the second depth image into the grids;
S304, counting the number of projected pixel points falling in each grid, wherein if the number exceeds a preset number threshold, the projected pixel points in the grid are projections of a target, and otherwise they are projections of the ground;
S305, deleting the pixel points projected from the ground to obtain the projection image without the ground part.
Preferably, the step S302 specifically includes:
calibrating the installation angle of the depth camera and performing coordinate conversion on the point cloud information so that the (x, z) coordinate plane of the point cloud information becomes parallel to the ground.
Preferably, the step S303 includes: the number of grids in the (x, z) coordinate system is determined by the length of the z-axis in the (x, z) coordinate system and the size of the grids.
Preferably, the step S303 includes:
the size of the grids is related to the pixel density of the second depth image: the higher the pixel density, the smaller the grids;
the length of the z axis in the (x, z) coordinate system is determined by the maximum distance among the pixel points of the second depth image.
Preferably, the step S3 further includes:
dilating and eroding the projection image, and performing median filtering on the dilated and eroded projection image.
Preferably, the step S4 includes:
traversing from the origin of the projection image in order from top left to bottom right;
first performing region growing on the four points adjacent to the current point, namely above, below, left and right, and if no target point can be grown, performing region growing on the neighbors of those four adjacent points;
and so on, taking the points one by one as seed points for region growing until all points have completed the region growing operation.
Preferably, the method further comprises:
associating the pixel points of the projection image with the corresponding pixel points in the second depth image, determining the target pixel points and ground pixel points in the second depth image, and deleting the ground pixel points to obtain a third depth image without the ground;
and performing cluster analysis on the third depth image based on a region growing method to obtain a plurality of obstacle cluster point sets and determining the position information of each obstacle.
To achieve the above object, the present invention provides an obstacle recognition system, including:
the acquisition module is used for acquiring a first depth image of the surrounding environment in the driving route;
the cutting module is used for cutting the first depth image to obtain a second depth image corresponding to the driving lane;
the projection module is used for converting the second depth image into point cloud information, performing ground projection according to the point cloud information to obtain point cloud information of a target and the point cloud information of the ground, and deleting the point cloud information of the ground to obtain a projected image without a ground part;
and the clustering module is used for performing cluster analysis on the projection image based on a variable-neighborhood region growing algorithm to obtain a plurality of obstacle cluster point sets and determining the position information of each obstacle.
Compared with the prior art, the obstacle identification method and system of the invention have the following beneficial effects: during the travel of a robot or unmanned vehicle, they solve the problem that the ground cannot be segmented properly when it is inclined to the left or right or contains potholes, optimize the technical scheme of obstacle identification, and monitor obstacles in real time more reliably, so that the robot or unmanned vehicle can make more correct obstacle avoidance decisions. In addition, the optimized wide-range region growing algorithm of this scheme effectively overcomes the splitting of an obstacle caused by incomplete depth image data, so that each obstacle is identified accurately.
Drawings
Fig. 1 is a flowchart illustrating an obstacle identification method according to an embodiment of the present invention.
Fig. 2 is a system diagram of an obstacle identification system according to an embodiment of the invention.
Detailed Description
The present invention will be described in detail below with reference to the specific embodiments shown in the drawings. These embodiments do not limit the invention; structural, methodological, or functional changes made by those skilled in the art according to these embodiments are all included within the scope of protection of the present invention.
In one embodiment of the present invention as shown in fig. 1, the present invention provides an obstacle identification method, including:
S1, acquiring a first depth image of the surrounding environment in the driving route;
S2, cutting the first depth image to obtain a second depth image corresponding to the driving lane;
S3, converting the second depth image into point cloud information, performing ground projection according to the point cloud information to obtain the point cloud information of targets and of the ground, and deleting the point cloud information of the ground to obtain a projection image without the ground part;
S4, performing cluster analysis on the projection image based on a variable-neighborhood region growing algorithm to obtain a plurality of obstacle cluster point sets, and determining the position information of each obstacle.
In step S1, a first depth image of the surrounding environment in the driving route is acquired. A depth map is an image or image channel containing information about the distance from a viewpoint to the surfaces of objects in the scene; each pixel value is the actual distance between the sensor and the object. In a particular embodiment of the present invention, the electronic device acquires the first depth image of the surrounding environment in the driving route of a robot or unmanned vehicle using a depth camera, and performs parameter calibration of the depth camera, the calibrated parameters including the focal length and optical center of the depth camera.
According to an embodiment of the present invention, the step S1 further includes preprocessing the first depth image. The preprocessing includes: filtering the first depth image according to a bilateral filtering algorithm to obtain a filtered first depth image, the filtering preserving the integrity of obstacle edges in the first depth image; traversing the filtered first depth image line by line from the pixel at the top left corner to the bottom right corner, taking each traversed pixel as a central pixel; comparing the depth value of each pixel within a preset neighborhood of the central pixel with the depth value of the central pixel, and recording a pixel whenever the difference exceeds a preset difference threshold; counting the pixels in the neighborhood whose difference exceeds the preset difference threshold, and attaching the count to the central pixel; and, if the count for the central pixel exceeds a preset number, marking the central pixel as a flying pixel and rejecting it. Filtering out the flying pixels screens out the outlier pixels in the first depth image. The first depth image with flying pixels removed is then processed by dilating first and eroding second, filling the holes in the depth image.
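As a concrete illustration, the following is a minimal NumPy/OpenCV sketch of this preprocessing chain; the neighborhood size k, the thresholds diff_thresh and count_thresh, and the bilateral filter parameters are assumed tuning values, not taken from the patent.

```python
import cv2
import numpy as np

def preprocess_depth(depth, k=3, diff_thresh=0.3, count_thresh=4):
    """Bilateral-filter a metric depth map, reject flying pixels, then
    dilate-and-erode to fill holes. A pixel is 'flying' when more than
    count_thresh of its k x k neighbours differ from it by over diff_thresh."""
    filtered = cv2.bilateralFilter(depth.astype(np.float32),
                                   d=5, sigmaColor=0.1, sigmaSpace=5.0)
    h, w = filtered.shape
    pad = k // 2
    padded = np.pad(filtered, pad, mode='edge')
    counts = np.zeros((h, w), dtype=np.int32)
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[pad + dy:pad + dy + h, pad + dx:pad + dx + w]
            counts += np.abs(shifted - filtered) > diff_thresh
    cleaned = filtered.copy()
    cleaned[counts > count_thresh] = 0.0      # reject flying pixels
    kernel = np.ones((3, 3), np.uint8)
    dilated = cv2.dilate(cleaned, kernel)     # dilate first ...
    return cv2.erode(dilated, kernel)         # ... then erode (fills holes)
```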
According to an embodiment of the present invention, the preprocessing step further includes: performing image interpolation on the first depth image to increase its resolution in the y-axis direction. When the y-axis resolution of the depth camera is low, interpolating the first depth image increases the y-axis resolution and facilitates the detection of obstacles in the subsequent depth images. If the y-axis resolution provided by the depth camera is sufficient, this step may be skipped.
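For instance, doubling the y resolution could look like the sketch below; the factor of 2 and the nearest-neighbor interpolation are assumed choices (nearest-neighbor avoids inventing depth values between foreground and background, which would behave like new flying pixels):

```python
import cv2
import numpy as np

depth = np.random.rand(240, 640).astype(np.float32)   # stand-in depth map
# cv2.resize takes the target size as (width, height): keep x, double y.
up = cv2.resize(depth, (depth.shape[1], depth.shape[0] * 2),
                interpolation=cv2.INTER_NEAREST)
print(up.shape)   # (480, 640)
```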
In step S2, the first depth image is cut to obtain a second depth image corresponding to the driving lane. Specifically, the first depth image is converted into a point cloud image based on the depth-image-to-point-cloud calculation method; the point cloud image is cut to the x-axis driving lane range, the y-axis driving height range and the z-axis detection range respectively to obtain a cut point cloud image; and the second depth image corresponding to the driving lane is acquired according to the cut point cloud image. Cutting the x-axis driving lane range specifically includes setting a surplus width for the driving lane; the x-axis half-range is the sum of half the vehicle width and the surplus width. The surplus width guards against obstacles that suddenly enter the driving lane from the side. For example, with a vehicle width of 1.2 meters and a surplus width of 1 meter (the surplus distance required to detect obstacles on both sides of the vehicle), the x-axis driving lane range is (1.2/2 + 1) × 2 = 3.2 meters centered on the depth camera. Since the x coordinate of the depth camera in the point cloud image is 0, the retained x coordinates range from -1.6 to +1.6 meters, and the point cloud image is cut to this x-axis driving lane range. The y-axis driving height range is set mainly according to the clearance required for the vehicle to pass, and the z-axis detection range mainly according to the vehicle speed and the detection distance of the depth camera; all three ranges can be changed as required. Finally, according to the cut point cloud image, the corresponding range cut is applied to the second depth image, setting the pixel points that do not meet the range requirements to 0, which yields the second depth image corresponding to the driving lane.
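The corridor cut can be sketched as a boolean mask over the cloud; vehicle_width and surplus follow the 1.2 m / 1 m example above, while y_range and z_range are illustrative assumptions:

```python
import numpy as np

def crop_to_corridor(points, vehicle_width=1.2, surplus=1.0,
                     y_range=(-0.5, 2.0), z_range=(0.0, 4.0)):
    """Keep only points inside the driving corridor. points is an (N, 3)
    array of (x, y, z) metres with the camera at x = 0. The x half-range is
    half the vehicle width plus the surplus width: 1.2 / 2 + 1.0 = 1.6 m,
    i.e. the 3.2 m corridor worked out above."""
    half_x = vehicle_width / 2.0 + surplus
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    mask = ((np.abs(x) <= half_x) &
            (y >= y_range[0]) & (y <= y_range[1]) &
            (z > z_range[0]) & (z <= z_range[1]))
    return points[mask], mask
```

The same mask, mapped back to pixel indices, is what sets the out-of-range pixels of the second depth image to 0.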
In step S3, the second depth image is converted into point cloud information, ground projection is performed according to the point cloud information to obtain point cloud information of a target and point cloud information of the ground, and the point cloud information of the ground is deleted to obtain a projected image without a ground portion. According to an embodiment of the present invention, the step S3 includes:
S301, performing point cloud coordinate conversion on the second depth image to obtain the corresponding point cloud information;
A depth map stores, at each (x, y) pixel position of the image coordinate system, a pixel value representing the distance of the target point from the camera. A point cloud is a massive set of points expressing the spatial distribution and surface characteristics of targets under the same spatial reference system; once the spatial coordinates of every sampling point on an object's surface have been obtained, the resulting set of points is called a point cloud. The point cloud is a 3D concept: its coordinates (x, y, z) give the distance of the target point from the camera along the three coordinate axes. Following the calculation method for converting a depth map into a point cloud, point cloud coordinate conversion is performed on the second depth image, converting each pixel point of the depth image into a coordinate point in the corresponding point cloud coordinate system.
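A compact sketch of that back-projection follows, using the pinhole model with the focal length (fx, fy) and optical center (cx, cy) calibrated in step S1. It assumes pixel values are depth along the optical axis in meters, the usual depth-camera convention; if the values were true Euclidean distances, an extra per-ray normalization would be needed.

```python
import numpy as np

def depth_to_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map to an (N, 3) cloud with the pinhole model:
    x = (u - cx) * Z / fx,  y = (v - cy) * Z / fy,  z = Z."""
    v, u = np.indices(depth.shape)           # row (v) and column (u) indices
    z = depth.astype(np.float32)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    cloud = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    return cloud[cloud[:, 2] > 0]            # drop pixels with no depth
```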
S302, performing coordinate conversion on the point cloud information so that the (x, z) coordinate plane of the point cloud information becomes parallel to the ground;
The installation angle of the depth camera is calibrated, and the point cloud information is coordinate-converted accordingly, so that the (x, z) coordinate plane of the point cloud becomes parallel to the ground.
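A sketch of that leveling step, assuming the calibrated mounting angle is a single pitch about the x axis (pitch_deg is a hypothetical calibration input):

```python
import numpy as np

def level_cloud(points, pitch_deg):
    """Rotate the cloud about the x axis by the calibrated camera pitch so
    the (x, z) plane becomes parallel to the ground; e.g. a camera tilted
    15 degrees downward would use pitch_deg = -15."""
    t = np.radians(pitch_deg)
    rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(t), -np.sin(t)],
                   [0.0, np.sin(t), np.cos(t)]], dtype=np.float32)
    return points @ rx.T
```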
S303, setting a plurality of grids under the (x, z) coordinate system, and projecting all pixel points of the second depth image into the grids;
The projection method sets up a 2D plane coordinate system on the x and z axes and lays out a series of grids in it; these are the plurality of grids under the (x, z) coordinate system, each of equal length and width. All pixel points of the second depth image are projected into the (x, z) coordinate system, each falling into one of the grids. According to an embodiment of the present invention, the number of grids in the (x, z) coordinate system is determined by the length of the z axis and by the grid size. The grid size is related to the pixel density of the second depth image: the higher the pixel density, the smaller the grids. The length of the z axis is determined by the maximum distance among the pixel points of the second depth image; the z-axis length and the grid size are therefore fixed by the maximum pixel distance and the pixel density. For example, for a 640 × 480 depth image with a maximum distance of 4 meters, the grid count is typically 400 × 400, each grid measuring about 1 cm × 1 cm in the world coordinate system.
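The gridding can be sketched as a 2D histogram; cell and z_max mirror the 1 cm / 4 m example above, while x_half is an assumed value chosen to give the 400 × 400 grid:

```python
import numpy as np

def project_to_grid(points, cell=0.01, x_half=2.0, z_max=4.0):
    """Histogram the levelled cloud into an (x, z) grid. Returns the
    per-cell point counts plus each point's cell indices for later reuse."""
    nx = int(round(2 * x_half / cell))       # 400 columns
    nz = int(round(z_max / cell))            # 400 rows
    ix = np.floor((points[:, 0] + x_half) / cell).astype(int)
    iz = np.floor(points[:, 2] / cell).astype(int)
    valid = (ix >= 0) & (ix < nx) & (iz >= 0) & (iz < nz)
    counts = np.zeros((nz, nx), dtype=np.int32)
    np.add.at(counts, (iz[valid], ix[valid]), 1)   # unbuffered accumulation
    return counts, ix, iz, valid
```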
S304, counting the number of projected pixel points falling in each grid, wherein if the number exceeds a preset number threshold, the projected pixel points in the grid are projections of a target, and otherwise they are projections of the ground;
All pixel points of the second depth image are projected down into the grids, and the number of projected points falling into each grid is counted. Because the ground is a thin layer, a grid that covers only ground receives few points, whereas many pixel points of a target object project onto the same location, so grids on a target receive many points; this is what distinguishes grid points belonging to a target from those belonging to the ground. A number threshold is set: if the count of projected pixel points in a grid exceeds the threshold, those points are projections of a target; otherwise they are projections of the ground or stray points.
S305, deleting the pixel points projected from the ground to obtain the projection image without the ground part.
The ground projection points in the grids are deleted; the remaining points are the projections of targets, yielding the projection image without the ground part.
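Continuing the sketch, the count threshold splits the cloud into target and ground points and yields the binary projection image (count_thresh is an assumed tuning value):

```python
import numpy as np

def drop_ground(counts, ix, iz, valid, count_thresh=10):
    """Cells above count_thresh hold target projections; the rest are
    ground or stray points. Returns a boolean mask over the original points
    keeping only target points, and the ground-free projection image."""
    occupied = counts > count_thresh
    keep = np.zeros(ix.shape, dtype=bool)
    keep[valid] = occupied[iz[valid], ix[valid]]
    projection = occupied.astype(np.uint8)   # 1 = target cell, 0 = removed
    return keep, projection
```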
According to an embodiment of the present invention, the step S3 further includes: dilating and eroding the projection image, the size of the dilation and erosion kernel being determined by the depth data and adaptively adjusted, and then performing median filtering on the dilated and eroded projection image.
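A sketch with OpenCV; the fixed 5 × 5 kernel and median aperture are assumed values, whereas the patent adapts the kernel size to the depth data:

```python
import cv2
import numpy as np

def clean_projection(projection, kernel_size=5):
    """Dilate then erode the binary projection image (a closing that bridges
    small gaps between sparse grid points), then median-filter the result."""
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    dilated = cv2.dilate(projection, kernel)
    closed = cv2.erode(dilated, kernel)
    return cv2.medianBlur(closed, kernel_size)
```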
In step S4, cluster analysis is performed on the projection image based on a variable-neighborhood region growing algorithm to obtain a plurality of obstacle cluster point sets, and the position information of each obstacle is determined. The growth neighborhood of a traditional region growing algorithm is generally the four points adjacent to the traversed point: above, below, left and right. Compared with the captured first depth image, the grid points obtained by the projection processing are sparse and the continuity of the target points is relatively poor, so a traditional region growing algorithm cannot cluster the targets normally; the present invention therefore adopts a variable-neighborhood region growing algorithm for the cluster analysis. The seed points of the variable-neighborhood region growing algorithm are extracted by traversing the projection image from top left to bottom right, i.e. starting from the origin of the projection image; the points become seed points for region growing one by one, and a point inside an already grown region is not used as a seed point when it is traversed later. Specifically, traversal starts from the origin of the projection image and proceeds in order from top left to bottom right: region growing is first performed on the four points adjacent to the current point, namely above, below, left and right; if no target point can be grown, the region growing search range is expanded and region growing is performed on the neighbors of those four adjacent points; and so on, each point in turn seeding a region until all points have completed the region growing operation. Each obstacle cluster point set is then evaluated separately to obtain the minimum depth value of each obstacle in the projection image, i.e. the closest distance of each obstacle, and all obstacles are arranged from near to far according to this closest distance.
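The following sketch implements one plausible reading of the variable-neighborhood growth rule: the four axial neighbors are tried first and, only when none of them grows, the search widens to every cell within radius 2; max_radius is an assumed cap.

```python
import numpy as np
from collections import deque

def grow_obstacles(projection, max_radius=2):
    """Variable-neighbourhood region growing over the binary projection
    image. Each unlabelled occupied cell, scanned top-left to bottom-right,
    seeds a region. Returns a label image (0 = background)."""
    h, w = projection.shape
    labels = np.zeros((h, w), dtype=np.int32)
    n = 0

    def ring(y, x, r):
        if r == 1:                           # plain 4-neighbourhood first
            return ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
        return tuple((y + dy, x + dx)        # full window of radius r
                     for dy in range(-r, r + 1) for dx in range(-r, r + 1)
                     if (dy, dx) != (0, 0))

    for sy in range(h):
        for sx in range(w):
            if not projection[sy, sx] or labels[sy, sx]:
                continue
            n += 1
            labels[sy, sx] = n
            queue = deque([(sy, sx)])
            while queue:
                y, x = queue.popleft()
                for r in range(1, max_radius + 1):
                    grew = False
                    for ny, nx in ring(y, x, r):
                        if (0 <= ny < h and 0 <= nx < w
                                and projection[ny, nx] and not labels[ny, nx]):
                            labels[ny, nx] = n
                            queue.append((ny, nx))
                            grew = True
                    if grew:
                        break                # growth succeeded; don't widen
    return labels
```

From the resulting label image, the closest distance of each obstacle follows directly: the smallest occupied row index of a label, multiplied by the cell size, is that obstacle's nearest z, and the obstacles can then be sorted from near to far by this value.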
According to an embodiment of the invention, the method further comprises an alternative scheme applied after step S3: by associating the projection image with the depth image, the pixel points of the projection image are associated with the corresponding pixel points in the second depth image, the target pixel points and ground pixel points in the second depth image are determined, and the ground pixel points are deleted to obtain a third depth image without the ground; cluster analysis is then performed on the third depth image based on a region growing method to obtain a plurality of obstacle cluster point sets and determine the position information of each obstacle.
In one embodiment of the present invention as shown in fig. 2, the present invention provides an obstacle identification system, the system comprising:
an acquisition module 20, configured to acquire a first depth image of the surrounding environment in the driving route;
a cutting module 21, configured to cut the first depth image to obtain a second depth image corresponding to the driving lane;
a projection module 22, configured to convert the second depth image into point cloud information, perform ground projection according to the point cloud information to obtain the point cloud information of targets and of the ground, and delete the point cloud information of the ground to obtain a projection image without the ground part;
and a clustering module 23, configured to perform cluster analysis on the projection image based on a variable-neighborhood region growing algorithm, obtain a plurality of obstacle cluster point sets, and determine the position information of each obstacle.
The acquisition module acquires a first depth image of the surrounding environment in the driving route and filters it according to a bilateral filtering algorithm; the filtering preserves the integrity of obstacle edges in the first depth image, after which flying pixels are rejected.
The cutting module cuts the first depth image to obtain a second depth image corresponding to the driving lane. The first depth image is converted into a point cloud image based on the depth-image-to-point-cloud calculation method; the point cloud image is cut to the x-axis driving lane range, the y-axis driving height range and the z-axis detection range respectively; and the second depth image corresponding to the driving lane is acquired from the cut point cloud image, with pixel points outside the required ranges set to 0.
The projection module converts the second depth image into point cloud information, performs ground projection according to the point cloud information to obtain the point cloud information of targets and of the ground, and deletes the point cloud information of the ground to obtain the projection image without the ground part. Point cloud coordinate conversion is performed on the second depth image to obtain the corresponding point cloud information; the point cloud is coordinate-converted so that its (x, z) coordinate plane becomes parallel to the ground; a plurality of grids are set under the (x, z) coordinate system and all pixel points of the second depth image are projected into them; the number of projected pixel points falling in each grid is counted, and if the number exceeds the preset number threshold the points in that grid are projections of a target, otherwise projections of the ground; finally the ground projection points are deleted, giving the projection image without the ground part.
The clustering module performs cluster analysis on the projection image based on the variable-neighborhood region growing algorithm to obtain a plurality of obstacle cluster point sets and determine the position information of each obstacle. The projection image is clustered into the obstacle cluster point sets; each set is then evaluated to find the minimum depth value of that obstacle in the projection image, i.e. the closest distance of each obstacle.
Although the preferred embodiments of the present invention have been disclosed for illustrative purposes, those skilled in the art will appreciate that various modifications, additions and substitutions are possible, without departing from the scope and spirit of the invention as disclosed in the accompanying claims.

Claims (10)

1. An obstacle recognition method, comprising:
S1, acquiring a first depth image of the surrounding environment in the driving route;
S2, cutting the first depth image to obtain a second depth image corresponding to the driving lane;
S3, converting the second depth image into point cloud information, performing ground projection according to the point cloud information to obtain the point cloud information of targets and of the ground, and deleting the point cloud information of the ground to obtain a projection image without the ground part;
S4, performing cluster analysis on the projection image based on a variable-neighborhood region growing algorithm to obtain a plurality of obstacle cluster point sets, and determining the position information of each obstacle.
2. The obstacle identification method according to claim 1, wherein the step S2 includes:
converting the first depth image into a point cloud image based on a depth-image-to-point-cloud calculation method;
cutting the point cloud image to the x-axis driving lane range, the y-axis driving height range and the z-axis detection range respectively to obtain a cut point cloud image;
and acquiring the second depth image corresponding to the driving lane according to the cut point cloud image.
3. The obstacle identification method according to claim 1, wherein the step S3 includes:
S301, performing point cloud coordinate conversion on the second depth image to obtain the corresponding point cloud information;
S302, performing coordinate conversion on the point cloud information so that the (x, z) coordinate plane of the point cloud information becomes parallel to the ground;
S303, setting a plurality of grids under the (x, z) coordinate system, and projecting all pixel points of the second depth image into the grids;
S304, counting the number of projected pixel points falling in each grid, wherein if the number exceeds a preset number threshold, the projected pixel points in the grid are projections of a target, and otherwise they are projections of the ground;
S305, deleting the pixel points projected from the ground to obtain the projection image without the ground part.
4. The obstacle identification method according to claim 3, wherein the step S302 specifically includes:
calibrating the installation angle of the depth camera and performing coordinate conversion on the point cloud information so that the (x, z) coordinate plane of the point cloud information becomes parallel to the ground.
5. The obstacle identification method according to claim 3, wherein the step S303 includes:
the number of grids in the (x, z) coordinate system is determined by the length of the z-axis in the (x, z) coordinate system and the size of the grids.
6. The obstacle identification method according to claim 5, wherein the step S303 includes:
the size of the grids is related to the pixel density of the second depth image: the higher the pixel density, the smaller the grids;
the length of the z-axis in the (x, z) coordinate system is determined by the maximum distance of the pixel points of the second depth image.
7. The obstacle identification method according to claim 1, wherein the step S3 further includes:
dilating and eroding the projection image, and performing median filtering on the dilated and eroded projection image.
8. The obstacle identification method according to claim 1, wherein the step S4 includes:
traversing from the origin of the projection image in order from top left to bottom right;
first performing region growing on the four points adjacent to the current point, namely above, below, left and right, and if no target point can be grown, performing region growing on the neighbors of those four adjacent points;
and so on, taking the points one by one as seed points for region growing until all points have completed the region growing operation.
9. The obstacle identification method of claim 1, further comprising:
associating the pixel points of the projection image with the corresponding pixel points in the second depth image, determining the target pixel points and ground pixel points in the second depth image, and deleting the ground pixel points to obtain a third depth image without the ground;
and performing cluster analysis on the third depth image based on a region growing method to obtain a plurality of obstacle cluster point sets and determining the position information of each obstacle.
10. An obstacle identification system, characterized in that the system comprises:
an acquisition module, used for acquiring a first depth image of the surrounding environment in the driving route;
a cutting module, used for cutting the first depth image to obtain a second depth image corresponding to the driving lane;
a projection module, used for converting the second depth image into point cloud information, performing ground projection according to the point cloud information to obtain the point cloud information of targets and of the ground, and deleting the point cloud information of the ground to obtain a projection image without the ground part;
and a clustering module, used for performing cluster analysis on the projection image based on a variable-neighborhood region growing algorithm to obtain a plurality of obstacle cluster point sets and determining the position information of each obstacle.
CN202010701313.5A (filed 2020-07-20): Obstacle recognition method and system. Granted as CN111860321B; status: Active.

Priority Applications (1)

Application Number    Priority Date    Filing Date    Title
CN202010701313.5A     2020-07-20       2020-07-20     Obstacle recognition method and system (granted as CN111860321B)

Applications Claiming Priority (1)

Application Number    Priority Date    Filing Date    Title
CN202010701313.5A     2020-07-20       2020-07-20     Obstacle recognition method and system (granted as CN111860321B)

Publications (2)

Publication Number    Publication Date
CN111860321A          2020-10-30
CN111860321B          2023-12-22

Family

ID=73001649

Family Applications (1)

Application Number    Title                                              Priority Date    Filing Date
CN202010701313.5A     Obstacle recognition method and system (Active)    2020-07-20       2020-07-20

Country Status (1)

Country Link
CN (1) CN111860321B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113075692A (en) * 2021-03-08 2021-07-06 北京石头世纪科技股份有限公司 Target detection and control method, system, device and storage medium
CN113487669A (en) * 2021-07-07 2021-10-08 广东博智林机器人有限公司 Job track determination method and device, computer equipment and storage medium
CN116630390A (en) * 2023-07-21 2023-08-22 山东大学 Obstacle detection method, system, equipment and medium based on depth map template

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160154999A1 (en) * 2014-12-02 2016-06-02 Nokia Technologies Oy Objection recognition in a 3d scene
CN106339669A (en) * 2016-08-16 2017-01-18 长春理工大学 Multiline point cloud data machine learning human target recognition method and anti-collision device
US20170191826A1 (en) * 2016-01-05 2017-07-06 Texas Instruments Incorporated Ground Plane Estimation in a Computer Vision System
CN107292276A (en) * 2017-06-28 2017-10-24 武汉大学 A kind of vehicle-mounted cloud clustering method and system
CN108509820A (en) * 2017-02-23 2018-09-07 百度在线网络技术(北京)有限公司 Method for obstacle segmentation and device, computer equipment and readable medium
CN109711410A (en) * 2018-11-20 2019-05-03 北方工业大学 Three-dimensional object rapid segmentation and identification method, device and system
CN109961440A (en) * 2019-03-11 2019-07-02 重庆邮电大学 A kind of three-dimensional laser radar point cloud Target Segmentation method based on depth map
CN110065067A (en) * 2018-01-23 2019-07-30 丰田自动车株式会社 Motion profile generating device
CN110390237A (en) * 2018-04-23 2019-10-29 北京京东尚科信息技术有限公司 Processing Method of Point-clouds and system
CN110879991A (en) * 2019-11-26 2020-03-13 杭州光珀智能科技有限公司 Obstacle identification method and system
CN110893617A (en) * 2018-09-13 2020-03-20 深圳市优必选科技有限公司 Obstacle detection method and device and storage device
CN111060923A (en) * 2019-11-26 2020-04-24 武汉乐庭软件技术有限公司 Multi-laser-radar automobile driving obstacle detection method and system
CN111079765A (en) * 2019-12-13 2020-04-28 电子科技大学 Sparse point cloud densification and pavement removal method based on depth map
US20200158874A1 (en) * 2018-11-19 2020-05-21 Dalong Li Traffic recognition and adaptive ground removal based on lidar point cloud statistics
CN111210429A (en) * 2020-04-17 2020-05-29 中联重科股份有限公司 Point cloud data partitioning method and device and obstacle detection method and device
CN111291708A (en) * 2020-02-25 2020-06-16 华南理工大学 Transformer substation inspection robot obstacle detection and identification method integrated with depth camera
CN111337941A (en) * 2020-03-18 2020-06-26 中国科学技术大学 Dynamic obstacle tracking method based on sparse laser radar data

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160154999A1 (en) * 2014-12-02 2016-06-02 Nokia Technologies Oy Objection recognition in a 3d scene
US20170191826A1 (en) * 2016-01-05 2017-07-06 Texas Instruments Incorporated Ground Plane Estimation in a Computer Vision System
CN107016705A (en) * 2016-01-05 2017-08-04 德州仪器公司 Ground level estimation in computer vision system
CN106339669A (en) * 2016-08-16 2017-01-18 长春理工大学 Multiline point cloud data machine learning human target recognition method and anti-collision device
CN108509820A (en) * 2017-02-23 2018-09-07 百度在线网络技术(北京)有限公司 Method for obstacle segmentation and device, computer equipment and readable medium
CN107292276A (en) * 2017-06-28 2017-10-24 武汉大学 A kind of vehicle-mounted cloud clustering method and system
CN110065067A (en) * 2018-01-23 2019-07-30 丰田自动车株式会社 Motion profile generating device
JP2019126866A (en) * 2018-01-23 2019-08-01 トヨタ自動車株式会社 Motion trajectory generation apparatus
CN110390237A (en) * 2018-04-23 2019-10-29 北京京东尚科信息技术有限公司 Processing Method of Point-clouds and system
CN110893617A (en) * 2018-09-13 2020-03-20 深圳市优必选科技有限公司 Obstacle detection method and device and storage device
US20200158874A1 (en) * 2018-11-19 2020-05-21 Dalong Li Traffic recognition and adaptive ground removal based on lidar point cloud statistics
CN109711410A (en) * 2018-11-20 2019-05-03 北方工业大学 Three-dimensional object rapid segmentation and identification method, device and system
CN109961440A (en) * 2019-03-11 2019-07-02 重庆邮电大学 A kind of three-dimensional laser radar point cloud Target Segmentation method based on depth map
CN110879991A (en) * 2019-11-26 2020-03-13 杭州光珀智能科技有限公司 Obstacle identification method and system
CN111060923A (en) * 2019-11-26 2020-04-24 武汉乐庭软件技术有限公司 Multi-laser-radar automobile driving obstacle detection method and system
CN111079765A (en) * 2019-12-13 2020-04-28 电子科技大学 Sparse point cloud densification and pavement removal method based on depth map
CN111291708A (en) * 2020-02-25 2020-06-16 华南理工大学 Transformer substation inspection robot obstacle detection and identification method integrated with depth camera
CN111337941A (en) * 2020-03-18 2020-06-26 中国科学技术大学 Dynamic obstacle tracking method based on sparse laser radar data
CN111210429A (en) * 2020-04-17 2020-05-29 中联重科股份有限公司 Point cloud data partitioning method and device and obstacle detection method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Huang Xing (黄兴); Ying Qunwei (应群伟): "Obstacle recognition using fused lidar and camera information" (应用激光雷达与相机信息融合的障碍物识别), Computer Measurement & Control (计算机测量与控制), no. 01, pages 189-193 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113075692A (en) * 2021-03-08 2021-07-06 北京石头世纪科技股份有限公司 Target detection and control method, system, device and storage medium
CN113487669A (en) * 2021-07-07 2021-10-08 广东博智林机器人有限公司 Job track determination method and device, computer equipment and storage medium
CN116630390A (en) * 2023-07-21 2023-08-22 山东大学 Obstacle detection method, system, equipment and medium based on depth map template
CN116630390B (en) * 2023-07-21 2023-10-17 山东大学 Obstacle detection method, system, equipment and medium based on depth map template

Also Published As

Publication number Publication date
CN111860321B (en) 2023-12-22

Similar Documents

Publication Publication Date Title
CN108519605B (en) Road edge detection method based on laser radar and camera
CN110264567B (en) Real-time three-dimensional modeling method based on mark points
CN110879991B (en) Obstacle identification method and system
Cheng et al. 3D building model reconstruction from multi-view aerial imagery and lidar data
CN111860321B (en) Obstacle recognition method and system
CN111598916A (en) Preparation method of indoor occupancy grid map based on RGB-D information
CN112613378B (en) 3D target detection method, system, medium and terminal
CN112115980A (en) Binocular vision odometer design method based on optical flow tracking and point line feature matching
CN111340922A (en) Positioning and mapping method and electronic equipment
CN115049700A (en) Target detection method and device
CN113593017A (en) Method, device and equipment for constructing surface three-dimensional model of strip mine and storage medium
CN111998862B (en) BNN-based dense binocular SLAM method
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
CN113205604A (en) Feasible region detection method based on camera and laser radar
CN107679458A (en) The extracting method of roadmarking in a kind of road color laser point cloud based on K Means
CN115861968A (en) Dynamic obstacle removing method based on real-time point cloud data
CN113008247A (en) High-precision map construction method and device for mining area
CN114842340A (en) Robot binocular stereoscopic vision obstacle sensing method and system
CN116958837A (en) Municipal facilities fault detection system based on unmanned aerial vehicle
CN115937810A (en) Sensor fusion method based on binocular camera guidance
CN115690138A (en) Road boundary extraction and vectorization method fusing vehicle-mounted image and point cloud
CN112365600B (en) Three-dimensional object detection method
Feng et al. Automated extraction of building instances from dual-channel airborne LiDAR point clouds
CN115994934B (en) Data time alignment method and device and domain controller
CN111414848B (en) Full-class 3D obstacle detection method, system and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant