CN111860321B - Obstacle recognition method and system - Google Patents

Obstacle recognition method and system

Info

Publication number
CN111860321B
CN111860321B
Authority
CN
China
Prior art keywords
point cloud
depth image
ground
image
obstacle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010701313.5A
Other languages
Chinese (zh)
Other versions
CN111860321A (en)
Inventor
黄泽仕
余小欢
门阳
陈嵩
白云峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Yixun Technology Co ltd
Original Assignee
Zhejiang Guangpo Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Guangpo Intelligent Technology Co ltd
Priority to CN202010701313.5A
Publication of CN111860321A
Application granted
Publication of CN111860321B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1666 Avoiding collision or forbidden zones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an obstacle identification method, which comprises the following steps: acquiring a first depth image of the surrounding environment in the driving route; cutting the first depth image to obtain a second depth image corresponding to the driving lane; converting the second depth image into point cloud information, performing ground projection according to the point cloud information to obtain target point cloud information and ground point cloud information, and deleting the ground point cloud information to obtain a projection image without the ground part; and carrying out cluster analysis on the projection image based on a neighboring region growing algorithm to obtain a plurality of obstacle cluster point sets and determine the position information of each obstacle. Correspondingly, the invention also discloses an obstacle recognition system. By optimizing the obstacle avoidance algorithm, the invention enables obstacles to be monitored in real time more effectively.

Description

Obstacle recognition method and system
Technical Field
The invention relates to the technical field of computer vision, in particular to a method and a system for identifying obstacles.
Background
With the development of science and technology, robots and unmanned delivery vehicles are increasingly widely used, and obstacle recognition and avoidance have become important manifestations of robot intelligence. As various robots continue to be developed, the requirements on obstacle avoidance keep growing: the real environment is often complex and changes in real time, so obstacles must be accurately identified and their distance from the robot acquired. The obstacle identification method of patent application CN2019111710204 comprises the following steps: S1, acquiring a first depth image of the surrounding environment in a driving route; S2, cutting the first depth image to obtain a second depth image corresponding to the driving lane; S3, performing ground fitting based on the second depth image, determining a ground straight line, removing the ground part according to the ground straight line, and obtaining a third depth image without the ground part; S4, carrying out cluster analysis on the third depth image to obtain a plurality of obstacle cluster point sets, and determining the position information of each obstacle. Steps S3 and S4 adopt a technical scheme that projects the ground laterally, so the ground cannot be detected well when it is laterally inclined or recessed; there is also the technical problem that incomplete depth data can split one obstacle into two. The invention therefore provides an improved obstacle detection scheme that effectively solves these technical problems.
Disclosure of Invention
Based on the above, the invention aims to provide an obstacle recognition method and system that can better monitor obstacles in real time during the movement of a robot, so that obstacles can be better avoided.
In order to achieve the above object, the present invention provides an obstacle recognition method, the method comprising:
s1, acquiring a first depth image of the surrounding environment in a driving route;
s2, cutting the first depth image to obtain a second depth image corresponding to the driving lane;
s3, converting the second depth image into point cloud information, performing ground projection according to the point cloud information to obtain target point cloud information and ground point cloud information, and deleting the ground point cloud information to obtain a projection image without a ground part;
and S4, carrying out cluster analysis on the projection image based on a neighboring region growing algorithm to obtain a plurality of obstacle cluster point sets, and determining the position information of each obstacle.
Preferably, the step S2 includes:
converting the first depth image into a point cloud image based on a depth-image-to-point-cloud conversion method;
cutting the point cloud image in an x-axis driving lane range, cutting the point cloud image in a y-axis driving height range and cutting the point cloud image in a z-axis detection range respectively to obtain a cut point cloud image;
and acquiring a second depth image corresponding to the driving lane according to the cut point cloud image.
Preferably, the step S3 includes:
s301, performing point cloud coordinate conversion on the second depth image to obtain corresponding point cloud information;
s302, carrying out coordinate conversion on the point cloud information, and converting the (x, z) coordinate system plane in the point cloud information into a state parallel to the ground;
s303, setting a plurality of grids under the (x, z) coordinate system, and projecting all pixel points in the second depth image into each grid;
s304, counting the number of projected pixel points falling in each grid, wherein if the number of the projected pixel points is larger than a preset number threshold, the projected pixel points in the grid are the projected pixel points of the target, otherwise, the projected pixel points are the projected pixel points of the ground;
and S305, deleting the projected pixel points of the ground to obtain a projected image without the ground part.
Preferably, the step S302 specifically includes:
and (3) calibrating the installation angle of the depth camera, performing coordinate conversion on the point cloud information, and converting the (x, z) coordinate system plane in the point cloud information into a state parallel to the ground.
Preferably, the step S303 includes: the number of grids in the (x, z) coordinate system is determined by the length of the z-axis in the (x, z) coordinate system and the size of the grids.
Preferably, the step S303 includes:
the size of the grid is related to the pixel density of the second depth image, and the grid is smaller when the pixel density is larger;
the length of the z-axis in the (x, z) coordinate system is determined by the maximum distance of the pixels of the second depth image.
Preferably, the step S3 further includes:
and expanding and corroding the projection image, and performing median filtering treatment on the expanded and corroded projection image.
Preferably, the step S4 includes:
traversing from the origin of the projection image, in order from top left to bottom right;
firstly carrying out region growing on the four points adjacent to the origin, namely the points above, below, to the left and to the right of it, and if the target cannot grow, carrying out region growing on the neighboring points of those four adjacent points;
and so on, taking the points one by one as seed points for region growing until all points have completed the region growing operation.
Preferably, the method further comprises:
associating the pixel points of the projection image with the corresponding pixel points in the second depth image, determining a target pixel point and a ground pixel point in the second depth image, and deleting the ground pixel point to obtain a third depth image without ground;
and carrying out cluster analysis on the third depth image based on a region growing method, obtaining a plurality of obstacle cluster point sets, and determining the position information of each obstacle.
To achieve the above object, the present invention provides an obstacle recognition system, comprising:
the acquisition module is used for acquiring a first depth image of the surrounding environment in the driving route;
the cutting module is used for cutting the first depth image to obtain a second depth image corresponding to the driving lane;
the projection module is used for converting the second depth image into point cloud information, performing ground projection according to the point cloud information to obtain target point cloud information and ground point cloud information, and deleting the ground point cloud information to obtain a projection image without a ground part;
and the clustering module is used for carrying out cluster analysis on the projection image based on a neighboring region growing algorithm, acquiring a plurality of obstacle cluster point sets and determining the position information of each obstacle.
Compared with the prior art, the obstacle recognition method and system provided by the invention have the following beneficial effects: during the running of a robot or unmanned vehicle, the technical problem that the ground cannot be well segmented when it is inclined in the left-right direction or recessed is solved, the obstacle recognition scheme is optimized, and obstacles can be monitored in real time more effectively, so that robots and unmanned vehicles can make more correct obstacle avoidance decisions; in addition, the optimized wide-range region growing algorithm adopted by the technical scheme effectively solves the problem of obstacle fragmentation caused by incomplete depth image data, so that obstacles can be accurately identified.
Drawings
Fig. 1 is a flow chart of an obstacle recognition method according to an embodiment of the present invention.
Fig. 2 is a system schematic diagram of an obstacle recognition system according to one embodiment of the invention.
Detailed Description
The present invention will be described in detail below with reference to the specific embodiments shown in the drawings, but these embodiments do not limit the present invention, and structural, methodological or functional modifications made by those skilled in the art based on these embodiments are all included in the scope of the present invention.
In one embodiment of the present invention as shown in fig. 1, the present invention provides an obstacle identifying method, the method comprising:
s1, acquiring a first depth image of the surrounding environment in a driving route;
s2, cutting the first depth image to obtain a second depth image corresponding to the driving lane;
s3, converting the second depth image into point cloud information, performing ground projection according to the point cloud information to obtain target point cloud information and ground point cloud information, and deleting the ground point cloud information to obtain a projection image without a ground part;
and S4, carrying out cluster analysis on the projection image based on a neighboring region growing algorithm to obtain a plurality of obstacle cluster point sets, and determining the position information of each obstacle.
In step S1, a first depth image of the surrounding environment in the driving route is acquired. A depth map is an image or image channel containing information about the distance from the viewpoint to the surfaces of objects in the scene; each pixel value is the actual distance between the sensor and the object. In a specific embodiment of the present invention, the electronic device acquires the first depth image of the surrounding environment in the driving route of the robot or unmanned vehicle using a depth camera, and parameter calibration is carried out on the depth camera, including its focal length, optical center and the like.
According to a specific embodiment of the present invention, the step S1 further includes a preprocessing step for the first depth image, where the preprocessing step includes: filtering the first depth image according to a bilateral filtering algorithm to obtain a filtered first depth image, the filtering ensuring the integrity of obstacle edges in the first depth image; traversing the filtered first depth image row by row from the pixel at the upper left corner to the lower right corner, taking each traversed pixel as a central pixel; comparing the depth value of each pixel in a preset neighborhood of the central pixel with the depth value of the central pixel, and recording the pixel if the difference is larger than a preset difference threshold; counting all pixels in the neighborhood whose difference exceeds the preset difference threshold, and recording this count for the central pixel; if the count corresponding to the central pixel is greater than a preset number, the central pixel is a flying pixel and is removed. Filtering out the flying pixels screens out and removes the outlier pixels in the first depth image. After the flying pixels are removed, the first depth image is dilated and then eroded to fill the holes in the depth image.
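As a rough illustration of this preprocessing, the sketch below (in Python with NumPy and OpenCV, which the patent does not prescribe) counts, for each pixel, how many neighbors differ from it by more than a threshold and removes the pixel when the count is too high; the window size and both thresholds are assumed values, not taken from the patent.

    import cv2
    import numpy as np

    def remove_flying_pixels(depth, diff_thresh=100.0, count_thresh=5, ksize=3):
        """Zero out pixels whose depth differs from too many of their neighbors."""
        d = depth.astype(np.float32)
        pad = ksize // 2
        padded = np.pad(d, pad, mode="edge")
        h, w = d.shape
        count = np.zeros((h, w), dtype=np.int32)
        for dy in range(-pad, pad + 1):          # compare against every offset
            for dx in range(-pad, pad + 1):      # in the ksize x ksize neighborhood
                if dy == 0 and dx == 0:
                    continue
                shifted = padded[pad + dy:pad + dy + h, pad + dx:pad + dx + w]
                count += (np.abs(d - shifted) > diff_thresh).astype(np.int32)
        cleaned = d.copy()
        cleaned[count > count_thresh] = 0        # remove flying pixels
        return cleaned

    # Synthetic 480 x 640 depth image in millimeters with one outlier pixel.
    depth = np.full((480, 640), 2000.0, dtype=np.float32)
    depth[240, 320] = 3500.0                     # a flying pixel
    filtered = cv2.bilateralFilter(depth, d=5, sigmaColor=50, sigmaSpace=50)
    cleaned = remove_flying_pixels(filtered)
    kernel = np.ones((5, 5), np.uint8)
    # Dilate first and then erode (a morphological closing) to fill holes.
    closed = cv2.erode(cv2.dilate(cleaned, kernel), kernel)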
According to a specific embodiment of the present invention, the preprocessing step further includes: performing image interpolation on the first depth image to increase its resolution in the y-axis direction. When the y-axis resolution of the depth camera is low, image interpolation increases the y-axis resolution and facilitates obstacle detection in the subsequent depth images. This step may be skipped if the y-axis resolution provided by the depth camera is sufficient.
In the step S2, the first depth image is cut to obtain a second depth image corresponding to the driving lane. Specifically, the first depth image is converted into a point cloud image based on a depth-image-to-point-cloud conversion method; the point cloud image is cut to an x-axis driving lane range, a y-axis driving height range and a z-axis detection range respectively, obtaining a cut point cloud image; and the second depth image corresponding to the driving lane is acquired according to the cut point cloud image. Cutting to the x-axis driving lane range specifically comprises: setting a surplus width for the driving lane; the x-axis driving lane range is the sum of half the vehicle width and the surplus width. The surplus width ensures that no obstacle can suddenly appear in the driving lane. For example, if the vehicle width is 1.2 meters and the surplus width, i.e. the extra distance required for obstacle detection on both sides of the vehicle, is set to 1 meter, then with the depth camera as the center the x-axis driving lane range is (1.2/2 + 1) × 2 = 3.2 meters. Since the x coordinate of the depth camera in the point cloud is 0, this calculation gives an x-coordinate range of −1.6 to +1.6 meters, and the x-axis driving lane range is cut on the point cloud image accordingly. The y-axis driving height range is mainly set according to the height required for the vehicle to pass. The z-axis detection range is mainly set according to the driving speed of the vehicle and the detection distance of the depth camera. The x-axis driving lane range, the y-axis driving height range and the z-axis detection range can all be changed as required. The corresponding range is then cut on the depth image according to the cut point cloud image, and the pixel points that do not meet the range requirements are set to 0, obtaining the second depth image corresponding to the driving lane.
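The cropping of step S2 can be pictured with the numbers from the example above; in the sketch below the half-lane width of 1.6 m follows from (1.2/2 + 1), while the driving-height window and the detection distance are assumed placeholder values.

    import numpy as np

    def lane_crop_mask(points,
                       half_lane=1.6,        # (1.2 / 2 + 1) m, per the example above
                       y_range=(-0.5, 2.0),  # assumed driving-height window (m)
                       z_max=4.0):           # assumed detection distance (m)
        """Boolean mask of points inside the x/y/z driving ranges."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        return ((np.abs(x) <= half_lane)
                & (y >= y_range[0]) & (y <= y_range[1])
                & (z > 0.0) & (z <= z_max))

    # Demo on synthetic points; in the method each 3D point also maps back to
    # a depth-image pixel, and pixels whose points fail the mask are set to 0.
    rng = np.random.default_rng(0)
    pts = rng.uniform(-5.0, 5.0, size=(1000, 3))
    cropped = pts[lane_crop_mask(pts)]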
In the step S3, the second depth image is converted into point cloud information, and ground projection is performed according to the point cloud information, so as to obtain target point cloud information and ground point cloud information, and the ground point cloud information is deleted, so as to obtain a projection image without a ground part. According to a specific embodiment of the present invention, the step S3 includes:
s301, performing point cloud coordinate conversion on the second depth image to obtain corresponding point cloud information;
the depth map is a coordinate in the image coordinate system (x, y) representing a pixel value of each pixel point in the image, the pixel value representing the euclidean distance of the target point from the camera. The point cloud is a massive point set expressing the target space distribution and the target surface characteristics under the same space reference system, and after the space coordinates of each sampling point of the object surface are obtained, the point set is obtained and is called as 'point cloud'. The point cloud is a 3D concept, and its coordinate system (x, y, z) is the distance of the target point from the camera in three coordinate axis directions. And according to the calculation method of the depth map conversion point cloud, carrying out point cloud coordinate conversion on the second depth image, and converting pixel points in the depth image into coordinate points under a corresponding point cloud coordinate system.
S302, carrying out coordinate conversion on the point cloud information, and converting the (x, z) coordinate system plane in the point cloud information into a state parallel to the ground;
and (3) calibrating the installation angle of the depth camera, performing coordinate conversion on the point cloud information, and converting the (x, z) coordinate system plane in the point cloud information into a state parallel to the ground.
S303, setting a plurality of grids under the (x, z) coordinate system, and projecting all pixel points in the second depth image into each grid;
the projection method is to set a series of grids in a 2D plane coordinate system with x and z axes, and the grids are to set a plurality of grids under the (x and z) coordinate system, and the length and the width of each grid are equal. And projecting all pixel points in the second depth image to the (x, z) coordinate system and projecting all pixel points to each grid under the coordinate system. According to a specific embodiment of the present invention, the number of grids in the (x, z) coordinate system is determined by the length of the z-axis in the (x, z) coordinate system and the size of the grids. The size of the grid is related to the pixel density of the second depth image, the grid being smaller as the pixel density is greater. The length of the z-axis in the (x, z) coordinate system is determined by the maximum distance of the pixels of the second depth image. The length of the z-axis and the size of the grid are determined by the maximum distance of the pixels and the pixel density of the second depth image. For example, in a depth image of 640×480, in the case of a farthest distance of 4 meters, the number of grids is generally set to 400×400 in the coordinate system, and the size of each grid is about 1cm×1cm in the world coordinate system.
S304, counting the number of projected pixel points falling in each grid, wherein if the number of the projected pixel points is larger than a preset number threshold, the projected pixel points in the grid are the projected pixel points of the target, otherwise, the projected pixel points are the projected pixel points of the ground;
and projecting all pixel points in the second depth image to the ground of each grid, wherein projected pixel points falling in the grid exist in the grid, and counting the number of the projected pixel points in the grid. Because the ground is a thin layer, the number of points in the grid projected by the ground is relatively small, and the number of projected pixels in the same grid of the target object is relatively large because of relatively large number of pixels at the same position, so that whether the points of the grid are the points of the target object or the points of the ground can be distinguished by the technical scheme. Setting a quantity threshold, if the quantity of the projected pixel points is larger than the quantity threshold, the projected pixel points in the grid are the projected pixel points of the target, otherwise, the projected pixel points or the impurity points of the ground are the projected pixel points.
And S305, deleting the projected pixel points of the ground to obtain a projected image without the ground part.
The projected pixel points of the ground are deleted from the grids, and the remaining points are projected pixel points of targets, so that a projection image without the ground part is obtained.
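Steps S303 to S305 can be sketched as follows, using the example figures above (4 m farthest distance and a 400×400 grid, i.e. roughly 1 cm × 1 cm cells); the count threshold and the lane half-width are assumed values.

    import numpy as np

    def ground_projection(points, grid_n=400, x_half=2.0, z_max=4.0,
                          count_thresh=8):
        """Binary grid of target (non-ground) cells after ground deletion."""
        x, z = points[:, 0], points[:, 2]
        gx = ((x + x_half) / (2.0 * x_half) * grid_n).astype(int)  # column
        gz = (z / z_max * grid_n).astype(int)                      # row
        valid = (gx >= 0) & (gx < grid_n) & (gz >= 0) & (gz < grid_n)
        counts = np.zeros((grid_n, grid_n), dtype=np.int32)
        np.add.at(counts, (gz[valid], gx[valid]), 1)   # hits per grid cell
        # The ground is a thin layer, so its cells collect few hits; only
        # cells above the threshold are kept as target cells.
        return (counts > count_thresh).astype(np.uint8)

    # Demo: a sparse ground plane plus a dense vertical obstacle at (0.5, 2.0).
    rng = np.random.default_rng(1)
    ground = np.column_stack([rng.uniform(-2, 2, 5000),
                              np.zeros(5000),
                              rng.uniform(0.1, 4, 5000)])
    obstacle = np.column_stack([np.full(500, 0.5),
                                rng.uniform(0, 1, 500),
                                np.full(500, 2.0)])
    proj = ground_projection(np.vstack([ground, obstacle]))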
According to an embodiment of the present invention, the step S3 further includes: dilating and eroding the projection image, with the kernel size for dilation and erosion determined by the depth data and adaptively adjusted, and then performing median filtering on the dilated and eroded projection image.
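A minimal sketch of this post-processing with OpenCV; the 3×3 kernel and the median aperture are fixed assumed values, whereas the patent adapts the kernel size to the depth data.

    import cv2
    import numpy as np

    # proj is the binary projection image from steps S303-S305.
    proj = np.zeros((400, 400), dtype=np.uint8)
    proj[195:205, 245:255] = 1

    kernel = np.ones((3, 3), np.uint8)   # assumed fixed kernel size
    proj = cv2.dilate(proj, kernel)      # bridge small gaps between target cells
    proj = cv2.erode(proj, kernel)       # shrink back to the original extent
    proj = cv2.medianBlur(proj, 3)       # remove isolated noise cells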
In the step S4, cluster analysis is performed on the projection image based on a neighboring region growing algorithm, a plurality of obstacle cluster point sets are obtained, and the position information of each obstacle is determined. The growth neighborhood of the traditional region growing algorithm is generally the four points adjacent to the traversed point. In the invention, compared with the first depth image obtained by shooting, the projected grid points are sparse and the continuity of the target points is relatively poor, so the traditional region growing algorithm cannot cluster the target texture normally; the invention therefore adopts a neighboring region growing algorithm for the cluster analysis. The seed points for the neighboring region growing algorithm are obtained by traversing the projection image from top left to bottom right, starting from the origin of the projection image; each point serves in turn as a seed point for region growing, and regions that have already been grown are not taken as seed points again in the subsequent traversal. Specifically, traversal starts from the origin of the projection image in order from top left to bottom right; region growing is first performed on the four points adjacent to the current point, and if the target cannot grow, the region growing search range is enlarged and region growing is performed on the neighbors of those four adjacent points; and so on, each point in turn starts region growing as a seed point until all points have completed the region growing operation. Each obstacle cluster point set is then calculated separately to obtain the minimum depth value of each obstacle in the projection image, giving the nearest distance of each obstacle, and all obstacles are arranged in order from near to far according to their nearest distances.
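The widened growth neighborhood can be sketched as follows: the four direct neighbors are tried first and, only when none of them grows, the search widens to the neighbors of those neighbors (Manhattan distance 2). This is an illustrative reading of the algorithm, with the search limited to that second ring as an assumption.

    import numpy as np
    from collections import deque

    NEAR = [(-1, 0), (1, 0), (0, -1), (0, 1)]       # four direct neighbors
    FAR = [(dr, dc) for dr in range(-2, 3) for dc in range(-2, 3)
           if abs(dr) + abs(dc) == 2]               # neighbors of those neighbors

    def grow_regions(proj):
        """Label target cells from the top left, bridging one-cell gaps."""
        h, w = proj.shape
        labels = np.zeros((h, w), dtype=np.int32)
        n = 0
        for r0 in range(h):                          # top-left to bottom-right
            for c0 in range(w):
                if proj[r0, c0] == 0 or labels[r0, c0] != 0:
                    continue                         # grown regions are skipped
                n += 1
                labels[r0, c0] = n
                queue = deque([(r0, c0)])
                while queue:
                    r, c = queue.popleft()
                    grown = False
                    for dr, dc in NEAR:
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < h and 0 <= cc < w and proj[rr, cc] \
                                and not labels[rr, cc]:
                            labels[rr, cc] = n
                            queue.append((rr, cc))
                            grown = True
                    if not grown:                    # widen the search range
                        for dr, dc in FAR:
                            rr, cc = r + dr, c + dc
                            if 0 <= rr < h and 0 <= cc < w and proj[rr, cc] \
                                    and not labels[rr, cc]:
                                labels[rr, cc] = n
                                queue.append((rr, cc))
        return labels, n

    # Demo: two blobs separated by a one-cell gap merge into a single cluster,
    # which is exactly the fault case a plain 4-neighbor growth would split.
    proj = np.zeros((10, 10), dtype=np.uint8)
    proj[2:4, 2:4] = 1
    proj[2:4, 5:7] = 1
    labels, n = grow_regions(proj)                   # n == 1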
According to one embodiment of the invention, the method further comprises: after the step S3, an alternative technical scheme associates the projection image with the depth image, associating the pixel points of the projection image with the corresponding pixel points in the second depth image, determining the target pixel points and ground pixel points in the second depth image, and deleting the ground pixel points to obtain a third depth image without the ground; and carrying out cluster analysis on the third depth image based on a region growing method, obtaining a plurality of obstacle cluster point sets, and determining the position information of each obstacle.
In one embodiment of the present invention as shown in fig. 2, the present invention provides an obstacle recognition system, the system comprising:
an acquisition module 20, configured to acquire a first depth image of a surrounding environment in a driving route;
the cutting module 21 is configured to perform cutting processing on the first depth image to obtain a second depth image corresponding to the driving lane;
the projection module 22 is configured to convert the second depth image into point cloud information, perform ground projection according to the point cloud information, obtain point cloud information of a target and point cloud information of the ground, and delete the point cloud information of the ground to obtain a projection image without a ground part;
and the clustering module 23 is used for carrying out cluster analysis on the projection images based on a neighboring region growing algorithm, obtaining a plurality of obstacle cluster point sets and determining the position information of each obstacle.
The acquisition module acquires a first depth image of the surrounding environment in the driving route and filters the first depth image according to a bilateral filtering algorithm to obtain a filtered first depth image; the filtering preserves the integrity of obstacle edges in the first depth image and removes flying pixels.
The cutting module cuts the first depth image to obtain a second depth image corresponding to the driving lane, converting the first depth image into a point cloud image based on a depth-image-to-point-cloud conversion method. The point cloud image is cut to the x-axis driving lane range, the y-axis driving height range and the z-axis detection range respectively to obtain a cut point cloud image, and the second depth image corresponding to the driving lane is acquired from the cut point cloud image: the corresponding range is cut on the depth image, and the pixel points that do not meet the range requirements are set to 0.
The projection module converts the second depth image into point cloud information, performs ground projection according to the point cloud information to obtain target point cloud information and ground point cloud information, and deletes the ground point cloud information to obtain a projection image without a ground part. Converting the point cloud coordinates of the second depth image to obtain corresponding point cloud information; converting the coordinates of the point cloud information, and converting the (x, z) coordinate system plane in the point cloud information into a state parallel to the ground; setting a plurality of grids under the (x, z) coordinate system, and projecting all pixel points in the second depth image into each grid; counting the number of projected pixel points falling in each grid, wherein if the number of the projected pixel points is larger than a preset number threshold, the projected pixel points in the grid are the projected pixel points of the target, otherwise, the projected pixel points are the projected pixel points of the ground; and deleting the projected pixel points of the ground to obtain a projected image without a ground part.
The clustering module performs cluster analysis on the projection image based on a neighboring region growing algorithm, acquires a plurality of obstacle cluster point sets, and determines the position information of each obstacle. Each obstacle cluster point set is calculated separately to obtain the minimum depth value of each obstacle in the projection image, giving the nearest distance of each obstacle.
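As an illustrative sketch of this distance computation, assuming each grid row maps back to a depth along z as in the projection step (here an assumed 1 cm per row), the minimum z per cluster gives each obstacle's nearest distance, and clusters are then sorted from near to far:

    import numpy as np

    def nearest_distances(labels, z_of_cell, n_labels):
        """Nearest distance (minimum z) per obstacle cluster, sorted ascending."""
        out = []
        for lbl in range(1, n_labels + 1):
            zs = z_of_cell[labels == lbl]
            out.append((lbl, float(zs.min())))
        return sorted(out, key=lambda t: t[1])   # nearest obstacle first

    # Demo: two labelled clusters on a 400 x 400 grid, 1 cm of depth per row.
    labels = np.zeros((400, 400), dtype=np.int32)
    labels[100:110, 50:60] = 1                   # obstacle 1, about 1.0 m away
    labels[300:310, 200:210] = 2                 # obstacle 2, about 3.0 m away
    z_of_cell = np.tile((np.arange(400) * 0.01)[:, None], (1, 400))
    print(nearest_distances(labels, z_of_cell, 2))   # [(1, 1.0), (2, 3.0)]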
Although the preferred embodiments of the present invention have been disclosed for illustrative purposes, those skilled in the art will appreciate that various modifications, additions and substitutions are possible, without departing from the scope and spirit of the invention as disclosed in the accompanying claims.

Claims (9)

1. A method of identifying an obstacle, comprising the steps of:
s1, acquiring a first depth image of the surrounding environment in a driving route;
s2, cutting the first depth image to obtain a second depth image corresponding to the driving lane;
s3, converting the second depth image into point cloud information, performing ground projection according to the point cloud information to obtain target point cloud information and ground point cloud information, and deleting the ground point cloud information to obtain a projection image without a ground part;
s4, carrying out cluster analysis on the projection image based on a neighboring region growing algorithm to obtain a plurality of obstacle cluster point sets, and determining the position information of each obstacle;
the step S3 includes the steps of:
s301, performing point cloud coordinate conversion on the second depth image to obtain corresponding point cloud information;
s302, carrying out coordinate conversion on the point cloud information, and converting the (x, z) coordinate system plane in the point cloud information into a state parallel to the ground;
s303, setting a plurality of grids under the (x, z) coordinate system, and projecting all pixel points in the second depth image into each grid;
s304, counting the number of projected pixel points falling in each grid, wherein if the number of the projected pixel points is larger than a preset number threshold, the projected pixel points in the grid are the projected pixel points of the target, otherwise, the projected pixel points are the projected pixel points of the ground;
and S305, deleting the projected pixel points of the ground to obtain a projected image without the ground part.
2. The obstacle identifying method as claimed in claim 1, wherein the step S2 comprises:
converting the first depth image into a point cloud image based on a depth-image-to-point-cloud conversion method;
cutting the point cloud image in an x-axis driving lane range, cutting the point cloud image in a y-axis driving height range and cutting the point cloud image in a z-axis detection range respectively to obtain a cut point cloud image;
and acquiring a second depth image corresponding to the driving lane according to the cut point cloud image.
3. The obstacle identifying method as claimed in claim 2, wherein the step S302 specifically comprises:
and (3) calibrating the installation angle of the depth camera, performing coordinate conversion on the point cloud information, and converting the (x, z) coordinate system plane in the point cloud information into a state parallel to the ground.
4. The obstacle identifying method as claimed in claim 2, wherein the step S303 comprises: the number of grids in the (x, z) coordinate system is determined by the length of the z-axis in the (x, z) coordinate system and the size of the grids.
5. The obstacle identifying method as claimed in claim 4, wherein the step S303 comprises: the size of the grid is related to the pixel density of the second depth image, and the grid is smaller when the pixel density is larger;
the length of the z-axis in the (x, z) coordinate system is determined by the maximum distance of the pixels of the second depth image.
6. The obstacle identifying method as claimed in claim 1, wherein the step S3 further comprises: dilating and eroding the projection image, and performing median filtering on the dilated and eroded projection image.
7. The obstacle identifying method as claimed in claim 1, wherein the step S4 comprises: traversing from the origin of the projection image, in order from top left to bottom right;
firstly carrying out region growing on the four points adjacent to the origin, namely the points above, below, to the left and to the right of it, and if the target cannot grow, carrying out region growing on the neighboring points of those four adjacent points;
and so on, taking the points one by one as seed points for region growing until all points have completed the region growing operation.
8. The obstacle identifying method as claimed in claim 1, wherein the method further comprises: associating the pixel points of the projection image with the corresponding pixel points in the second depth image, determining a target pixel point and a ground pixel point in the second depth image, and deleting the ground pixel point to obtain a third depth image without ground;
and carrying out cluster analysis on the third depth image based on a region growing method, obtaining a plurality of obstacle cluster point sets, and determining the position information of each obstacle.
9. An obstacle recognition system, the system comprising:
the acquisition module is used for acquiring a first depth image of the surrounding environment in the driving route;
the cutting module is used for cutting the first depth image to obtain a second depth image corresponding to the driving lane;
the projection module is used for converting the second depth image into point cloud information, performing ground projection according to the point cloud information to obtain target point cloud information and ground point cloud information, and deleting the ground point cloud information to obtain a projection image without a ground part;
the clustering module is used for carrying out cluster analysis on the projection image based on a neighboring region growing algorithm, acquiring a plurality of obstacle clustering point sets and determining the position information of each obstacle;
the projection module is specifically configured to perform point cloud coordinate conversion on the second depth image to obtain corresponding point cloud information, perform coordinate conversion on the point cloud information, convert the (x, z) coordinate system plane in the point cloud information into a state parallel to the ground, set a plurality of grids under the (x, z) coordinate system, project all pixel points in the second depth image into each grid, count the number of projected pixel points falling in each grid, and if the number of projected pixel points is greater than a preset number threshold, the projected pixel points in the grid are the projected pixel points of the target, otherwise are the projected pixel points of the ground, and delete the projected pixel points of the ground to obtain a projected image without a ground part.
CN202010701313.5A 2020-07-20 2020-07-20 Obstacle recognition method and system Active CN111860321B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010701313.5A CN111860321B (en) 2020-07-20 2020-07-20 Obstacle recognition method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010701313.5A CN111860321B (en) 2020-07-20 2020-07-20 Obstacle recognition method and system

Publications (2)

Publication Number Publication Date
CN111860321A CN111860321A (en) 2020-10-30
CN111860321B true CN111860321B (en) 2023-12-22

Family

ID=73001649

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010701313.5A Active CN111860321B (en) 2020-07-20 2020-07-20 Obstacle recognition method and system

Country Status (1)

Country Link
CN (1) CN111860321B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113075692A (en) * 2021-03-08 2021-07-06 北京石头世纪科技股份有限公司 Target detection and control method, system, device and storage medium
CN113487669A (en) * 2021-07-07 2021-10-08 广东博智林机器人有限公司 Job track determination method and device, computer equipment and storage medium
CN115431695A (en) * 2022-09-09 2022-12-06 中国第一汽车股份有限公司 Suspension parameter adjusting method and device, electronic equipment and storage medium
WO2024138508A1 (en) * 2022-12-29 2024-07-04 华为技术有限公司 Obstacle detection method and related apparatus
CN116630390B (en) * 2023-07-21 2023-10-17 山东大学 Obstacle detection method, system, equipment and medium based on depth map template

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106339669A (en) * 2016-08-16 2017-01-18 长春理工大学 Multiline point cloud data machine learning human target recognition method and anti-collision device
CN107016705A (en) * 2016-01-05 2017-08-04 德州仪器公司 Ground level estimation in computer vision system
CN107292276A (en) * 2017-06-28 2017-10-24 武汉大学 A kind of vehicle-mounted cloud clustering method and system
CN108509820A (en) * 2017-02-23 2018-09-07 百度在线网络技术(北京)有限公司 Method for obstacle segmentation and device, computer equipment and readable medium
CN109711410A (en) * 2018-11-20 2019-05-03 北方工业大学 Three-dimensional object rapid segmentation and identification method, device and system
CN109961440A (en) * 2019-03-11 2019-07-02 重庆邮电大学 A kind of three-dimensional laser radar point cloud Target Segmentation method based on depth map
CN110065067A (en) * 2018-01-23 2019-07-30 丰田自动车株式会社 Motion profile generating device
CN110390237A (en) * 2018-04-23 2019-10-29 北京京东尚科信息技术有限公司 Processing Method of Point-clouds and system
CN110879991A (en) * 2019-11-26 2020-03-13 杭州光珀智能科技有限公司 Obstacle identification method and system
CN110893617A (en) * 2018-09-13 2020-03-20 深圳市优必选科技有限公司 Obstacle detection method and device and storage device
CN111060923A (en) * 2019-11-26 2020-04-24 武汉乐庭软件技术有限公司 Multi-laser-radar automobile driving obstacle detection method and system
CN111079765A (en) * 2019-12-13 2020-04-28 电子科技大学 Sparse point cloud densification and pavement removal method based on depth map
CN111210429A (en) * 2020-04-17 2020-05-29 中联重科股份有限公司 Point cloud data partitioning method and device and obstacle detection method and device
CN111291708A (en) * 2020-02-25 2020-06-16 华南理工大学 Transformer substation inspection robot obstacle detection and identification method integrated with depth camera
CN111337941A (en) * 2020-03-18 2020-06-26 中国科学技术大学 Dynamic obstacle tracking method based on sparse laser radar data

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2532948B (en) * 2014-12-02 2021-04-14 Vivo Mobile Communication Co Ltd Object Recognition in a 3D scene
US10823855B2 (en) * 2018-11-19 2020-11-03 Fca Us Llc Traffic recognition and adaptive ground removal based on LIDAR point cloud statistics

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107016705A (en) * 2016-01-05 2017-08-04 德州仪器公司 Ground level estimation in computer vision system
CN106339669A (en) * 2016-08-16 2017-01-18 长春理工大学 Multiline point cloud data machine learning human target recognition method and anti-collision device
CN108509820A (en) * 2017-02-23 2018-09-07 百度在线网络技术(北京)有限公司 Method for obstacle segmentation and device, computer equipment and readable medium
CN107292276A (en) * 2017-06-28 2017-10-24 武汉大学 A kind of vehicle-mounted cloud clustering method and system
CN110065067A (en) * 2018-01-23 2019-07-30 丰田自动车株式会社 Motion profile generating device
JP2019126866A (en) * 2018-01-23 2019-08-01 トヨタ自動車株式会社 Motion trajectory generation apparatus
CN110390237A (en) * 2018-04-23 2019-10-29 北京京东尚科信息技术有限公司 Processing Method of Point-clouds and system
CN110893617A (en) * 2018-09-13 2020-03-20 深圳市优必选科技有限公司 Obstacle detection method and device and storage device
CN109711410A (en) * 2018-11-20 2019-05-03 北方工业大学 Three-dimensional object rapid segmentation and identification method, device and system
CN109961440A (en) * 2019-03-11 2019-07-02 重庆邮电大学 A kind of three-dimensional laser radar point cloud Target Segmentation method based on depth map
CN110879991A (en) * 2019-11-26 2020-03-13 杭州光珀智能科技有限公司 Obstacle identification method and system
CN111060923A (en) * 2019-11-26 2020-04-24 武汉乐庭软件技术有限公司 Multi-laser-radar automobile driving obstacle detection method and system
CN111079765A (en) * 2019-12-13 2020-04-28 电子科技大学 Sparse point cloud densification and pavement removal method based on depth map
CN111291708A (en) * 2020-02-25 2020-06-16 华南理工大学 Transformer substation inspection robot obstacle detection and identification method integrated with depth camera
CN111337941A (en) * 2020-03-18 2020-06-26 中国科学技术大学 Dynamic obstacle tracking method based on sparse laser radar data
CN111210429A (en) * 2020-04-17 2020-05-29 中联重科股份有限公司 Point cloud data partitioning method and device and obstacle detection method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Obstacle recognition applying fusion of lidar and camera information; 黄兴; 应群伟; Computer Measurement & Control (Issue 01); 189-193+199 *

Also Published As

Publication number Publication date
CN111860321A (en) 2020-10-30

Similar Documents

Publication Publication Date Title
CN111860321B (en) Obstacle recognition method and system
CN110942449B (en) Vehicle detection method based on laser and vision fusion
CN110264567B (en) Real-time three-dimensional modeling method based on mark points
CN113436260B (en) Mobile robot pose estimation method and system based on multi-sensor tight coupling
CN106650640B (en) Negative obstacle detection method based on laser radar point cloud local structure characteristics
CN114842438B (en) Terrain detection method, system and readable storage medium for automatic driving automobile
Cheng et al. 3D building model reconstruction from multi-view aerial imagery and lidar data
CN111598916A (en) Preparation method of indoor occupancy grid map based on RGB-D information
CN110879991B (en) Obstacle identification method and system
CN109752701A (en) A kind of road edge detection method based on laser point cloud
WO2022188663A1 (en) Target detection method and apparatus
CN110780305A (en) Track cone bucket detection and target point tracking method based on multi-line laser radar
CN109872329A (en) A kind of ground point cloud fast partition method based on three-dimensional laser radar
CN111028350B (en) Method for constructing grid map by using binocular stereo camera
CN110570453A (en) Visual odometer method based on binocular vision and closed-loop tracking characteristics
CN112819883B (en) Rule object detection and positioning method
CN111998862B (en) BNN-based dense binocular SLAM method
CN111612728A (en) 3D point cloud densification method and device based on binocular RGB image
CN114463521B (en) Building target point cloud rapid generation method for air-ground image data fusion
CN111325138A (en) Road boundary real-time detection method based on point cloud local concave-convex characteristics
CN107679458A (en) The extracting method of roadmarking in a kind of road color laser point cloud based on K Means
CN114332134B (en) Building facade extraction method and device based on dense point cloud
CN116879870A (en) Dynamic obstacle removing method suitable for low-wire-harness 3D laser radar
CN112906450A (en) Vehicle, system and method for determining an entry of an occupancy map in the vicinity of the vehicle
CN116508071A (en) System and method for annotating automotive radar data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240426

Address after: Room 509-2, 5th Floor, Building 1, Dingchuang Wealth Center, Cangqian Street, Yuhang District, Hangzhou City, Zhejiang Province, 311100

Patentee after: Hangzhou Yixun Technology Co.,Ltd.

Country or region after: China

Address before: Room 303-5, block B, building 1, 268 Shiniu Road, nanmingshan street, Liandu District, Lishui City, Zhejiang Province 323000

Patentee before: Zhejiang Guangpo Intelligent Technology Co.,Ltd.

Country or region before: China
