CN112308039A - Obstacle segmentation processing method and chip based on TOF camera - Google Patents


Info

Publication number
CN112308039A
Authority
CN
China
Prior art keywords
obstacle
travel distance
depth
target obstacle
target
Prior art date
Legal status
Pending
Application number
CN202011340709.8A
Other languages
Chinese (zh)
Inventor
戴剑锋 (Dai Jianfeng)
赖钦伟 (Lai Qinwei)
肖刚军 (Xiao Gangjun)
Current Assignee
Zhuhai Amicro Semiconductor Co Ltd
Original Assignee
Zhuhai Amicro Semiconductor Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Amicro Semiconductor Co Ltd
Priority to CN202011340709.8A
Publication of CN112308039A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 - Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention discloses an obstacle segmentation processing method and chip based on a TOF camera. The obstacle segmentation processing method comprises: extracting an actual physical contour model of a target obstacle from a depth image of the target obstacle currently acquired by the TOF camera; dividing, at equal intervals, the part of the actual physical contour model that falls within the horizontal projection range of the robot body diameter into five equal travel-distance regions; and, according to the identified type of the target obstacle, statistically processing the depth information measured in the five travel-distance regions to obtain a maximum obstacle avoidance travel distance, so that the distance the robot travels from its current position along its current travel direction does not exceed this maximum obstacle avoidance travel distance. The method is executed by a robot with a TOF camera mounted at the front end of its body, and the target obstacle lies in the current field-of-view region of the TOF camera.

Description

Obstacle segmentation processing method and chip based on TOF camera
Technical Field
The invention relates to the technical field of intelligent robot ranging, and in particular to an obstacle segmentation processing method and chip based on a TOF camera.
Background
TOF is an abbreviation of Time of Flight. A TOF sensor emits modulated near-infrared light, which is reflected when it strikes an object; by calculating the time difference or phase difference between emission and reflection, the sensor converts this into the distance to the imaged scene and thereby generates depth information.
In the prior art, depth information is extracted directly from the contour line (contour feature) of a target object in a depth image and used for robot ranging. However, the contour line obtained by image segmentation deviates from the object's actual size and shape. For example, the contour line of a nominally parallel wall surface (of a wall or of furniture) is not parallel in the depth image; the shape trend of a contour line acquired from a small toy or a short cylinder differs from the actual trend; short, small parts on the two sides of a toy fail to be segmented, so their contours are not extracted; and for some isolated columns that actually have nothing attached on either side, segmentation extracts redundant circular contours.
As a result, the robot's obstacle ranging information lacks reliability, which in turn degrades the robot's obstacle avoidance performance.
Disclosure of Invention
In order to provide more effective obstacle ranging information for the robot and ensure the obstacle avoidance effect of the robot, the invention provides the following technical scheme:
an obstacle segmentation processing method based on a TOF camera comprises the following steps: extracting an actual physical contour model of the target obstacle from a depth image of the target obstacle currently acquired by the TOF camera; equally dividing the part of the actual physical contour model of the target obstacle, which falls in the projection range of the diameter of the robot body, into five equal parts of travel distance areas at equal intervals; adaptively processing the depth information measured in the travel distance area of the five equal parts according to the identification type of the target obstacle to obtain the maximum obstacle avoidance travel distance; the execution main body of the obstacle segmentation method is a robot with a TOF camera assembled at the front end of a body, and the target obstacle is in the current view field area of the TOF camera.
Compared with the prior art, this technical scheme divides the contour of the target obstacle into five equal parts at equal intervals across the horizontal width of the robot body, and adaptively processes the depth information measured in the five travel-distance regions according to the identified type of the target obstacle to obtain a maximum obstacle avoidance travel distance. The robot's ranging information for the obstacle therefore remains valid even when the contour line in the depth image acquired by the TOF camera is incomplete, improving the robot's obstacle avoidance.
Further, processing the depth information measured in the five travel-distance regions according to the identified type of the target obstacle to obtain the maximum obstacle avoidance travel distance includes: when the target obstacle is classified as a wall-type or threshold-type obstacle, selecting an optimal depth from each equally divided travel-distance region, computing the average of the optimal depths over the five regions, and taking that average as the maximum obstacle avoidance travel distance for the parallel plane. This overcomes the depth errors caused by contour lines of a nominally parallel surface (of a wall or of furniture) not being parallel in the depth image, and yields a representative distance between the robot and obstacles with parallel blocking surfaces, such as walls and other space-dividing obstacles.
Further, the processing also includes: when the target obstacle is classified as a toy-type obstacle, selecting an optimal depth from each equally divided travel-distance region, computing a weighted average of the optimal depths over the five regions, and taking that weighted average as the maximum obstacle avoidance travel distance for the toy. The smaller the optimal depth within a travel-distance region, the greater its weight; the greater the optimal depth, the smaller its weight. Because the optimal depths of the five regions are weighted and averaged across the width of the body diameter, the resulting maximum obstacle avoidance travel distance is a statistically meaningful distance variable that effectively represents the overall shape of the target obstacle and the passable area in front of it, overcoming incomplete, redundant, or inaccurately trending contours segmented from the depth image.
Further, the processing also includes: when the target obstacle is classified as a wire-type obstacle, selecting an optimal depth from each equally divided travel-distance region and taking the minimum of these optimal depths (the shortest distance) over the five regions as the maximum obstacle avoidance travel distance for the wire. Compared with the prior art, the curve-variation characteristics of winding objects (such as wires and cables tangled together in an indoor environment) need not be screened to obtain a depth value.
Further, the optimal depth within each equally divided travel-distance region is: the average of all measured depth values within that region, or the median of those values, or the depth value at the right dividing line of the region, or the depth value at the left dividing line of the region. This simplifies how depth values are obtained within the equally divided regions.
Further, when the maximum obstacle avoidance travel distance reaches a safety threshold, the robot is controlled to decelerate and avoid or detour around the obstacle, or a collision warning signal is triggered. The safety threshold is the distance at which the robot, decelerating to zero, still does not touch the target obstacle; it varies with the type of the target obstacle. The maximum obstacle avoidance travel distance thus obtained has a wide range of application.
Further, the actual physical contour model of the target obstacle is formed as follows: filtering and connected-domain analysis are applied to the depth image of the target obstacle currently acquired by the TOF camera, so as to segment the image contour of the target obstacle and determine the depth information of that contour; then, combining the depth information of the image contour with the intrinsic and extrinsic parameters of the TOF camera, the image contour is converted from the TOF camera's imaging plane into an actual physical contour model of the target obstacle in the world coordinate system using the triangulation principle. This restores the three-dimensional contour features of the target obstacle and facilitates detecting 3D coordinate information around it.
Further, the actual physical contour model of the target obstacle includes: within the field-of-view region of the TOF camera, the horizontal distance between the leftmost side of the target obstacle and the center of the robot body, the horizontal distance between the rightmost side of the target obstacle and the center of the robot body, and the longitudinal height information of the target obstacle. The field-of-view region of the TOF camera is the overlap of its effective ranging range and its viewing-angle range. This resolves the shape and horizontal ground coverage of the target obstacle, so the obstacle in front of the robot can be localized.
Further, the depth information and longitudinal height information of the target obstacle are classified using filtering and statistical algorithms, so that the target obstacle is categorized as a wall-type, threshold-type, toy-type, or wire-type obstacle. This simplifies obstacle classification and provides the obstacle type needed for practical obstacle avoidance.
A chip stores program code corresponding to the above TOF-camera-based obstacle segmentation processing method.
Drawings
Fig. 1 is a flowchart of an obstacle segmentation processing method based on a TOF camera according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It should be noted that Chinese patent CN111624997A is incorporated into the present application by reference in its entirety, to complete the description of computing the relative position information of an obstacle region from a depth image acquired by a TOF camera and the description of the map calibration and marking method.
It should be noted that TOF is an abbreviation of Time of Flight: a sensor emits modulated near-infrared light, which is reflected when it strikes an object, and the sensor converts the time difference or phase difference between emission and reflection into the distance to the imaged scene, generating depth information (depth data and depth distances, per pixel). In addition, combined with imaging from a conventional camera, the three-dimensional profile of an object can be rendered as a topographic image whose colors represent different distances, yielding a 3D model. A TOF camera is a camera that acquires data using TOF technology.
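As a minimal illustration of the time-of-flight principle just described, the emission-to-return time difference converts to distance as d = c * dt / 2 (the function and variable names below are assumptions; real TOF sensors usually derive dt from the phase shift of the modulated light rather than from a raw time measurement):

```python
# Sketch of the time-of-flight relation: distance = c * dt / 2, where dt
# is the round-trip time of the modulated near-infrared light.
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_time_s: float) -> float:
    """Convert an emission-to-return time difference into a distance."""
    return C * round_trip_time_s / 2.0
```

A return delay of about 6.67 ns corresponds to roughly one metre of range, which is why indoor TOF cameras rely on phase measurement rather than direct timing.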
As an embodiment, the present invention discloses a TOF-camera-based obstacle segmentation processing method, as shown in Fig. 1, comprising the following steps:
Step S101: extract the actual physical contour model of the target obstacle from the depth image of the target obstacle currently acquired by the TOF camera, then proceed to step S102. The method is executed by a robot with a TOF camera mounted at the front end of its body, and the target obstacle lies in the current field-of-view region of the TOF camera. Preferably, the TOF camera is installed at the front of the mobile robot with its optical axis inclined downward or horizontal relative to the top surface of the robot, so that the brightness values of the target obstacle within the detection viewing angle are valid and its contour line is complete enough to meet the depth positioning requirement.
Specifically, the actual physical contour model of the target obstacle is extracted as follows. First, filtering and connected-domain analysis are applied to the depth image of the target obstacle currently acquired by the TOF camera, so as to segment the image contour of the target obstacle and determine the depth information of that contour. Applicable depth-image filtering algorithms include median filtering, Gaussian filtering, guided filtering, bilateral filtering, mean filtering, temporal median filtering, statistical filtering, pass-through filtering, radius filtering, and voxel filtering; connected-domain analysis includes the two-pass and seed-filling algorithms. Then, combining the depth information of the image contour with the intrinsic and extrinsic parameters of the TOF camera, the image contour is converted from the TOF camera's imaging plane into an actual physical contour model of the target obstacle in the world coordinate system using the triangulation principle, that is, the coordinates of each position point on the actual physical contour are computed. This restores the three-dimensional contour features of the target obstacle and facilitates detecting 3D coordinate information around it.
It should be noted that the actual physical contour model of the target obstacle includes: within the field-of-view region of the TOF camera, the horizontal distance between the leftmost side of the target obstacle and the center of the robot body, the horizontal distance between the rightmost side and the center of the robot body, and the longitudinal height information of the target obstacle; this position information is computed from the TOF camera's intrinsic and extrinsic parameters and the depth of each pixel in the depth image, based on the triangulation principle. In this embodiment, the computed coordinates of each position point are transformed into a world (global) coordinate system with the center of the robot body as the origin, so that the shape and horizontal ground coverage of the target obstacle can be analyzed and the obstacle in front of the robot localized. The field-of-view region of the TOF camera is the overlap of its effective ranging range and its viewing-angle range, ensuring that collected target obstacles lie ahead of the robot and are relevant to obstacle avoidance.
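The conversion from the imaging plane to world coordinates can be sketched with a standard pinhole back-projection, one concrete reading of the triangulation principle mentioned above (the function and parameter names are illustrative assumptions, not the patent's own notation):

```python
def pixel_to_world(u, v, depth, fx, fy, cx, cy, rotation, translation):
    """Back-project pixel (u, v) with its measured depth into world
    coordinates using the pinhole camera model. (fx, fy) are the focal
    lengths and (cx, cy) the principal point (intrinsics); rotation is a
    3x3 nested list and translation a length-3 tuple (extrinsics)."""
    # Similar triangles: lateral offsets scale with depth.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    p_cam = (x, y, depth)
    # World frame: R @ p_cam + t.
    return tuple(
        sum(rotation[i][j] * p_cam[j] for j in range(3)) + translation[i]
        for i in range(3)
    )
```

Running this over every pixel on the segmented contour yields the position points of the actual physical contour model; choosing the robot body center as the world origin gives the leftmost/rightmost horizontal distances directly.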
Step S102: divide, at equal intervals, the part of the actual physical contour model that falls within the horizontal projection range of the robot body diameter into five equal travel-distance regions, then proceed to step S103. In this step, the projected line segment of the target obstacle in front of the robot, across the width of the robot body, is cut into five equal parts, producing five equal travel-distance regions; the depth information within each region is that currently acquired by the TOF camera. Specifically, the body-width span is the robot's body diameter, lying in the robot's travel plane and perpendicular to its current travel direction (the mounting direction of the TOF camera on the body). The horizontal projections of the position points of the corresponding partial contour lines of the actual physical contour model all fall within the body diameter; these partial contour lines are not necessarily complete, but their completeness should still satisfy the depth positioning requirement.
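The equal-interval segmentation of step S102 can be sketched as binning contour points by their lateral offset from the body center (a sketch under assumed names; the patent does not prescribe an implementation):

```python
def split_into_regions(contour_points, body_diameter, n=5):
    """Bin contour points into n equal-width travel-distance regions
    across the robot body diameter. Each point is (x, d): x is the
    lateral offset of the point's horizontal projection from the body
    center (same units as body_diameter), d its measured depth."""
    half = body_diameter / 2.0
    width = body_diameter / n
    regions = [[] for _ in range(n)]
    for x, d in contour_points:
        if -half <= x <= half:  # inside the body's projection range
            idx = min(int((x + half) // width), n - 1)
            regions[idx].append(d)
    return regions
```

Points projecting outside the body diameter are simply discarded; only the span the robot must actually pass through contributes to the statistics of step S103.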
Step S103: according to the identified type of the target obstacle (or its actual physical contour model), adaptively process the depth information measured in the five travel-distance regions by statistical operations to obtain a maximum obstacle avoidance travel distance, so that the distance the robot travels from its current position along its current travel direction does not exceed this maximum. The maximum obstacle avoidance travel distance is an effective, representative physical distance between the robot's current position (the fixed position where the depth image is collected and the depth values statistically processed) and the target obstacle. The depth data of the contour pixels of some obstacles is continuous (a wall) while that of others is discontinuous (a winding), so the depth information within each region is tallied separately and the position points on the target obstacle's contour line are processed region by region. The statistically processed result over the five regions is therefore representative: it characterizes the effective distance between the target obstacle and the robot and identifies a passable area wide enough for the robot body, compensating for an incomplete actual physical contour model. The robot's ranging information for the obstacle thus remains valid even when the contour line in the depth image acquired by the TOF camera is incomplete, improving obstacle avoidance.
Preferably, after step S103, when the maximum obstacle avoidance travel distance reaches a safety threshold, the robot is controlled to decelerate and avoid or detour around the obstacle, or a collision warning signal is triggered. The safety threshold is the distance within which the robot, decelerating from its current speed to zero, does not touch the target obstacle; it varies with the type of the target obstacle. The maximum obstacle avoidance travel distance obtained in this embodiment therefore has a wide range of application.
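The safety threshold described here (the distance needed to brake from the current speed to zero without touching the obstacle) can be sketched with the standard braking-distance formula v^2 / (2a) plus a clearance margin; the margin parameter stands in for the per-obstacle-type variation the text mentions and its value is an assumption:

```python
def safety_threshold_m(speed_mps, decel_mps2, margin_m=0.05):
    """Distance needed to brake from the current speed to zero, plus a
    clearance margin (the margin would vary with obstacle type)."""
    return speed_mps ** 2 / (2.0 * decel_mps2) + margin_m

def must_react(max_avoid_distance_m, speed_mps, decel_mps2):
    """True when the robot should decelerate/detour or raise a warning."""
    return max_avoid_distance_m <= safety_threshold_m(speed_mps, decel_mps2)
```

For a robot moving at 0.3 m/s with 0.5 m/s^2 of braking, the threshold is 0.09 m of stopping distance plus the margin, so any maximum obstacle avoidance travel distance at or below that triggers the reaction.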
Preferably, this embodiment classifies the depth information and longitudinal height information of the target obstacle using filtering and statistical algorithms, categorizing the target obstacle as a wall-type, threshold-type, toy-type, or wire-type obstacle. Applicable depth-image filtering algorithms include median filtering, Gaussian filtering, guided filtering, bilateral filtering, mean filtering, temporal median filtering, statistical filtering, pass-through filtering, radius filtering, and voxel filtering; connected-domain analysis includes the two-pass and seed-filling algorithms. Compared with obtaining a classification result through deep learning, this approach has a lower computational load.
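A rule-based classifier of the kind described (statistics on height and depth rather than deep learning) might look as follows; every threshold and rule here is a hypothetical illustration, since the patent states only that filtering and statistical algorithms are used, not how:

```python
def classify_obstacle(height_m, depth_profile_m):
    """Toy rule-based classifier into the four types named in the text.
    All numeric thresholds are invented for illustration only."""
    spread = max(depth_profile_m) - min(depth_profile_m)
    if height_m > 0.5 and spread < 0.02:
        return "wall"       # tall, with a flat depth profile
    if height_m < 0.05 and spread < 0.02:
        return "threshold"  # very low, with a flat depth profile
    if height_m < 0.06 and spread > 0.10:
        return "wire"       # low, with a ragged depth profile
    return "toy"            # everything else, incl. support columns
```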
It should be noted that the classification result for the target obstacle (or its actual physical contour model) is a geometry, or combination of geometries, composed or abstracted from contour lines and/or feature points and matched against each obstacle type. The geometry or combination may be based on the full contour of the identified obstacle or on a partial representation of it. For example, shape features for the island type include one or more combinations of circles, spheres, arcs, squares, cubes, pi-shapes, and the like; shoe shape features comprise several arcs joined end to end, and chair shape features include pi-shapes, eight-legged shapes, and so on. Shape features for the winding type include at least one or more combinations of curved shapes, serpentine shapes, and the like. Shape features for the space-dividing type include at least one or more combinations of straight lines, broken lines, rectangles, and the like. Whatever the shape features of the actual physical contour model, steps S102 and S103 can adaptively process the depth information measured in the five segmented travel-distance regions by statistical operations to obtain the maximum obstacle avoidance travel distance under the corresponding shape features.
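Steps S102 and S103 taken together (five per-region optimal depths in, one maximum obstacle avoidance travel distance out) can be sketched per obstacle type as follows. The inverse-depth weighting in the toy branch is an assumed scheme: the text requires only that smaller optimal depths receive larger weights.

```python
def max_avoid_distance(obstacle_type, optimal_depths):
    """Per-type statistical processing of the five per-region optimal
    depths into one maximum obstacle avoidance travel distance."""
    if obstacle_type in ("wall", "threshold"):
        # Parallel-plane types: arithmetic mean over the five regions.
        return sum(optimal_depths) / len(optimal_depths)
    if obstacle_type == "toy":
        # Weighted average biased toward the nearest contour points
        # (inverse-depth weights: smaller depth, larger weight).
        weights = [1.0 / d for d in optimal_depths]
        return sum(w * d for w, d in zip(weights, optimal_depths)) / sum(weights)
    if obstacle_type == "wire":
        # Winding type: the single closest point decides.
        return min(optimal_depths)
    raise ValueError(f"unknown obstacle type: {obstacle_type}")
```

With inverse-depth weights the toy result is the harmonic mean of the five optimal depths, which is never larger than their arithmetic mean, so the estimate is conservatively biased toward the nearest part of the contour.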
As an embodiment, when the target obstacle is classified as a wall-type or threshold-type obstacle, an optimal depth is selected from each travel-distance region obtained by the equal-interval segmentation of step S102, the average of the optimal depths over the five regions is computed, and that average is taken as the maximum obstacle avoidance travel distance for the parallel plane. In a depth image actually acquired by the TOF camera, the extracted contour lines of a nominally parallel surface of a threshold or wall are not parallel: although the contour satisfies the rectangular feature condition, the depth values of its pixels are not all equal. Averaging the depth values of the contour pixels falling within the width of the body diameter (corresponding to position points on the actual physical contour model) yields a representative depth value for effective obstacle avoidance, i.e. an effective distance between the robot's current position and the parallel surface of the target obstacle. This overcomes the depth errors caused by non-parallel contour lines in the depth image and gives the robot a representative distance to obstacles with parallel blocking surfaces, such as walls and other space-dividing obstacles. Starting from its current position, the robot's travel along its current direction must not exceed this maximum obstacle avoidance travel distance, or it will collide with the target obstacle.
As another embodiment, when the target obstacle is classified as a toy-type obstacle (a type that also covers sofa-bottom, bed-bottom, and table-and-chair-bottom support columns), an optimal depth is selected, within the range of the robot's body diameter, from each travel-distance region obtained by the equal-interval segmentation of step S102; the optimal depths of the five regions are then weighted and averaged, and the weighted average is taken as the maximum obstacle avoidance travel distance for the toy. The smaller the optimal depth within a region, the greater its weight; the greater the optimal depth, the smaller its weight. This statistical scheme assigns a larger weight to a position point with a smaller depth value, so the weighted-average maximum obstacle avoidance travel distance is biased toward smaller depth values (contour lines closer to the robot's current position).
In some implementation scenarios the depth values of contours segmented from the depth image are used directly for ranging analysis. However, the shape trend of a contour line acquired from a small toy or a short cylinder differs from the actual trend; short, small parts on the two sides of a toy fail to be segmented, so their contours are not extracted; and redundant circular contours are extracted beside sofa-bottom, bed-bottom, and table-and-chair-bottom support columns that actually have nothing attached to either side. These discrepancies easily lead to misjudged depth values.
Specifically, this embodiment computes the weighted average by assigning a weight to the optimal depth of each of the five travel-distance regions. Based on these weighting factors, the optimal depths of the five regions of the contour acquired from a small toy or short cylinder are weighted and averaged, yielding an effective depth result that is not affected by the curve-variation characteristics of the obstacle's surface contour; the robot then travels from its current position along its current direction within this result. Under the weighting rule (the greater the optimal depth in a region, the smaller its weight), the influence of pixels on defective contour lines with excessively large depth values is weakened; likewise, the influence of pixels on incomplete contour lines with possibly too-small depth values can be weakened, shortening the safe distance between the robot and the target obstacle, while reasonable weighting still keeps the robot unlikely to collide with it.
Alternatively, the depth values of pixels on redundant contour lines (possibly too small) are averaged in, producing a closer maximum obstacle avoidance travel distance. Even if the actual contour is not that close to the robot's current position, this smaller weighted-average distance prompts the robot to prepare for obstacle avoidance in advance: a smaller maximum obstacle avoidance travel distance limits the robot's travel more tightly, which favors avoiding the obstacle early. Conversely, the weighting factor (the greater the optimal depth in a region, the smaller its weight) weakens the influence of redundant-contour pixels whose depth values may be too large.
As an embodiment, when the target obstacle is identified and classified as a wire-type obstacle (wire-type obstacles are mainly windings), an optimal depth is selected from each equal travel distance region obtained by the equally spaced segmentation in step S102, the minimum of the five optimal depths is selected, and that value is determined as the maximum obstacle avoidance walking distance for the electric wire. This embodiment does not consider the contour-curve variation of the winding on the target obstacle (for example, a bundle of wires and cables coiled in an indoor environment); it only selects the position point closest to the robot's current position and uses its depth value as the maximum obstacle avoidance walking distance. After all, the depth-image contour of a winding such as a wire coils into a small bundle, so over-segmentation, under-segmentation, or mismatch with reality after contour extraction is unlikely to occur. To improve the obstacle avoidance effect, the minimum optimal depth among the travel distance regions cut from the actual physical contour model of the wire-type obstacle is chosen as the maximum obstacle avoidance walking distance.
In the foregoing embodiments, the optimal depth within each equal travel distance region is: the average of all measured depth values within the corresponding travel distance region, the median of those depth values, the depth value at the right split point of the corresponding region, or the depth value at its left split point. This simplifies how depth values are obtained from the equally divided regions during the statistical operation of step S103.
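The per-region statistic and the type-dependent dispatch described above (wall/threshold: average of the five optimal depths; wire: minimum) can be sketched as follows. The function names, mode strings, and list-of-lists region layout are illustrative assumptions, not the patented implementation:

```python
import statistics

def optimal_depth(region_depths, mode="mean"):
    """One scalar per travel distance region: the mean or median of
    the region's measured depths, or the depth at the region's left
    or right split point."""
    if mode == "mean":
        return sum(region_depths) / len(region_depths)
    if mode == "median":
        return statistics.median(region_depths)
    if mode == "left":
        return region_depths[0]   # depth at the left split point
    if mode == "right":
        return region_depths[-1]  # depth at the right split point
    raise ValueError(f"unknown mode: {mode}")

def max_avoid_distance(regions, obstacle_type):
    """Statistical operation selected by the identified obstacle type."""
    depths = [optimal_depth(r) for r in regions]
    if obstacle_type in ("wall", "threshold"):
        return sum(depths) / len(depths)  # average over the five regions
    if obstacle_type == "wire":
        return min(depths)  # nearest point of the winding
    raise ValueError(f"unknown obstacle type: {obstacle_type}")
```

Keeping the per-region statistic separate from the per-type dispatch mirrors the two-stage structure of steps S102 and S103: the regions are fixed by the segmentation, and only the final reduction changes with the obstacle class.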
The embodiment of the invention also discloses a chip for storing the program code corresponding to the TOF-camera-based obstacle segmentation processing method of the above technical scheme. A robot equipped with the chip achieves an improved obstacle avoidance effect with a reduced running load.
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaust all embodiments here, and obvious variations or modifications derived therefrom remain within the scope of the invention.

Claims (10)

1. An obstacle segmentation processing method based on a TOF camera is characterized by comprising the following steps:
extracting an actual physical contour model of the target obstacle from a depth image of the target obstacle currently acquired by the TOF camera;
the method comprises the following steps of (1) equally spacing and dividing a part of an actual physical contour model of a target obstacle in a horizontal projection range of the diameter of a robot body into five equal traveling distance areas;
according to the identification type of the target obstacle, processing the depth information measured in the five equal parts of travel distance area in a statistical operation mode to obtain the maximum obstacle avoidance travel distance, so that the travel distance of the robot from the current position along the current travel direction does not exceed the maximum obstacle avoidance travel distance;
the execution main body of the obstacle segmentation method is a robot with a TOF camera assembled at the front end of a body, and the target obstacle is in the current view field area of the TOF camera.
2. The obstacle segmentation processing method according to claim 1, wherein processing the depth information measured in the five equal travel distance regions by statistical operation according to the identification type of the target obstacle to obtain the maximum obstacle avoidance walking distance comprises:
when the target obstacle is identified and classified as a wall-type obstacle or a threshold-type obstacle, selecting an optimal depth from each equal travel distance region obtained by the equally spaced segmentation, calculating the average of the optimal depths of the five travel distance regions, and determining that average as the maximum obstacle avoidance walking distance for the parallel plane.
3. The obstacle segmentation processing method according to claim 1, wherein processing the depth information measured in the five equal travel distance regions by statistical operation according to the identification type of the target obstacle to obtain the maximum obstacle avoidance walking distance further comprises:
when the target obstacle is identified and classified as a toy-type obstacle, selecting an optimal depth from each equal travel distance region obtained by the equally spaced segmentation, performing a weighted average operation on the optimal depths of the five travel distance regions, and determining the weighted average result as the maximum obstacle avoidance walking distance for the toy;
wherein the smaller the optimal depth within a travel distance region, the greater its configured weight; and the greater the optimal depth within a travel distance region, the smaller its configured weight.
4. The obstacle segmentation processing method according to claim 1, wherein processing the depth information measured in the five equal travel distance regions by statistical operation according to the identification type of the target obstacle to obtain the maximum obstacle avoidance walking distance further comprises:
when the target obstacle is identified and classified as a wire-type obstacle, selecting an optimal depth from each equal travel distance region obtained by the equally spaced segmentation, selecting the optimal depth with the minimum value among the five travel distance regions, and determining it as the maximum obstacle avoidance walking distance for the electric wire.
5. The obstacle segmentation processing method according to any one of claims 1 to 4, wherein the optimal depth within each equal travel distance region is: the average of all measured depth values within the corresponding travel distance region, the median of those depth values, the depth value at the right split point of the corresponding travel distance region, or the depth value at the left split point of the corresponding travel distance region.
6. The obstacle segmentation processing method according to claim 5, wherein when the maximum obstacle avoidance walking distance reaches a safety threshold, the robot is controlled to decelerate for obstacle avoidance, or a collision warning signal is triggered;
wherein the safety threshold is the distance at which the robot, decelerating from its current speed to zero, does not touch the target obstacle, and it varies with the type of the target obstacle.
7. The obstacle segmentation processing method according to claim 5, wherein the method for extracting the actual physical contour model of the target obstacle includes:
performing filtering and connected-domain analysis on the depth image of the target obstacle currently acquired by the TOF camera, so as to segment the image contour of the target obstacle and determine the depth information of that image contour;
then, combining the depth information of the image contour of the target obstacle with the intrinsic and extrinsic parameters of the TOF camera, converting the image contour of the target obstacle from the imaging plane of the TOF camera into the actual physical contour model of the target obstacle in the world coordinate system by the triangulation principle.
8. The obstacle segmentation processing method according to claim 7, wherein the actual physical contour model of the target obstacle includes: in a field of view area of a TOF camera, the horizontal distance between the leftmost side of the target obstacle and the center of the robot body, the horizontal distance between the rightmost side of the target obstacle and the center of the robot body, and the longitudinal height information of the target obstacle;
the view field area of the TOF camera is an overlapping area of an effective distance measurement range of the TOF camera and a view angle range of the TOF camera.
9. The obstacle segmentation processing method according to claim 8, wherein the depth information and the longitudinal height information of the target obstacle are processed by a filtering and statistical algorithm to classify the target obstacle as a wall-type obstacle, a threshold-type obstacle, a toy-type obstacle, or a wire-type obstacle.
10. A chip, wherein the chip is configured to store a program corresponding to the method for processing obstacle segmentation based on a TOF camera according to any one of claims 1 to 9.
CN202011340709.8A 2020-11-25 2020-11-25 Obstacle segmentation processing method and chip based on TOF camera Pending CN112308039A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011340709.8A CN112308039A (en) 2020-11-25 2020-11-25 Obstacle segmentation processing method and chip based on TOF camera

Publications (1)

Publication Number Publication Date
CN112308039A true CN112308039A (en) 2021-02-02

Family

ID=74336110

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011340709.8A Pending CN112308039A (en) 2020-11-25 2020-11-25 Obstacle segmentation processing method and chip based on TOF camera

Country Status (1)

Country Link
CN (1) CN112308039A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150136209A (en) * 2014-05-26 2015-12-07 서울대학교산학협력단 Obstacle avoidance system and method based on multiple images
CN111067439A (en) * 2019-12-31 2020-04-28 深圳飞科机器人有限公司 Obstacle processing method and cleaning robot
CN111358360A (en) * 2018-12-26 2020-07-03 珠海市一微半导体有限公司 Method and device for preventing robot from winding wire, chip and sweeping robot
CN111624997A (en) * 2020-05-12 2020-09-04 珠海市一微半导体有限公司 Robot control method and system based on TOF camera module and robot
CN111726591A (en) * 2020-06-22 2020-09-29 珠海格力电器股份有限公司 Map updating method, map updating device, storage medium and electronic equipment
CN111897335A (en) * 2020-08-02 2020-11-06 珠海市一微半导体有限公司 Obstacle avoidance control method and control system for robot walking in Chinese character' gong
CN111949021A (en) * 2020-07-30 2020-11-17 尚科宁家(中国)科技有限公司 Self-propelled robot and control method thereof

Similar Documents

Publication Publication Date Title
CN112327878B (en) Obstacle classification and obstacle avoidance control method based on TOF camera
EP3349041A1 (en) Object detection system
JP5531474B2 (en) Map generation device, runway estimation device, movable region estimation device, and program
JP3349060B2 (en) Outside monitoring device
JP5822255B2 (en) Object identification device and program
EP3293669A1 (en) Enhanced camera object detection for automated vehicles
US11093762B2 (en) Method for validation of obstacle candidate
CN110865393A (en) Positioning method and system based on laser radar, storage medium and processor
CN111537994B (en) Unmanned mine card obstacle detection method
CN112363513A (en) Obstacle classification and obstacle avoidance control method based on depth information
CN106371104A (en) Vehicle targets recognizing method and anti-collision device using multi-line point cloud data machine learning
JP5868586B2 (en) Road characteristic analysis based on video image, lane detection, and lane departure prevention method and apparatus
JPH085388A (en) Running road detecting device
CN110674705A (en) Small-sized obstacle detection method and device based on multi-line laser radar
WO2020080088A1 (en) Information processing device
CN112327879A (en) Edge obstacle avoidance method based on depth information
CN113379776A (en) Road boundary detection method
CN114103993A (en) Vehicle driving control device and method
CN116311127A (en) Road boundary detection method, computer equipment, readable storage medium and motor vehicle
KR20230101560A (en) Vehicle lidar system and object detecting method thereof
CN112308039A (en) Obstacle segmentation processing method and chip based on TOF camera
US11861914B2 (en) Object recognition method and object recognition device
US20230258813A1 (en) LiDAR Free Space Data Generator and LiDAR Signal Processing Method Using Multi-Modal Noise Filtering Scheme
CN110618420A (en) Ultrasonic data processing method and system, vehicle and storage medium
CN113496199B (en) Histogram-based L-shaped detection of target objects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 519000 2706, No. 3000, Huandao East Road, Hengqin new area, Zhuhai, Guangdong

Applicant after: Zhuhai Yiwei Semiconductor Co.,Ltd.

Address before: Room 105-514, No.6 Baohua Road, Hengqin New District, Zhuhai City, Guangdong Province

Applicant before: AMICRO SEMICONDUCTOR Co.,Ltd.
