CN113269838A - Obstacle visual detection method based on FIRA platform - Google Patents

Obstacle visual detection method based on FIRA platform

Info

Publication number
CN113269838A
CN113269838A (application CN202110552402.2A)
Authority
CN
China
Prior art keywords
obstacles
obstacle
pixels
coordinate
row
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110552402.2A
Other languages
Chinese (zh)
Other versions
CN113269838B (en)
Inventor
刘瑾瑜
武彤晖
危渊
钟梦溪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN202110552402.2A priority Critical patent/CN113269838B/en
Publication of CN113269838A publication Critical patent/CN113269838A/en
Application granted granted Critical
Publication of CN113269838B publication Critical patent/CN113269838B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30244 Camera pose

Abstract

The invention discloses an obstacle visual detection method based on a FIRA platform, which comprises the following steps: image preprocessing, namely converting the input image to grayscale and performing erosion and dilation operations; longitudinal obstacle extraction, namely filtering the pixels belonging to obstacles out of the preprocessed image and extracting the obstacle position corresponding to each column of pixels by counting the obstacle pixels in each column; obstacle segmentation, namely scanning along the row direction over the entity coordinates extracted for each column, judging whether each column belongs to the same obstacle, and judging the occlusion relation between adjacent obstacles; exception handling, namely performing exception handling on the preliminary segmentation result, extracting side-by-side obstacles of excessive width and splitting them at the midpoint into two obstacles, and extracting obstacles at the picture edge and determining their positions from the vertices of their visible sides; and result output. By fully exploiting the geometric information of the environment and the camera imaging principle, the invention greatly reduces the computing resources required while meeting the accuracy requirement.

Description

Obstacle visual detection method based on FIRA platform
Technical Field
The invention belongs to the field of robot motion control, and particularly relates to an obstacle visual detection method based on a FIRA platform.
Background
Visual obstacle avoidance is an important means of acquiring environmental information in the field of robot motion control; it is the principal way animals avoid obstacles while moving in nature, and it is one of the main challenges in the fields of autonomous driving and robot automation. Several methods currently exist for visual obstacle avoidance on ground robots, such as SLAM, image moments, and so on.
The FIRA simulated obstacle-avoidance challenge environment is a virtual platform, based on the Gazebo physics simulation engine under ROS (Robot Operating System), for driving a TurtleBot wheeled robot to move and avoid obstacles; it is the official platform of the international FIRA SimuroSot robot challenge. In this environment, the two-wheeled robot must be controlled using only the single RGB image sensor on the TurtleBot and moved to the end point of a predetermined field. In the FIRA environment there are 3 to 6 static or moving obstacles, all pure black cuboids with a side length of 0.5 m, as shown in FIG. 1.
Visual SLAM (Simultaneous Localization And Mapping) was first applied in the field of robotics. Its goal is to construct a real-time map of the surrounding environment from sensor data without any prior knowledge and, at the same time, to infer the robot pose from that map. In the FIRA challenge scene, because the texture information of the obstacles is not distinctive enough, extracting key points with visual SLAM is inefficient, and the prior knowledge that all boxes have the same size cannot be used to assist obstacle avoidance.
Image moments can provide qualitative obstacle position information by computing the grayscale statistics of an image along each dimension. They can complete obstacle avoidance in scenes with few obstacles, but suffer from low precision and weak logical capability.
The existing research provides some schemes for visual obstacle avoidance on planar robots, but the following problems remain to be improved in this obstacle-avoidance scenario:
1. Some methods are slow in detection and consume large amounts of computing resources;
2. The prior information defined by the scene setting is difficult to exploit fully;
3. The visual information provided can only be applied to local planning, not to global planning.
Disclosure of Invention
The invention provides an obstacle visual detection method based on a FIRA platform, which aims to solve the low detection precision and speed of traditional detection methods in the FIRA obstacle-avoidance environment, and directly provides the coordinates and occlusion relations of the boxes. On an ordinary personal computer (PC) it achieves a detection time within 2 ms and a detection precision within 1 cm (obstacle distance not exceeding 3 m) or within 3 cm (obstacle distance not exceeding 6 m). Combined with a matching path-planning method, it can complete obstacle-avoidance tasks on the order of ten seconds, with an overall success rate exceeding 99%.
The invention is realized by adopting the following technical scheme:
An obstacle visual detection method based on a FIRA platform comprises the following steps:
step 1, image preprocessing, namely converting the input image to grayscale and performing two successive erosion and dilation operations;
step 2, longitudinal obstacle extraction, namely filtering the pixels belonging to obstacles out of the preprocessed image by a pixel threshold, and extracting the obstacle position corresponding to each column of pixels by counting the obstacle pixels in each column with the help of the camera imaging principle;
step 3, obstacle segmentation, namely scanning along the row direction over the entity coordinates extracted for each column, judging from the distance difference between preceding and following columns whether they belong to the same obstacle, and judging the occlusion relation between adjacent obstacles from whether a blank column exists between them;
step 4, exception handling, namely performing exception handling on the preliminary segmentation result, extracting side-by-side obstacles of excessive width and splitting them at the midpoint into two obstacles, and extracting obstacles at the picture edge and determining their positions from the vertices of their visible sides;
and step 5, result output, namely outputting the result after exception handling, including the planar two-dimensional coordinates of each box and the mutual occlusion relations.
A further improvement of the invention is that, in step 1, the erosion and dilation kernels are both 2 pixels in size.
A further improvement of the invention is that step 2 is specifically implemented as follows:
threshold each pixel according to the specific lighting conditions of the simulation environment, keep the pixels belonging to obstacles and set the rest to null values; then count the number of obstacle pixels in each column and, according to formulas (1) and (2), calculate the position, relative to the robot camera, of the obstacle corresponding to the α-th column containing β obstacle pixels;
X=LENTHAT1M/β (1)
Y=X×(α-HALFWID)×VALUEAT1M (2)
wherein LENTHAT1M and VALUEAT1M are coefficients, determined by calibration, that characterize the obstacle surface at x and y coordinates of 1 m.
A further improvement of the invention is that the relevant parameters in the formulas are calibrated from the special cases of formulas (1) and (2) at a distance of 1 m.
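As an illustration of this calibration, the following sketch (a hypothetical procedure; only the names LENTHAT1M, VALUEAT1M and HALFWID come from the text) reads both coefficients off a single frame taken with a box placed exactly 1 m ahead of the camera, where formula (1) reduces to β = LENTHAT1M and formula (2) to Y = (α − HALFWID) × VALUEAT1M:

def calibrate_at_1m(beta_at_1m, alpha_at_1m, y_offset_at_1m, halfwid=320):
    """Hypothetical one-shot calibration at X = 1 m.
    beta_at_1m: obstacle pixel count in a column imaging the box front at 1 m.
    alpha_at_1m: index of that column (must be off-center, otherwise the lateral
    term vanishes and VALUEAT1M cannot be recovered).
    y_offset_at_1m: the known lateral offset (m) of that column's surface point."""
    lenthat1m = float(beta_at_1m)                         # formula (1) at X = 1 m
    valueat1m = y_offset_at_1m / (alpha_at_1m - halfwid)  # formula (2) at X = 1 m
    return lenthat1m, valueat1m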
A further improvement of the invention is that step 3 is specifically implemented as follows:
segment the scene obstacles from the change in position between neighboring pixels, scanning the picture from left to right in sequence: if the difference between the x coordinate corresponding to position α1 and the x coordinate corresponding to α1+6 is less than 0.004 m, then α1 is regarded as the starting point of the front face of an obstacle box; if the difference between the x coordinate corresponding to position α2 and the x coordinate corresponding to α2+4 is greater than 0.13 m, then α2+1 is regarded as the end point of the front face of the obstacle box; if the difference between the x coordinates corresponding to the starting point α1 and the position α2+1 is less than 0.05 m, the x coordinate at α1 is taken as the x coordinate of the box; if the difference between the x coordinates of the starting and ending points is greater than 0.05 m, the x coordinate at (α1+α2+1)/2 is taken as the x coordinate of the box; if there is no empty element between two boxes, an occlusion relation exists between them, the nearer box occluding the farther one, and the relative occlusion relation is marked.
A further improvement of the invention is that step 4 is specifically implemented as follows:
supplement or correct the cases that cannot be handled in the segmentation process; for side-by-side boxes that the segmentation step cannot distinguish, keep the left and right boundaries of the detected obstacle unchanged and find the centers of the two boxes that preserve those boundaries as the new obstacle targets; when an obstacle lies at the far left or far right of the picture, extract the obstacle edge from the leftmost or rightmost column towards the middle; the last extracted column, the one closest to the middle column, corresponds to the part of the obstacle closest to the center of the field, and the center coordinate can be obtained from the xy value of that point.
The invention has at least the following beneficial technical effects:
the invention greatly reduces the requirement of computing resources by fully utilizing the environmental geometric information and the camera imaging principle under the condition of meeting the precision requirement. The method has the following specific advantages:
firstly, the method comprises the following steps: the method fully utilizes the task environment information to perform detection and dimension reduction processing, so that the method can give consideration to both detection precision and speed. The living beings fully know the sensory perception attribute of self sense in the visual navigation process, and the prior knowledge of the environment is utilized for comparison, the invention fully considers the size of the box body participating in the environment in the imaging camera, thereby realizing single-frame direct measurement by utilizing RGB images, and simultaneously ensuring high detection speed of 2ms and high detection precision of 3cm by using a dimension reduction detection method. Compared with the traditional detection method, the method has the advantage that the effective information is more fully utilized.
Secondly, the method comprises the following steps: and exception handling is carried out according to different limit imaging conditions, so that the stability and reliability of detection are ensured. Aiming at various unusual image conditions such as shielding, edges, side surfaces and the like, the invention provides a proper investigation flow and an abnormal processing mechanism, and can be self-adaptive to measurement challenges brought by various relative position changes in the motion process.
Thirdly, the method comprises the following steps: the invention adopts a setting method of proper parameters, and can adapt to various environment settings. Even under the condition that the camera parameters cannot be known completely, the method can simply and conveniently measure the required parameters through experiments, reduce the degree of dependence on the environment, consider the comprehensive action result of various noise signals and have small system deviation.
Fourthly: the judgment of the barrier relation understood based on the environmental information enables the invention to provide the condition that one barrier is blocked, and more information is provided for further data screening and path planning.
Drawings
Fig. 1 is a schematic diagram of the FIRA simulation obstacle avoidance environment.
Fig. 2 is a schematic diagram of an input picture sample.
Fig. 3 is a schematic diagram of a sample of a preprocessed image.
FIG. 4 is a schematic view of a sample of side-by-side boxes.
FIG. 5 is a schematic view of a sample image edge box.
FIG. 6 is a flow chart of the method of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
The invention provides an obstacle visual detection method based on a FIRA platform, which comprises five parts in order: preprocessing, extraction, segmentation, exception handling and result output.
The input picture is, for example, as shown in FIG. 2: a color RGB image of 640 × 480 pixels, in which the black parts are the obstacle boxes. Converting the color image to a grayscale image effectively reduces the computation without losing obstacle detail information. Grayscale erosion and dilation can compensate for the obstacle information of the parts occluded by the field cycloid while also reducing the noise passed to the next step; a sample result after image preprocessing is shown in FIG. 3.
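A minimal preprocessing sketch in Python with OpenCV, assuming the 640 × 480 RGB input described above; the 2-pixel kernel and the two passes follow the text, while pairing each erosion directly with a dilation is an assumption about the order of the operations:

import cv2
import numpy as np

def preprocess(bgr_image):
    """Grayscale conversion followed by two erode-dilate passes with a 2-pixel kernel."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)  # RGB-to-gray dimensionality reduction
    kernel = np.ones((2, 2), np.uint8)                  # 2-pixel erosion/dilation kernel
    out = gray
    for _ in range(2):                                  # two successive passes
        out = cv2.erode(out, kernel)
        out = cv2.dilate(out, kernel)
    return out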
Extraction means extracting and counting the pixels of the image that belong longitudinally to obstacles: a threshold is set for the specific simulated lighting environment, the number of obstacle pixels in each column is counted, and finally a 640-element array with values between 0 and 480 is obtained. According to the camera imaging principle, the number of obstacle pixels in a column and the column coordinate where they lie determine the unique two-dimensional coordinate xy of the corresponding entity. In the robot FLU coordinate system, the box position XY corresponding to the α-th column containing β obstacle pixels is
X=LENTHAT1M/β (1)
Y=X×(α-HALFWID)×VALUEAT1M (2)
wherein LENTHAT1M and VALUEAT1M are coefficients, determined by calibration, that characterize the obstacle surface at x and y coordinates of 1 m. The position of the actual object corresponding to each column of the image can thus be calculated, but the noise still remains to be processed further.
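A sketch of this column-wise extraction is given below; formula (1) is written here in the inverse-proportional form implied by the 1 m calibration and the divide-by-zero check mentioned in the embodiment, which is a reading of the text rather than a verbatim reproduction, and the numeric values of the constants are placeholders:

import numpy as np

LENTHAT1M = 160.0   # pixel count of the box front at X = 1 m (placeholder calibration value)
VALUEAT1M = 0.0031  # lateral meters per pixel at X = 1 m (placeholder calibration value)
HALFWID = 320       # half the image width in pixels

def columns_to_coordinates(obstacle_mask):
    """obstacle_mask: HxW boolean array, True where a pixel belongs to an obstacle."""
    beta = obstacle_mask.sum(axis=0).astype(float)  # obstacle pixels per column
    alpha = np.arange(obstacle_mask.shape[1])       # column indices
    with np.errstate(divide='ignore'):
        X = np.where(beta > 0, LENTHAT1M / beta, np.nan)  # formula (1): depth from pixel count
    Y = X * (alpha - HALFWID) * VALUEAT1M                 # formula (2): lateral offset
    return X, Y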
Segmentation means clustering the obstacle pixel coordinates and judging which obstacle each part of the actual image belongs to, so that the column pixels are converted into coordinates of actual objects. The parts of a box that can generally be photographed are mainly the front and the side, and a coordinate jump, accompanied by noise, necessarily occurs at the starting and ending positions of the box. Since the pixels of a box are continuous along the α direction, the key point is to determine the start and end positions of the box; the criteria for segmenting obstacles are as follows (a code sketch of these criteria is given after the list):
1) if α is1X coordinate and alpha are corresponded at the coordinate position1The difference of x coordinates corresponding to +6 is less than 0.004m, then the alpha is1The coordinates can be considered as the starting point of the front face of an obstacle box. If α is2X coordinate and alpha are corresponded at the coordinate position2+4 corresponds to the x coordinateA difference of more than 0.13m, then2The +1 coordinate may be considered as an end point of the front face of the barrier housing.
2) If the starting point is alpha1And end point alpha2If the difference of x coordinates corresponding to the +1 position is less than 0.05m, then alpha is selected1The x coordinate of the position is taken as the x coordinate of the box body; if the difference between the corresponding x coordinates of the starting point and the ending point is more than 0.05m, (alpha) is selected12The x coordinate at +1)/2 is taken as the x coordinate of this case.
3) In the initial calculation, the y coordinate of the box is taken as the average of the y coordinates corresponding to the starting and ending points, but this coordinate may deviate somewhat because of occlusion. If no empty element exists between two boxes, an occlusion relation exists between them, i.e. the nearer box occludes the farther one, and the relative occlusion relation is marked. Correspondingly, a box whose right side is occluded is corrected using its left side as the starting point, and symmetrically for a box whose left side is occluded; for a box occluded on both sides, no accurate y coordinate can be obtained at this stage, and the midpoint of the start and end points is kept as an approximation.
4) Through the above steps, three-dimensional information can be obtained for any obstacle that exposes the front portion: xy coordinates and mutual occlusion relationships.
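The following sketch implements the column scan described by criteria 1)-3); the thresholds are those quoted above, while the box record fields and the helper structure are assumptions made for illustration:

import numpy as np

def segment_obstacles(X):
    """X: per-column depth in meters, np.nan where a column holds no obstacle pixel."""
    boxes, start = [], None
    n = len(X)
    for a in range(n):
        if start is None:
            # criterion 1): depth nearly flat over the next 6 columns marks a box start
            if a + 6 < n and np.isfinite(X[a]) and np.isfinite(X[a + 6]) \
                    and abs(X[a] - X[a + 6]) < 0.004:
                start = a
        else:
            # criterion 1): a depth jump over 0.13 m within 4 columns marks the box end
            if a + 4 < n and np.isfinite(X[a]) and np.isfinite(X[a + 4]) \
                    and abs(X[a + 4] - X[a]) > 0.13:
                end = a + 1
                # criterion 2): pick the box x coordinate
                box_x = X[start] if abs(X[start] - X[end]) < 0.05 else X[(start + end) // 2]
                boxes.append({'start': start, 'end': end, 'x': box_x})
                start = None
    # criterion 3): no empty column between two boxes means the nearer occludes the farther
    for left, right in zip(boxes, boxes[1:]):
        gap = X[left['end'] + 1:right['start']]
        if gap.size == 0 or np.all(np.isfinite(gap)):
            if left['x'] < right['x']:
                right['occluded_left'] = True
            else:
                left['occluded_right'] = True
    return boxes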
Exception handling supplements or corrects special cases that cannot be handled in the segmentation process. The cases requiring exception handling mainly include the recognition of boxes at the picture edges and the splitting of side-by-side boxes.
For side-by-side boxes that the segmentation step cannot distinguish, as shown in FIG. 4, the distinctive feature is that the detected width is far larger than that of a single obstacle; the left and right boundaries of the detected obstacle are kept unchanged, and the centers of the two boxes that preserve those boundaries are taken as the new obstacle targets.
When an obstacle lies at the far left or far right of the picture, as shown in FIG. 5, the distinctive feature is that obstacle pixels exist at the image edge but have not been segmented into a box, and exception handling is triggered. The obstacle edge is then extracted from the leftmost or rightmost column towards the middle; the last extracted column corresponds to the part of the obstacle closest to the center of the field, and the center coordinate can be obtained from the xy value of that point.
Examples
Obstacle detection by the dimensionality-reduction method mainly comprises five parts, namely image preprocessing, longitudinal obstacle extraction, obstacle segmentation, exception handling and result output, as follows:
1. image preprocessing:
As shown in FIG. 3, the input image (640 × 480) is first reduced from RGB color to grayscale. Erosion and dilation operations are then carried out twice in sequence, with both the erosion and dilation kernels 2 pixels in size.
2. Longitudinal obstacle extraction:
and taking the lightness 20 as a threshold value, and extracting the number of pixels with the brightness of each column smaller than the threshold value to obtain a one-dimensional array with the length of 640. And processing according to the formula 1 to obtain xy coordinates, if a column does not have any obstacle pixel, dividing by zero in the formula brings program errors, special judgment is carried out before division, and the result is directly marked as a special value. If the absolute value of the x coordinate of the obstacle is larger than six meters, or the absolute value of the y coordinate of the obstacle is smaller than three meters and larger than three meters, the obstacle detection is invalid when the obstacle detection is out of bounds, and the obstacle detection is an abnormal value caused by noise in the detection process and is directly eliminated.
3. Obstacle segmentation:
This step clusters along the column direction. The distances are analyzed sequentially from left to right to judge whether each column's obstacle distance marks a special point of a box: if the distance difference between a certain column and the sixth column after it is less than 0.004 m, the obstacle represented by that column is regarded as the starting point of a box; an ending point must exist to the right of a starting point, and if the distance difference between a certain column and the fourth column after it is greater than 0.13 m, the obstacle represented by the next column is regarded as the ending point of the box. The x-axis coordinate of the detected box center in the robot FLU coordinate system is then computed preliminarily: if the difference between the x-axis coordinates of the starting and ending points is less than 0.05 m, the x-axis coordinate of the starting point is taken as the box center coordinate; otherwise, the x-axis coordinate at the midpoint of the starting and ending points in the pixel coordinate system is taken as the box center coordinate.
In the pixel coordinate system, if a blank column exists between the right boundary of the left box and the left boundary of the right box, the two boxes have no occlusion relation from this viewpoint; otherwise the box with the smaller center x coordinate occludes part of the other box, and the occlusion information of adjacent obstacles is compared according to this principle. For a box that is not occluded, or a box occluded on both sides, the average of the y coordinates of its starting and ending points in the FLU coordinate system is taken as its center value; for a box occluded on one side, the coordinate of the starting or ending point on the unoccluded side is used to infer the center coordinate of the whole box.
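A sketch of this center-y estimation is shown below; the 0.25 m half-width follows the stated 0.5 m box side length, while the box record fields and the sign convention of the lateral axis (y taken to increase with the column index) are assumptions carried over from the earlier sketches:

BOX_HALF = 0.25  # half of the 0.5 m box side length

def estimate_center_y(box, Y):
    """Infer the box center y from its visible start/end columns and its occlusion flags."""
    y_start, y_end = Y[box['start']], Y[box['end']]
    if box.get('occluded_left') and not box.get('occluded_right'):
        return y_end - BOX_HALF     # only the right edge is reliable
    if box.get('occluded_right') and not box.get('occluded_left'):
        return y_start + BOX_HALF   # only the left edge is reliable
    return 0.5 * (y_start + y_end)  # unoccluded, or occluded on both sides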
4. Exception handling
Two abnormal situations are detected. The first is the detection and handling of side-by-side boxes: if the distance between the y coordinates of a box's start point and end point is more than 1.5 times the box length, side-by-side boxes are present, as shown in FIG. 4. The start point and end point of the abnormal detection are taken as the start point and end point of the two contained boxes respectively, and the corresponding center coordinates are calculated with these points as reference.
The second is the detection and handling of boxes at the picture edge: if a box pixel exists in the leftmost column of the image but has not been detected as a starting point, a left-edge box is present (and likewise on the right), as shown in FIG. 5. If the difference between a certain column and the fourth column after it is greater than 0.20 m, the pixels at that position represent the rightmost boundary at the rear of the box; the box center coordinate is calculated with this boundary as reference, supplementing the obstacle position and occlusion information.
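The two exception branches can be sketched as follows; the 0.5 m box geometry and the 1.5× and 0.20 m thresholds come from the text, while the center reconstruction from a single visible corner is simplified here and should be read as an assumption:

import numpy as np

BOX_LEN = 0.5  # box side length in meters

def split_side_by_side(box, X, Y):
    """If a detected front spans more than 1.5 box lengths, split it into two boxes."""
    if abs(Y[box['end']] - Y[box['start']]) > 1.5 * BOX_LEN:
        left = {'x': X[box['start']], 'y': Y[box['start']] + BOX_LEN / 2}
        right = {'x': X[box['end']], 'y': Y[box['end']] - BOX_LEN / 2}
        return [left, right]
    return [box]

def find_left_edge_box_corner(X, detected_starts):
    """A box pixel in column 0 that was never marked as a start point implies a box cut
    off by the left image border; scan rightwards for the 0.20 m depth jump that marks
    the box's innermost visible corner (the right border is handled symmetrically)."""
    if np.isfinite(X[0]) and 0 not in detected_starts:
        for a in range(len(X) - 4):
            if np.isfinite(X[a]) and np.isfinite(X[a + 4]) and abs(X[a] - X[a + 4]) > 0.20:
                return a
    return None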
5. Result output
A three-dimensional detection result is output: the x and y coordinates of each box and its occlusion information. The occlusion information is one of (left side occluded, right side occluded, both sides occluded, cut off beyond the picture), and this obstacle information is supplied to the obstacle-avoidance method for path planning.
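A hypothetical output record matching this description is sketched below; the field and enum names are illustrative and do not come from the original:

from dataclasses import dataclass
from enum import Enum

class Occlusion(Enum):
    NONE = 0            # not occluded
    LEFT = 1            # left side occluded
    RIGHT = 2           # right side occluded
    BOTH = 3            # both sides occluded
    BEYOND_PICTURE = 4  # box cut off by the picture edge

@dataclass
class ObstacleDetection:
    x: float              # forward distance in the robot FLU frame, meters
    y: float              # lateral offset in the robot FLU frame, meters
    occlusion: Occlusion  # mutual occlusion relation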
Although the invention has been described in detail hereinabove with respect to a general description and specific embodiments thereof, it will be apparent to those skilled in the art that modifications or improvements may be made thereto based on the invention. Accordingly, such modifications and improvements are intended to be within the scope of the invention as claimed.

Claims (6)

1. An obstacle visual detection method based on a FIRA platform, characterized by comprising the following steps:
step 1, image preprocessing, namely converting the input image to grayscale and performing two successive erosion and dilation operations;
step 2, longitudinal obstacle extraction, namely filtering the pixels belonging to obstacles out of the preprocessed image by a pixel threshold, and extracting the obstacle position corresponding to each column of pixels by counting the obstacle pixels in each column with the help of the camera imaging principle;
step 3, obstacle segmentation, namely scanning along the row direction over the entity coordinates extracted for each column, judging from the distance difference between preceding and following columns whether they belong to the same obstacle, and judging the occlusion relation between adjacent obstacles from whether a blank column exists between them;
step 4, exception handling, namely performing exception handling on the preliminary segmentation result, extracting side-by-side obstacles of excessive width and splitting them at the midpoint into two obstacles, and extracting obstacles at the picture edge and determining their positions from the vertices of their visible sides;
and step 5, result output, namely outputting the result after exception handling, including the planar two-dimensional coordinates of each box and the mutual occlusion relations.
2. The obstacle visual detection method based on the FIRA platform as claimed in claim 1, characterized in that, in step 1, the erosion and dilation kernels are both 2 pixels in size.
3. The obstacle visual detection method based on the FIRA platform as claimed in claim 1, characterized in that step 2 is specifically implemented as follows:
threshold each pixel according to the specific lighting conditions of the simulation environment, keep the pixels belonging to obstacles and set the rest to null values; then count the number of obstacle pixels in each column and, according to formulas (1) and (2), calculate the position, relative to the robot camera, of the obstacle corresponding to the α-th column containing β obstacle pixels;
X=LENTHAT1M/β (1)
Y=X×(α-HALFWID)×VALUEAT1M (2)
wherein LENTHAT1M and VALUEAT1M are coefficients, determined by calibration, that characterize the obstacle surface at x and y coordinates of 1 m.
4. The obstacle visual detection method based on the FIRA platform, characterized in that the relevant parameters in the formulas are calibrated from the special cases of formulas (1) and (2) at a distance of 1 m.
5. The obstacle visual detection method based on the FIRA platform as claimed in claim 1, characterized in that step 3 is specifically implemented as follows:
segment the scene obstacles from the change in position between neighboring pixels, scanning the picture from left to right in sequence: if the difference between the x coordinate corresponding to position α1 and the x coordinate corresponding to α1+6 is less than 0.004 m, then α1 is regarded as the starting point of the front face of an obstacle box; if the difference between the x coordinate corresponding to position α2 and the x coordinate corresponding to α2+4 is greater than 0.13 m, then α2+1 is regarded as the end point of the front face of the obstacle box; if the difference between the x coordinates corresponding to the starting point α1 and the position α2+1 is less than 0.05 m, the x coordinate at α1 is taken as the x coordinate of the box; if the difference between the x coordinates of the starting and ending points is greater than 0.05 m, the x coordinate at (α1+α2+1)/2 is taken as the x coordinate of the box; if there is no empty element between two boxes, an occlusion relation exists between them, the nearer box occluding the farther one, and the relative occlusion relation is marked.
6. The obstacle visual detection method based on the FIRA platform as claimed in claim 1, characterized in that step 4 is specifically implemented as follows:
supplement or correct the cases that cannot be handled in the segmentation process; for side-by-side boxes that the segmentation step cannot distinguish, keep the left and right boundaries of the detected obstacle unchanged and find the centers of the two boxes that preserve those boundaries as the new obstacle targets; when an obstacle lies at the far left or far right of the picture, extract the obstacle edge from the leftmost or rightmost column towards the middle; the last extracted column, the one closest to the middle column, corresponds to the part of the obstacle closest to the center of the field, and the center coordinate can be obtained from the xy value of that point.
CN202110552402.2A 2021-05-20 2021-05-20 Obstacle visual detection method based on FIRA platform Active CN113269838B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110552402.2A CN113269838B (en) 2021-05-20 2021-05-20 Obstacle visual detection method based on FIRA platform


Publications (2)

Publication Number Publication Date
CN113269838A true CN113269838A (en) 2021-08-17
CN113269838B CN113269838B (en) 2023-04-07

Family

ID=77232060

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110552402.2A Active CN113269838B (en) 2021-05-20 2021-05-20 Obstacle visual detection method based on FIRA platform

Country Status (1)

Country Link
CN (1) CN113269838B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107248166A (en) * 2017-06-29 2017-10-13 武汉工程大学 Dbjective state predictor method under dynamic environment
US20200048871A1 (en) * 2017-08-24 2020-02-13 Hitachi Construction Machinery Co., Ltd. Working machine
CN109034018A (en) * 2018-07-12 2018-12-18 北京航空航天大学 A kind of low latitude small drone method for barrier perception based on binocular vision
US20210117704A1 (en) * 2019-06-27 2021-04-22 Sensetime Group Limited Obstacle detection method, intelligent driving control method, electronic device, and non-transitory computer-readable storage medium
CN111323767A (en) * 2020-03-12 2020-06-23 武汉理工大学 Night unmanned vehicle obstacle detection system and method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DINGFU ZHOU等: "Moving object detection and segmentation in urban environments from a moving platform", 《IMAGE AND VISION COMPUTING》 *
SHUO CHANG等: "Spatial Attention Fusion for Obstacle Detection Using MmWave Radar and Vision Sensor", 《SENSORS 2020》 *
曲峰 (Qu Feng): "Research on vision-based structured road and obstacle detection technology", 《Engineering Science and Technology II》 *
沙世伟 (Sha Shiwei): "Research on key algorithms for radar detection of railway line obstacles", 《Railway Transport and Economy》 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113925390A (en) * 2021-10-19 2022-01-14 珠海一微半导体股份有限公司 Cross-regional channel identification method based on map image, robot and chip
CN115480591A (en) * 2022-10-20 2022-12-16 广东电网有限责任公司云浮供电局 Safety obstacle avoidance method for unmanned aerial vehicle for power distribution network equipment environment inspection
CN115480591B (en) * 2022-10-20 2023-09-12 广东电网有限责任公司云浮供电局 Safety obstacle avoidance method for unmanned aerial vehicle in power distribution network equipment environment inspection

Also Published As

Publication number Publication date
CN113269838B (en) 2023-04-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant