CN110826512A - Ground obstacle detection method, ground obstacle detection device, and computer-readable storage medium


Info

Publication number: CN110826512A (application number CN201911103707.4A); granted as CN110826512B
Authority: CN (China)
Prior art keywords: obstacle, polar coordinate, contour, angle, projection
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventors: 赵健章, 邹振华
Assignee (original and current): Shenzhen Skyworth Digital Technology Co Ltd
Application filed by Shenzhen Skyworth Digital Technology Co Ltd
Priority application: CN201911103707.4A
PCT application: PCT/CN2020/112132 (WO2021093418A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention discloses a ground obstacle detection method, a ground obstacle detection device, and a computer-readable storage medium. The method comprises the following steps: when obstacles are detected by a camera device mounted on a vehicle body, projecting each obstacle in a preset direction to generate a projection height for each obstacle; dividing the obstacles into first obstacles and second obstacles according to the projection heights, and identifying a first contour of each first obstacle in the preset direction; performing texture edge detection on each second obstacle to generate a second contour; and determining the target obstacle closest to the vehicle body according to the first contours and second contours. For tall obstacles, shaking of the vehicle body does not interfere with the image processing, so recognition accuracy is ensured; for short obstacles, detection is achieved without reducing ground precision. Various obstacles in the transportation environment can thus be comprehensively identified, and environmental compatibility is strong.

Description

Ground obstacle detection method, ground obstacle detection device, and computer-readable storage medium
Technical Field
The invention relates to the technical field of intelligent driving, in particular to a ground obstacle detection method, ground obstacle detection equipment and a computer-readable storage medium.
Background
With the development of intelligent technology, intelligent warehouses are applied more and more widely. At present, an intelligent warehouse guides autonomous vehicles through navigation to transport goods. During transportation, if obstacles exist on or around the transportation path, their distances must be detected to avoid the danger of collision.
Traditional lidar SLAM (simultaneous localization and mapping) can be used directly for navigation. However, while processing images of obstacles, shaking of the vehicle body may change the actual measurement angle of a camera mounted on the vehicle body, causing severe ground-removal interference and even false-obstacle interference. If the ground precision is lowered to compensate, very short obstacles cannot be identified, so obstacle detection is incomplete.
Disclosure of Invention
The main object of the present invention is to provide a ground obstacle detection method, a ground obstacle detection device, and a computer-readable storage medium, aiming to solve the technical problems of inaccurate and incomplete ground obstacle detection in the prior art.
In order to achieve the above object, the present invention provides a ground obstacle detecting method, including the steps of:
when an obstacle is detected based on a camera device installed on a vehicle body, performing projection in a preset direction on each obstacle to generate a projection height of each obstacle;
dividing each obstacle into a first obstacle and a second obstacle according to each projection height, and identifying a first contour of the first obstacle in a preset direction;
performing texture edge detection on the second obstacle to generate a second contour of the second obstacle;
and determining a target obstacle closest to the position of the vehicle body according to the first contour and the second contour.
Preferably, the step of determining a target obstacle closest to the position of the vehicle body according to the first profile and the second profile includes:
converting the first contour into a first polar coordinate, converting the second contour into a second polar coordinate, and judging whether the first polar coordinate and the second polar coordinate are positioned in a preset angle range;
if the first polar coordinate and the second polar coordinate are within the preset angle range, combining the first polar coordinate and the second polar coordinate to generate a polar coordinate set;
and determining the target obstacle closest to the position of the vehicle body according to the polar coordinate set.
Preferably, the step of converting the first contour into a first polar coordinate comprises:
reading the installation height, the installation angle, the vertical view field angle, the horizontal view field angle, the effective pixel line number and the effective pixel column number of the camera device;
reading all pixel points in the first contour as measuring points, and executing the following steps for each measuring point one by one:
detecting the depth value between the measuring point and the camera device, and the number of the pixel rows and the number of the pixel columns of the measuring point;
determining a polar coordinate module value of the measuring point according to the installation angle, the vertical view field angle, the number of the pixel lines, the number of the effective pixel lines and the depth value;
determining the polar coordinate angle of the measuring point according to the horizontal view field angle, the mounting height, the mounting angle, the vertical view field angle, the number of the pixel columns, the number of the effective pixel columns, the number of the pixel rows and the number of the effective pixel rows;
and setting the polar coordinate module value and the polar coordinate angle as the polar coordinates of the measuring points, and forming each polar coordinate into a first polar coordinate after each measuring point generates the polar coordinate.
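The patent does not give the explicit formulas for the polar modulus and angle. As a rough illustration only, a pinhole-style approximation might look like the following; the function name, the linear spread of angles across the pixel grid, and the sign conventions are all assumptions, and the mounting-height term of the claimed angle computation is omitted for simplicity:

```python
import math

def contour_point_to_polar(depth, row, col, mount_angle_deg,
                           vfov_deg, hfov_deg, rows, cols):
    """Convert one contour pixel (depth reading plus pixel row/column) into
    a polar coordinate (modulus, angle in degrees) in the vehicle plane."""
    # Vertical deflection of this pixel row: optical-axis pitch plus a
    # linear spread across the rows (assumed model, not from the patent).
    pitch = math.radians(mount_angle_deg + vfov_deg * (0.5 - row / rows))
    # Horizontal deflection of this pixel column from straight ahead.
    yaw = math.radians(hfov_deg * (col / cols - 0.5))
    # Polar modulus: ground-plane distance from the camera to the point.
    modulus = depth * math.cos(pitch)
    # Polar angle: 0 degrees means directly ahead of the camera.
    return modulus, math.degrees(yaw)
```

For a pixel on the optical axis of a level camera, the modulus reduces to the raw depth and the angle to zero, which is a quick sanity check for any chosen convention.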
Preferably, the step of determining the target obstacle closest to the position of the vehicle body according to the polar coordinate set includes:
sequentially carrying out median filtering, denoising and mean filtering on each element in the polar coordinate set to generate a processing result;
merging all elements in the processing result to generate a target element, and calculating a first angle and a first distance between the target element and the camera device;
and determining a target obstacle closest to the position of the vehicle body according to the first angle and the first distance.
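A minimal sketch of the median-filter / denoise / mean-filter chain over the polar set, followed by selection of the closest sample. The window size and outlier tolerance are illustrative assumptions; the patent fixes neither:

```python
import statistics

def nearest_obstacle(polar_set, win=3, tol=0.5):
    """polar_set: list of (angle_deg, modulus) samples ordered by angle.
    Returns the (angle, distance) of the closest filtered sample."""
    half = win // 2
    mods = [m for _, m in polar_set]
    # 1. Median filter the modulus sequence to suppress impulse noise.
    med = [statistics.median(mods[max(0, i - half):i + half + 1])
           for i in range(len(mods))]
    # 2. Denoise: drop samples deviating too far from the filtered value.
    kept = [(a, md) for (a, m), md in zip(polar_set, med)
            if abs(m - md) <= tol]
    # 3. Mean filter the surviving moduli.
    smooth = [(kept[i][0],
               statistics.fmean(m for _, m in kept[max(0, i - half):i + half + 1]))
              for i in range(len(kept))]
    # 4. The target obstacle is the sample at the smallest distance.
    return min(smooth, key=lambda p: p[1])
```

A single spurious close reading (for example one modulus of 1.0 m amid readings of 2.0 m) is rejected in step 2 rather than being reported as the nearest obstacle.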
Preferably, after the step of judging whether the first polar coordinate and the second polar coordinate are located within the preset angle range, the method further includes:
if the first polar coordinate and the second polar coordinate are not located in a preset angle range, determining a second angle and a second distance between the first polar coordinate and the camera device, and a third angle and a third distance between the second polar coordinate and the camera device;
and comparing the element group formed between the second angle and the second distance with the element group formed between the third angle and the third distance to generate a comparison result, and determining the target obstacle closest to the position of the vehicle body according to the comparison result.
Preferably, the step of projecting each obstacle in a preset direction to generate a projection height of each obstacle includes:
reading the installation height, the installation angle, the view field angle and the number of effective pixel lines of the camera device, taking each obstacle as a point to be measured, and executing the following steps for each point to be measured one by one:
detecting a measurement depth value between the camera device and the point to be measured and the number of pixel lines of the point to be measured;
determining a deflection angle of a row where a pixel corresponding to the point to be measured is located according to the installation angle, the view field angle, the effective pixel row number and the pixel row number where the point is located;
determining a projection intermediate value of the obstacle in a preset direction according to the measured depth value and a deflection angle of a row where the pixel is located;
and generating the projection height of each obstacle according to the installation height and the projection intermediate value.
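Under assumed sign conventions (the installation angle is the downward pitch of the optical axis, and pixel row 0 is the top image line), the projection-height steps above can be sketched as:

```python
import math

def projection_height(depth, row, mount_height, mount_angle_deg,
                      vfov_deg, rows):
    """Height of a measured point above the ground, from its depth reading
    and pixel row. The linear row-to-angle mapping is an assumption."""
    # Deflection angle of the pixel row (downward positive).
    defl = math.radians(mount_angle_deg - vfov_deg * (0.5 - row / rows))
    # Projection intermediate value: vertical drop from camera to point.
    drop = depth * math.sin(defl)
    # Projection height = installation height minus the drop.
    return mount_height - drop
```

With these conventions a ground point seen at the bottom row of a level camera with a 60-degree vertical field, 1.0 m up and 2.0 m away along the ray, comes out at height 0, and a point on the optical axis comes out at the camera's own height.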
Preferably, the step of identifying a first contour of the first obstacle in a preset direction comprises:
and reading a projection view of the first obstacle in the preset direction, and sequentially performing mean filtering, edge extraction, contour searching and polyline fitting on the projection view to obtain the first contour.
Preferably, the step of performing texture edge detection on the second obstacle and generating a second contour of the second obstacle includes:
acquiring an imaged image corresponding to the second obstacle based on the camera device, and calling a first preset algorithm to perform fill-and-level (hole-filling) processing on the imaged image;
and calling a second preset algorithm to extract an initial contour from the fill-and-leveled image, and determining the second contour of the second obstacle according to the initial contour.
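The "first preset algorithm" and "second preset algorithm" are deliberately left unspecified by the patent. As a stand-in sketch only, the following fills invalid (zero) depth pixels with the image mean and marks large-gradient pixels as the initial texture edges; both choices are assumptions, not the patent's method:

```python
import numpy as np

def texture_edges(img, edge_thresh=30.0):
    """img: 2-D array of depth/intensity values, 0 meaning invalid.
    Returns a boolean mask of candidate texture-edge pixels."""
    img = img.astype(float)
    # Fill-and-level: replace invalid (zero) pixels with the valid mean.
    filled = np.where(img == 0, img[img > 0].mean(), img)
    # Initial contour: pixels with large local gradient magnitude.
    gy, gx = np.gradient(filled)
    return np.hypot(gx, gy) > edge_thresh
```

A real implementation would typically use a proper edge detector (Canny or similar) for the second stage; the gradient threshold here is only meant to make the two-stage structure concrete.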
Preferably, the step of dividing each obstacle into a first obstacle and a second obstacle according to each projection height includes:
comparing the projection height of each obstacle one by one with a preset projection threshold, and determining the target projection heights, namely those projection heights greater than the projection threshold;
dividing each obstacle into the first obstacle and the second obstacle according to the target projection height.
Further, to achieve the above object, the present invention also provides a ground obstacle detecting apparatus comprising a memory, a processor and a ground obstacle detecting program stored on the memory and operable on the processor, the ground obstacle detecting program when executed by the processor implementing the steps of the ground obstacle detecting method as described above.
Further, to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a ground obstacle detection program which, when executed by a processor, implements the steps of the ground obstacle detection method as described above.
When obstacles are detected by the camera device mounted on the vehicle body, each obstacle is first projected in a preset direction, and the obstacles are divided into first obstacles and second obstacles according to the heights represented by the projection heights; then a first contour in the preset direction is identified for each first obstacle, and texture edge detection is performed on each second obstacle to obtain its second contour; the target obstacle closest to the vehicle body is then determined from the positions of the first contours and second contours. Because the identified first contour depends little on the angle of the camera device, shaking of the vehicle body cannot interfere with the image processing, and recognition accuracy is ensured; and because the second contour is generated by texture edge detection, short obstacles can be detected without reducing ground precision. Various obstacles in the transportation environment can thus be comprehensively identified, and environmental compatibility is strong.
Drawings
FIG. 1 is a schematic diagram of a hardware operating environment according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a ground obstacle detection method according to a first embodiment of the present invention;
fig. 3 is a schematic flow chart of a ground obstacle detection method according to a second embodiment of the present invention;
fig. 4 is a schematic flow chart of a ground obstacle detection method according to a third embodiment of the present invention;
FIG. 5 is a schematic diagram of a first profile generated in the method for detecting a ground obstacle of the present invention;
fig. 6 is a schematic view of contour line generation in polar coordinate manner of the first contour and the second contour in the ground obstacle detecting method of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, fig. 1 is a schematic structural diagram of a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the ground obstacle detection device may include: a processor 1001, such as a CPU, a user interface 1003, a network interface 1004, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and may optionally also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 does not constitute a limitation of the ground obstacle detection device, which may include more or fewer components than those shown, or combine some components, or arrange components differently.
As shown in fig. 1, the memory 1005, as a kind of computer storage medium, may include an operating system, a network communication module, a user interface module, and a ground obstacle detection program. The operating system is a program for managing and controlling the hardware and software resources of the ground obstacle detection device, and supports the operation of the ground obstacle detection program and other software or programs.
In the ground obstacle detection device shown in fig. 1, the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; the network interface 1004 is mainly used for connecting a background server and performing data communication with the background server; and the processor 1001 may be configured to call the ground obstacle detection program stored in the memory 1005 and perform the following operations:
when an obstacle is detected based on a camera device installed on a vehicle body, performing projection in a preset direction on each obstacle to generate a projection height of each obstacle;
dividing each obstacle into a first obstacle and a second obstacle according to each projection height, and identifying a first contour of the first obstacle in a preset direction;
performing texture edge detection on the second obstacle to generate a second contour of the second obstacle;
and determining a target obstacle closest to the position of the vehicle body according to the first contour and the second contour.
Further, the step of determining a target obstacle closest to the position of the vehicle body according to the first contour and the second contour includes:
converting the first contour into a first polar coordinate, converting the second contour into a second polar coordinate, and judging whether the first polar coordinate and the second polar coordinate are positioned in a preset angle range;
if the first polar coordinate and the second polar coordinate are within the preset angle range, combining the first polar coordinate and the second polar coordinate to generate a polar coordinate set;
and determining the target obstacle closest to the position of the vehicle body according to the polar coordinate set.
Further, the step of converting the first contour into a first polar coordinate comprises:
reading the installation height, the installation angle, the vertical view field angle, the horizontal view field angle, the effective pixel line number and the effective pixel column number of the camera device;
reading all pixel points in the first contour as measuring points, and executing the following steps for each measuring point one by one:
detecting the depth value between the measuring point and the camera device, and the number of the pixel rows and the number of the pixel columns of the measuring point;
determining a polar coordinate module value of the measuring point according to the installation angle, the vertical view field angle, the number of the pixel lines, the number of the effective pixel lines and the depth value;
determining the polar coordinate angle of the measuring point according to the horizontal view field angle, the mounting height, the mounting angle, the vertical view field angle, the number of the pixel columns, the number of the effective pixel columns, the number of the pixel rows and the number of the effective pixel rows;
and setting the polar coordinate module value and the polar coordinate angle as the polar coordinates of the measuring points, and forming each polar coordinate into a first polar coordinate after each measuring point generates the polar coordinate.
Further, the step of determining the target obstacle closest to the position of the vehicle body according to the polar coordinate set includes:
sequentially carrying out median filtering, denoising and mean filtering on each element in the polar coordinate set to generate a processing result;
merging all elements in the processing result to generate a target element, and calculating a first angle and a first distance between the target element and the camera device;
and determining a target obstacle closest to the position of the vehicle body according to the first angle and the first distance.
Further, after the step of determining whether the first polar coordinate and the second polar coordinate are located within the preset angle range, the processor 1001 may be configured to call a ground obstacle detection program stored in the memory 1005, and perform the following operations:
if the first polar coordinate and the second polar coordinate are not located in a preset angle range, determining a second angle and a second distance between the first polar coordinate and the camera device, and a third angle and a third distance between the second polar coordinate and the camera device;
and comparing the element group formed between the second angle and the second distance with the element group formed between the third angle and the third distance to generate a comparison result, and determining the target obstacle closest to the position of the vehicle body according to the comparison result.
Further, the step of performing projection in a preset direction on each obstacle and generating a projection height of each obstacle includes:
reading the installation height, the installation angle, the view field angle and the number of effective pixel lines of the camera device, taking each obstacle as a point to be measured, and executing the following steps for each point to be measured one by one:
detecting a measurement depth value between the camera device and the point to be measured and the number of pixel lines of the point to be measured;
determining a deflection angle of a row where a pixel corresponding to the point to be measured is located according to the installation angle, the view field angle, the effective pixel row number and the pixel row number where the point is located;
determining a projection intermediate value of the obstacle in a preset direction according to the measured depth value and a deflection angle of a row where the pixel is located;
and generating the projection height of each obstacle according to the installation height and the projection intermediate value.
Further, the step of identifying a first contour of the first obstacle in a preset direction comprises:
and reading a projection view of the first obstacle in the preset direction, and sequentially performing mean filtering, edge extraction, contour searching and polyline fitting on the projection view to obtain the first contour.
Further, the step of performing texture edge detection on the second obstacle and generating a second contour of the second obstacle includes:
acquiring an imaged image corresponding to the second obstacle based on the camera device, and calling a first preset algorithm to perform fill-and-level (hole-filling) processing on the imaged image;
and calling a second preset algorithm to extract an initial contour from the fill-and-leveled image, and determining the second contour of the second obstacle according to the initial contour.
Further, the step of dividing each obstacle into a first obstacle and a second obstacle according to each projection height includes:
comparing the projection height of each obstacle one by one with a preset projection threshold, and determining the target projection heights, namely those projection heights greater than the projection threshold;
dividing each obstacle into the first obstacle and the second obstacle according to the target projection height.
Based on the above structure, various embodiments of the ground obstacle detection method are proposed.
Referring to fig. 2, fig. 2 is a schematic flow chart of a ground obstacle detection method according to a first embodiment of the present invention.
While a logical order is shown in the flow chart, in some cases, the steps shown or described may be performed in an order different than that shown or described herein.
Specifically, the ground obstacle detection method includes:
step S10, when obstacles are detected by the camera device mounted on the vehicle body, projecting each obstacle in the preset direction, and generating the projection height of each obstacle;
The ground obstacle detection method is applied to detecting the driving path and the obstacles around it during intelligent automatic driving, so as to ensure driving safety. Intelligent automatic driving is applicable both to warehouse freight in a closed environment and to road transportation in an open environment; this embodiment takes warehouse freight as the example. Specifically, a camera device is mounted on the vehicle body to enable automatic driving, and the camera device is preferably a stereo camera. While the vehicle is running, the stereo camera scans the surrounding environment in real time and judges whether obstacles exist on or around the driving path.
In addition, a three-dimensional spatial coordinate system is established in advance in this embodiment: the position of the stereo camera is the coordinate origin, the plane of the vehicle is the XY plane, and the upward direction perpendicular to the XY plane is the positive Z axis. Within the XY plane, the direction straight ahead of the vehicle is the Y axis, and the direction perpendicular to it on the right side of the vehicle is the X axis. Taking the positive Z-axis direction as the preset direction, when the camera device detects obstacles, it images each obstacle, forms the projection of each obstacle in the preset direction, and detects the projection height of each obstacle.
Step S20, dividing each obstacle into a first obstacle and a second obstacle according to each projection height, and identifying a first contour of the first obstacle in a preset direction;
Furthermore, the projection height of each obstacle represents its height. In order to distinguish tall obstacles from short ones, a projection threshold for dividing them is preset, and each obstacle can be classified as a first obstacle or a second obstacle according to the relation between its projection height and the projection threshold; the first obstacles are tall obstacles and the second obstacles are short obstacles. Specifically, the step of dividing each obstacle into a first obstacle and a second obstacle according to each projection height includes:
step S21, comparing the projection heights of the obstacles with the projection threshold one by one, and determining a target projection height which is greater than the projection threshold in the projection heights;
step S22, dividing each obstacle into the first obstacle and the second obstacle according to the target projection height.
In order to determine the relation between the projection heights and the projection threshold, the projection heights of the obstacles are compared with the projection threshold one by one, and the target projection heights, namely those greater than the projection threshold, are screened out; the obstacles whose projection heights are target projection heights are then determined as first obstacles, and the remaining obstacles as second obstacles.
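The screening described above amounts to a simple partition by the projection threshold; a sketch:

```python
def split_obstacles(heights, proj_threshold):
    """Partition obstacle indices into tall (first) and short (second)
    groups by comparing each projection height with the threshold."""
    first = [i for i, h in enumerate(heights) if h > proj_threshold]
    second = [i for i, h in enumerate(heights) if h <= proj_threshold]
    return first, second
```

The threshold value itself comes from the calibration procedure described below; the 0.15 m used in any example call would be an arbitrary illustration.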
Furthermore, in order to ensure that the preset projection threshold enables the camera device to image various obstacles accurately and comprehensively, the projection threshold needs to be adjusted multiple times, and the adjustment is performed before the comparison. Specifically, before the step of dividing each obstacle into a first obstacle and a second obstacle according to the projection heights, the method includes:
step a1, after a set value in the preset direction is received, acquiring a first captured image corresponding to the set value based on the camera device, and judging whether the first captured image is valid;
step a2, if the first captured image is valid, adjusting the set value, and acquiring a second captured image corresponding to the adjusted set value based on the camera device;
step a3, when the second captured image is valid, updating the adjusted set value to a set value to be verified according to preset interference factors;
step a4, verifying the set value to be verified, and determining it as the projection threshold after the verification succeeds.
Specifically, after the stereo camera is mounted on the vehicle body, the vehicle is driven onto flat ground and setting of the projection threshold begins. The mounting height of the stereo camera is detected, the set value of the vehicle in the preset direction is initialized to 0, and the error range of the set value is within plus or minus 5% of the mounting height. After receiving the set value, the stereo camera photographs the ground in front of the vehicle (the photographed range varies with the set value), and the captured image is taken as the first captured image corresponding to the set value. The validity of the first captured image is then judged, validity meaning that the ground ahead is fully presented in the image. If it is fully presented, the first captured image is judged valid, and the set value is adjusted to a value greater than 0. If the first captured image is judged invalid, the mounting height, mounting angle and field angle of the stereo camera are adjusted until the ground ahead is fully presented in the image.
Further, after the set value is set to a value greater than 0, the stereo camera again photographs the ground in front of the vehicle to obtain a second captured image corresponding to the adjusted set value, and its validity is verified, validity here meaning that the projection of the ground ahead in the preset direction has completely disappeared. If it has completely disappeared, the second captured image is judged valid, the set value is increased to account for preset interference factors such as vehicle-body tilt and shake, and the adjusted set value is updated to the set value to be verified, so that the projection of the ground ahead in the preset direction cannot appear in the captured image. The set value to be verified is then verified by judging whether all obstacles in the ground ahead that are taller than the set value appear in the field of view of the stereo camera. If they all appear, the verification is judged successful, and the set value to be verified is determined as the projection threshold for distinguishing tall obstacles from short ones.
Furthermore, after the first obstacle is obtained through division, a first contour formed in the preset direction of the first obstacle is identified, and the first contour formed in the preset direction is an appearance contour of the first obstacle towards the vehicle body direction. Wherein the step of identifying a first contour of the first obstacle in the preset direction comprises:
Step S23, reading a projection view of the first obstacle in the preset direction, and sequentially performing mean filtering, edge extraction, contour search and broken-line fitting processing on the projection view to obtain the first contour.
A projection view of the first obstacle in the preset direction is read; this projection view embodies the contour curve of the high obstacle. To extract the contour curve, mean filtering is first applied to the projection view so as to remove stray points produced when non-obstacles on the ground are imaged by the stereo camera. Edge extraction is then performed on the projection view to obtain the edge pixels formed by the high obstacle, and a contour search is carried out on those edge pixels to obtain the contour points of the high obstacle. By first extracting the edge pixels of the high obstacle in the projection view and then searching for contour points on that basis, the accuracy of the obtained contour points is ensured. Broken-line fitting is then applied to the contour points to generate the first contour of the first obstacle in the preset direction. Referring to fig. 6, reference numeral 1 is the origin of the stereo camera, 1.3 is the mounting height of the stereo camera, 2.1 is the projection height of the first obstacle on the Z axis, 2.2 is the projection height of the second obstacle on the Z axis, 4 is the projection threshold, 5.1 is the projection profile of the second obstacle on the Z axis, and 5.2 is the first contour of the first obstacle on the Z axis.
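The four-stage pipeline of step S23 (mean filtering, edge extraction, contour search, broken-line fitting) can be sketched with plain numpy; the kernel size, gradient threshold and fitting step below are illustrative assumptions, and a production version would more likely use an image library's equivalents.

```python
import numpy as np

def mean_filter(img, k=3):
    """Mean filtering: average each pixel over a k x k neighborhood
    to suppress stray points from non-obstacle imaging."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def edge_points(img, thresh=0.2):
    """Edge extraction: keep pixels whose gradient magnitude exceeds thresh."""
    gy, gx = np.gradient(img)
    ys, xs = np.nonzero(np.hypot(gx, gy) > thresh)
    return list(zip(xs.tolist(), ys.tolist()))

def order_contour(points):
    """Contour search: order edge points by angle about their centroid."""
    if not points:
        return []
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return sorted(points, key=lambda p: np.arctan2(p[1] - cy, p[0] - cx))

def fit_polyline(points, step=4):
    """Broken-line fitting: keep every step-th ordered point as a vertex."""
    return points[::step]
```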
Step S30, performing texture edge detection on the second obstacle to generate a second contour of the second obstacle;
Further, considering that the imaging of a short obstacle differs markedly from the ground texture, texture edge detection is performed on the second obstacle to generate its second contour: positions whose frequency features differ from the ground distribution are found through a local Fourier transform, and these positions constitute the imaged contour of the short ground obstacle, yielding the second contour of the second obstacle.
And step S40, determining a target obstacle closest to the position of the vehicle body according to the first contour and the second contour.
Further, the distance between different obstacles and the vehicle body is different, and the closer the distance is, the more the collision risk is likely to occur. The first contour represents the distance between a high ground obstacle and the vehicle body, and the second contour represents the distance between a low ground obstacle and the vehicle body; in order to determine the obstacle which is most likely to collide with the vehicle body, the first outline and the second outline can be respectively converted into polar coordinates relative to the origin of coordinates, and the target obstacle which is closest to the position of the vehicle body can be determined through the polar coordinates of the first outline and the second outline; and further, whether obstacle avoidance processing is needed or not is determined according to the distance between the target obstacle and the vehicle body, and safety of the vehicle in the automatic driving process is guaranteed. Referring to fig. 6, in fig. 6, reference numeral 1 is a coordinate origin of the stereo camera, reference numeral 5.1 is a projection profile of the second obstacle in the Z axis, reference numeral 5.2 is a first profile of the first obstacle in the Z axis, reference numeral 5.3 is a second profile of the second obstacle, reference numeral 5.4 is a projection profile of the first obstacle in the XY plane, reference numeral 6.1 is a polar coordinate mode contour of the second profile, and reference numeral 6.2 is a polar coordinate mode contour of the first profile.
In this embodiment, when an obstacle is detected by the camera device arranged on the vehicle body, projection in the preset direction is first performed on each obstacle, and the obstacles are divided into a first obstacle and a second obstacle according to the height each projection height represents; a first contour in the preset direction is then identified for the first obstacle, and texture edge detection is performed on the second obstacle to obtain its second contour; the target obstacle closest to the vehicle body is then determined based on the positions of the first contour and the second contour. Because the recognized first contour depends little on the angle of the camera device, shaking of the vehicle body does not interfere with the image processing, ensuring recognition accuracy; and because the second contour is generated by texture edge detection, short obstacles can be detected without reducing ground precision, so that various obstacles in the transport environment are comprehensively identified and environmental compatibility is strong.
Further, a second embodiment of the ground obstacle detection method of the present invention is proposed.
Referring to fig. 3, fig. 3 is a flowchart illustrating a ground obstacle detection method according to a second embodiment of the present invention.
The second embodiment of the ground obstacle detection method is different from the first embodiment of the ground obstacle detection method in that the step of determining the target obstacle closest to the position of the vehicle body according to the first contour and the second contour includes:
step 41, converting the first contour into a first polar coordinate, converting the second contour into a second polar coordinate, and determining whether the first polar coordinate and the second polar coordinate are within a preset angle range;
further, in the present embodiment, in the process of converting the first contour and the second contour into the polar coordinates with respect to the origin of coordinates, respectively, the first contour is converted into the first polar coordinates, and the second contour is converted into the second polar coordinates. Because the first contour and the second contour are both composed of a plurality of pixel points, the first polar coordinate and the second polar coordinate obtained by conversion are a polar coordinate set which is substantially composed of polar coordinate values of the plurality of pixel points. Wherein the step of converting the first contour into a first polar coordinate comprises:
step 411, reading the installation height, the installation angle, the vertical view field angle, the horizontal view field angle, the effective pixel line number and the effective pixel column number of the camera device;
step 412, reading all the pixel points in the first contour as measurement points, and executing the following steps for each measurement point one by one:
step 413, detecting a depth value between the measuring point and the camera device, and the number of pixel rows and the number of pixel columns of the measuring point;
step 414, determining a polar coordinate module value of the measuring point according to the installation angle, the vertical view field angle, the number of pixel lines, the number of effective pixel lines and the depth value;
step 415, determining the polar coordinate angle of the measuring point according to the horizontal view field angle, the installation height, the installation angle, the vertical view field angle, the number of the located pixel columns, the number of the effective pixel columns, the number of the located pixel rows and the number of the effective pixel rows;
step 416, setting the polar coordinate module value and the polar coordinate angle as the polar coordinates of the measurement points, and after generating polar coordinates at each of the measurement points, forming each of the polar coordinates as a first polar coordinate.
Further, the mounting parameters of the stereo camera are read, comprising the mounting height H, mounting angle θ, vertical field angle ωz, horizontal field angle ωh, number of effective pixel rows L and number of effective pixel columns C; the number of effective pixel rows is the maximum imaging pixel value of the stereo camera in the Y-axis direction, and the number of effective pixel columns is the maximum imaging pixel value in the X-axis direction. Each pixel point contained in the first contour is then read as a measuring point, and the measuring points are processed one by one. During processing, the depth value D between the measuring point and the camera device, the number n of the pixel row where the measuring point is located and the number m of the pixel column where it is located are first detected; the mounting angle θ, vertical field angle ωz, pixel row number n and effective pixel row number L are then substituted into formula (1), and the deflection angle α of the row where the pixel is located is obtained through formula (1), wherein formula (1) is:
α=θ-(ωz/2)+(ωz*n/L) (1);
After the deflection angle α of the row where the pixel is located is calculated by formula (1), the deflection angle α and the depth value D are substituted into formula (2), and the polar coordinate modulus r of the measuring point is calculated by formula (2), wherein formula (2) is:
r=D*Cos(α) (2).
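Formulas (1) and (2) can be checked with a short sketch (angles in radians; the function and parameter names are illustrative):

```python
import math

def polar_modulus(depth, theta, omega_z, n, L):
    """Formula (1): deflection angle of the pixel row,
    alpha = theta - omega_z/2 + omega_z * n / L;
    formula (2): polar coordinate modulus r = D * cos(alpha)."""
    alpha = theta - omega_z / 2 + omega_z * n / L
    return depth * math.cos(alpha)
```

A measuring point on the middle row (n = L/2) has alpha = theta, so its modulus is simply D*cos(theta).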
Further, the absolute value coordinates (|Xmax|, |Ymax|) of the farthest projection point and (|Xmin|, |Ymin|) of the nearest projection point of the stereo camera imaging are calculated. Specifically, the horizontal field angle ωh, mounting height H, mounting angle θ and vertical field angle ωz are substituted into formula (3), and the |Xmax| value of the farthest projection point's absolute value coordinates is obtained through formula (3); the mounting height H, mounting angle θ and vertical field angle ωz are substituted into formula (4), and the |Ymax| value is obtained through formula (4). Likewise, the horizontal field angle ωh, mounting height H, mounting angle θ and vertical field angle ωz are substituted into formula (5), and the |Xmin| value of the nearest projection point's absolute value coordinates is obtained through formula (5); the mounting height H, mounting angle θ and vertical field angle ωz are substituted into formula (6), and the |Ymin| value is obtained through formula (6). Formulas (3), (4), (5) and (6) are respectively:
|Xmax|=Tan(0.5*ωh)*H/Cos(θ-0.5*ωz) (3);
|Ymax|=H/Tan(θ-0.5*ωz) (4);
|Xmin|=Tan(0.5*ωh)*H/Cos(θ+0.5*ωz) (5);
|Ymin|=H/Tan(θ+0.5*ωz) (6).
Further, the absolute value coordinates (|Xc|, |Yc|) of the measuring point are calculated: the pixel column number m, the effective pixel column number C, |Xmax| and |Xmin| are substituted into formula (7), and the |Xc| value of the measuring point's coordinate absolute values is obtained through formula (7); the pixel row number n, the effective pixel row number L, |Ymax| and |Ymin| are substituted into formula (8), and the |Yc| value is obtained through formula (8). Formulas (7) and (8) are respectively:
|Xc|=m/C*(|Xmax|-|Xmin|)+|Xmin| (7);
|Yc|=n/L*(|Ymax|-|Ymin|)+|Ymin| (8).
Thereafter, the coordinate absolute values of the measuring point are substituted into formula (9), and the polar coordinate angle φ of the measuring point is obtained through formula (9), wherein formula (9) is:
φ=Tan⁻¹(|Yc|/|Xc|) (9).
understandably, the polar coordinate module value and the polar coordinate angle of the measuring point calculated by the above formulas (1) to (9) form the polar coordinate of the measuring point; after the polar coordinates of each measuring point are obtained through the calculation of the formulas (1) to (9) at each measuring point, the first polar coordinates are formed by each polar coordinate, and the first contour is completely converted into the first polar coordinates. It should be noted that the process of converting the second contour into the second polar coordinate is the same as the process of converting the first contour into the first polar coordinate, and details are not described herein.
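Formulas (3) through (9) together give the polar coordinate angle; a direct transcription (angles in radians, names illustrative):

```python
import math

def polar_angle(omega_h, H, theta, omega_z, m, C, n, L):
    """Polar coordinate angle of a measuring point at pixel column m, row n."""
    # Formulas (3)-(6): extents of the imaging footprint on the ground.
    x_max = math.tan(0.5 * omega_h) * H / math.cos(theta - 0.5 * omega_z)
    y_max = H / math.tan(theta - 0.5 * omega_z)
    x_min = math.tan(0.5 * omega_h) * H / math.cos(theta + 0.5 * omega_z)
    y_min = H / math.tan(theta + 0.5 * omega_z)
    # Formulas (7)-(8): absolute value coordinates of the measuring point.
    x_c = m / C * (x_max - x_min) + x_min
    y_c = n / L * (y_max - y_min) + y_min
    # Formula (9): the polar coordinate angle.
    return math.atan(y_c / x_c)
```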
Step 42, if the position is within the preset angle range, combining the first polar coordinate and the second polar coordinate to generate a polar coordinate set;
and 43, determining a target obstacle closest to the position of the vehicle body according to the polar coordinate set.
Considering that obstacles encountered while the vehicle is driving are uncertain, several obstacles may lie in close proximity, and obstacles in close proximity may be merged. Specifically, a preset angle range representing positional proximity, for example 5 degrees, is set in advance; whether the first polar coordinate and the second polar coordinate both lie within this preset angle range is judged, and if so, the first polar coordinate and the second polar coordinate are merged and imaged on the same layer to form a polar coordinate set. This set contains all polar coordinate points of the first polar coordinate and all polar coordinate points of the second polar coordinate; the impurity points in the set need to be denoised, and the denoised effective points are merged to determine the target obstacle closest to the vehicle body. Specifically, the step of determining the target obstacle closest to the position of the vehicle body according to the polar coordinate set comprises the following steps:
step 431, sequentially carrying out median filtering, denoising and mean filtering on each element in the polar coordinate set to generate a processing result;
step 432, merging the elements in the processing result to generate a target element, and calculating a first angle and a first distance between the target element and the camera device;
and 433, determining a target obstacle closest to the position of the vehicle body according to the first angle and the first distance.
Further, each polar coordinate point in the polar coordinate set forms an element of the set. Median filtering is applied to the elements to remove salt-and-pepper noise; then a minimum distance from the origin of coordinates is set, and elements whose distance to the origin exceeds that minimum are removed; mean filtering is then applied to the remaining elements to generate the processing result. The elements in the processing result are then merged: all the polar coordinate points serving as elements are combined into one polar coordinate point, and this merged point is the target element. Thereafter, the first angle and first distance of the target element relative to the origin of coordinates are calculated; the first distance and first angle characterize the relative distance between the vehicle body and the merged obstacles. The target obstacle closest to the vehicle body is then determined from the relative distances between each group of merged obstacles and the vehicle body and between each unmerged obstacle and the vehicle body.
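Steps 431 to 433 can be sketched under simplifying assumptions: median filtering is applied over a window of 3 along the angle-sorted moduli, the denoising cutoff is taken as the smallest filtered modulus plus a tolerance, and merging is a plain mean. The window size, tolerance and mean-merge are our illustrative choices, not the patent's stated parameters.

```python
from statistics import mean, median

def merge_to_target(points, tolerance=0.5):
    """points: (modulus, angle) pairs of the polar coordinate set.
    Returns the merged target element (modulus, angle), or None."""
    if not points:
        return None
    pts = sorted(points, key=lambda p: p[1])            # order by angle
    rs = [r for r, _ in pts]
    # Median filtering (window 3) removes salt-and-pepper noise points.
    filtered = [median(rs[max(0, i - 1):i + 2]) for i in range(len(rs))]
    # Denoising: drop elements far beyond the minimum origin distance.
    r_min = min(filtered)
    kept = [(r, a) for r, (_, a) in zip(filtered, pts) if r <= r_min + tolerance]
    # Mean filtering and merging into a single polar coordinate point.
    return (mean(r for r, _ in kept), mean(a for _, a in kept))
```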
Furthermore, when the positions of the plurality of obstacles are far apart, each obstacle needs to be individually processed; specifically, the step of determining whether the first polar coordinate and the second polar coordinate are located within a preset angle range includes:
step 44, if the first polar coordinate and the second polar coordinate are not within a preset angle range, determining a second angle and a second distance between the first polar coordinate and the image pickup device, and a third angle and a third distance between the second polar coordinate and the image pickup device;
and 45, comparing the element group formed between the second angle and the second distance with the element group formed between the third angle and the third distance to generate a comparison result, and determining the target obstacle closest to the position of the vehicle body according to the comparison result.
When the first polar coordinate and the second polar coordinate are determined not to be within the preset angle range through judgment and need to be processed aiming at a single obstacle, median filtering, denoising and mean filtering are respectively carried out on each element in the first polar coordinate and each element in the second polar coordinate, processed elements in the first polar coordinate are combined to generate a first element point, and processed elements in the second polar coordinate are combined to generate a second element point. Thereafter, an angle and a distance between the first element point and the origin of coordinates, i.e., a second angle and a second distance between the first polar coordinates and the image pickup device, are calculated, and an angle and a distance between the second element point and the origin of coordinates, i.e., a third angle and a third distance between the second polar coordinates and the image pickup device, are calculated.
Further, the second angle and second distance form one element group, and the third angle and third distance form another; the two element groups are compared to generate a comparison result characterizing the respective distances of the first polar coordinate and the second polar coordinate from the vehicle body, and the target obstacle closest to the vehicle body is determined according to the comparison result.
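The proximity test of step 41 and the separate handling of steps 44 and 45 can be sketched together. Reading "both within a preset angle range" as the spread of all polar angles fitting inside that range is our interpretation, with the 5 degree figure taken from the example above; the mean-merge into element groups is likewise an illustrative simplification.

```python
import math
from statistics import mean

def within_angle_range(first_polar, second_polar, range_deg=5.0):
    """first_polar / second_polar: lists of (modulus, angle) points.
    True when every angle of both sets fits inside the preset range,
    meaning the two contours are close enough to merge."""
    angles = [a for _, a in first_polar] + [a for _, a in second_polar]
    return max(angles) - min(angles) <= math.radians(range_deg)

def element_group(polar_points):
    """(angle, distance) element group of one filtered contour."""
    return (mean(a for _, a in polar_points), mean(r for r, _ in polar_points))

def closest_obstacle(first_polar, second_polar):
    """Steps 44-45: when the contours are not close enough to merge,
    form an element group per contour and keep the nearer one."""
    g1, g2 = element_group(first_polar), element_group(second_polar)
    return g1 if g1[1] <= g2[1] else g2
```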
In this embodiment, obstacles close in position are merged and processed together, while obstacles far apart are processed individually; this avoids the increased computation of processing every obstacle separately, which would contribute little to obstacle-avoidance resolution, and improves detection efficiency while keeping obstacle detection comprehensive. In addition, converting the contours into polar coordinates and then filtering, denoising and computing improves the accuracy of the calculated distance between each obstacle and the vehicle body, making target obstacle detection more accurate.
Further, a third embodiment of the ground obstacle detection method of the present invention is provided.
Referring to fig. 4, fig. 4 is a flowchart illustrating a ground obstacle detection method according to a third embodiment of the present invention.
The third embodiment of the ground obstacle detection method differs from the first or second embodiment of the ground obstacle detection method in that the step of projecting each obstacle in a preset direction and generating the projection height of each obstacle includes:
step S11, reading the installation height, installation angle, view field angle, and effective pixel line number of the imaging device, and performing the following steps for each point to be measured, with each obstacle as the point to be measured:
step S12, detecting a depth measurement value between the image capture device and the point to be measured, and a number of pixel lines where the point to be measured is located;
step S13, determining a deflection angle of a line where a pixel corresponding to the point to be measured is located according to the installation angle, the view field angle, the number of effective pixel lines and the number of pixel lines where the point is located;
step S14, determining a projection intermediate value of the obstacle in a preset direction according to the measured depth value and the deflection angle of the row where the pixel is located;
step S15 is to generate a projection height of each obstacle according to the installation height and the projection median.
This embodiment projects each obstacle in the preset direction to obtain its projection height, and divides the obstacles into the first obstacle and the second obstacle by these projection heights. The mounting height H, mounting angle θ, field angle ω and number of effective pixel rows L of the stereo camera are read; each obstacle is then taken as a point to be measured, and the points to be measured are processed one by one. During processing, the measured depth value D' between the point to be measured and the camera device and the number n' of the pixel row where the point is located are detected; the mounting angle θ, field angle ω, pixel row number n' and effective pixel row number L are then substituted into formula (10), and the deflection angle α' of the pixel row corresponding to the point to be measured is obtained through formula (10), wherein formula (10) is:
α'=θ-(ω/2)+(ω*n'/L) (10);
After the deflection angle α' of the row where the pixel corresponding to the point to be measured is located is calculated through formula (10), the deflection angle α' and the measured depth value D' are substituted into formula (11), and the projection intermediate value hc of the obstacle in the preset direction is calculated through formula (11), wherein formula (11) is:
hc=D'*Sin(α') (11);
Thereafter, the difference between the mounting height H and the projection intermediate value hc is taken, and the result is the projection height hz of the obstacle, i.e. hz=H-hc. Each point to be measured is processed and calculated in this way, each calculation result being the projection height of an obstacle, and the obstacles are divided into the first obstacle and the second obstacle according to these projection heights.
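Formulas (10) and (11) and the final difference combine into one sketch (angles in radians; names illustrative):

```python
import math

def projection_height(H, theta, omega, n_row, L, depth):
    """Formula (10): alpha' = theta - omega/2 + omega * n'/L;
    formula (11): h_c = D' * sin(alpha');
    projection height: h_z = H - h_c."""
    alpha = theta - omega / 2 + omega * n_row / L
    h_c = depth * math.sin(alpha)
    return H - h_c
```

A point on flat ground satisfies D'*sin(alpha') = H, so its projection height is 0; anything measured at a shallower depth along the same ray projects above the ground.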
The embodiment represents the height of each obstacle by calculating the projection height of each obstacle in the preset direction, and then divides each obstacle into a first obstacle and a second obstacle, so that different modes are adopted for acquiring the outline of the obstacle aiming at different types of obstacles, and the obstacle detection is more comprehensive and accurate.
Further, a fourth embodiment of the ground obstacle detection method of the present invention is provided.
The fourth embodiment of the ground obstacle detection method differs from the first, second or third embodiment of the ground obstacle detection method in that the step of performing texture edge detection on the second obstacle to generate a second contour of the second obstacle comprises:
step S31, acquiring an imaging image corresponding to the second obstacle based on the camera device, and calling a first preset algorithm to level up the imaging image;
step S32, calling a second preset algorithm, extracting an initial contour of the image subjected to the leveling processing, and determining a second contour of the second obstacle according to the initial contour.
This embodiment performs texture edge detection on the second obstacle to obtain its second contour. A first preset algorithm and a second preset algorithm are set in advance: the first preset algorithm is preferably a data filling method, the second preset algorithm is a local Fourier transform, and the imaging image of the second obstacle is processed by both. Specifically, the second obstacle is photographed by the stereo camera to obtain an imaging image. Owing to factors such as the stereo camera itself and ambient light, certain pixel points in the imaging image may carry no data; such pixels cause the second preset algorithm to fail and must first be handled by the first preset algorithm. The data filling method serving as the first preset algorithm comprises dilation and median filtering: the dilation algorithm expands pixel neighborhoods in the imaging image to fill and level the hole data, while median filtering removes impurities from the image. The second preset algorithm is then called to process the filled and leveled imaging image: a 16 x 16 pixel identification area is taken as the minimum identification unit, and the imaging image is segmented to extract the target pixel identification areas where its contour edges lie. A Fourier transform is applied to each target pixel identification area through formulas (12) and (13), the complex result is converted into an amplitude through formula (14), and the amplitude is switched to a logarithmic scale through formula (15) for logarithmic scaling of the image. Formulas (12), (13), (14) and (15) are respectively:
F(k,l)=∑(i=0..N-1)∑(j=0..N-1)f(i,j)*e^(-i2π(ki/N+lj/N)) (12);
e^(ix)=cos x+i·sin x (13);
M=√(Re(F(k,l))²+Im(F(k,l))²) (14);
M1=log(1+M) (15).
wherein F(k,l) is the value of the target pixel identification area of the imaging image in the frequency domain, f(i,j) is its value in the spatial domain, N is the side length of the identification area (here 16), and M is the amplitude of F(k,l); formulas (12) to (15) are the standard forms commonly used to implement the Fourier transform and are not detailed here.
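Applied to one identification block, formulas (12) through (15) amount to a log-scaled magnitude spectrum; numpy's `fft2` implements the DFT of formula (12) directly, so a sketch is:

```python
import numpy as np

def log_magnitude_spectrum(block):
    """block: one 16 x 16 identification area in the spatial domain.
    Returns M1 = log(1 + M), the log-scaled amplitude of its 2-D DFT."""
    F = np.fft.fft2(block)   # formulas (12)-(13): the 2-D Fourier transform
    M = np.abs(F)            # formula (14): complex values to amplitudes
    return np.log1p(M)       # formula (15): M1 = log(1 + M)
```

A constant (featureless ground) block concentrates all its energy in the DC bin, while edge-bearing blocks spread energy into higher frequencies, which is what the subsequent classification keys on; `np.fft.fftshift` performs the quadrant rearrangement that centers the origin.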
Further, the amplitude image is cropped and redistributed, and the quadrants of the result are rearranged so that the origin of coordinates corresponds to the center of the image; normalization is then performed, and values exceeding the display range are processed to ease classification and judgment; the identification areas containing edges are then classified and the contour regenerated, thereby extracting the initial contour of the second obstacle from the ground texture.
Furthermore, color-block dilation processing is applied to the initially extracted contour, edges are extracted from the original imaging image according to the processing result, and the contour is then accurately located within the extracted edges, yielding the second contour of the second obstacle.
In the embodiment, for the short second obstacle, the contour of the short second obstacle is extracted according to the characteristic of the obvious difference between the short second obstacle and the ground texture, and then the short second obstacle is identified and detected. The method and the device aim at processing the high obstacle and the short obstacle in different modes, realize the detection of the short obstacle while identifying the high obstacle, and ensure the comprehensiveness and the accuracy of the detection of various obstacles in the driving environment of the vehicle body.
Furthermore, an embodiment of the present invention further provides a computer-readable storage medium, on which a ground obstacle detection program is stored, which, when executed by a processor, implements the steps of the ground obstacle detection method as described above.
The specific implementation manner of the computer-readable storage medium of the present invention is substantially the same as that of the above-mentioned embodiments of the ground obstacle detection method, and is not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (11)

1. A ground obstacle detection method, characterized by comprising the steps of:
when an obstacle is detected based on a camera device installed on a vehicle body, performing projection in a preset direction on each obstacle to generate a projection height of each obstacle;
dividing each obstacle into a first obstacle and a second obstacle according to each projection height, and identifying a first contour of the first obstacle in a preset direction;
performing texture edge detection on the second obstacle to generate a second contour of the second obstacle;
and determining a target obstacle closest to the position of the vehicle body according to the first contour and the second contour.
2. The ground obstacle detection method according to claim 1, wherein the step of determining a target obstacle closest to the position where the vehicle body is located, based on the first profile and the second profile, includes:
converting the first contour into a first polar coordinate, converting the second contour into a second polar coordinate, and determining whether the first polar coordinate and the second polar coordinate are within a preset angle range;
if the first polar coordinate and the second polar coordinate are within the preset angle range, combining the first polar coordinate and the second polar coordinate to generate a polar coordinate set;
and determining the target obstacle closest to the position of the vehicle body according to the polar coordinate set.
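The merge-and-select step of claim 2 can be sketched as follows. This is illustrative only: the patent does not publish code, and the point representation, function name, and default angle range are assumptions; the claim-4 filtering of the merged set is omitted here for brevity.

```python
# Each contour point is (modulus_m, angle_deg) in polar coordinates
# relative to the camera device.
def merge_and_find_nearest(first_polar, second_polar, angle_range=(-60.0, 60.0)):
    """If both contours fall inside the preset angle range, merge them into
    one polar coordinate set and return the point with the smallest modulus,
    i.e. the obstacle point nearest the vehicle body."""
    lo, hi = angle_range
    def in_range(points):
        return all(lo <= angle <= hi for _, angle in points)
    if in_range(first_polar) and in_range(second_polar):
        merged = first_polar + second_polar      # the "polar coordinate set"
        return min(merged, key=lambda p: p[0])   # nearest point by modulus
    return None  # out-of-range case is handled separately (see claim 5)

first = [(2.5, 10.0), (2.4, 12.0)]   # first contour, already in polar form
second = [(1.8, -5.0)]               # second contour
print(merge_and_find_nearest(first, second))  # -> (1.8, -5.0)
```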
3. The ground obstacle detection method according to claim 2, wherein the step of converting the first contour into the first polar coordinate comprises:
reading the installation height, the installation angle, the vertical field-of-view angle, the horizontal field-of-view angle, the number of effective pixel rows, and the number of effective pixel columns of the camera device;
taking all pixels in the first contour as measurement points, and performing the following steps for each measurement point in turn:
detecting the depth value between the measurement point and the camera device, and the pixel row number and pixel column number of the measurement point;
determining a polar coordinate modulus of the measurement point according to the installation angle, the vertical field-of-view angle, the pixel row number, the number of effective pixel rows, and the depth value;
determining a polar coordinate angle of the measurement point according to the horizontal field-of-view angle, the installation height, the installation angle, the vertical field-of-view angle, the pixel column number, the number of effective pixel columns, the pixel row number, and the number of effective pixel rows;
and setting the polar coordinate modulus and the polar coordinate angle as the polar coordinates of the measurement point, and, after the polar coordinates of all measurement points have been generated, combining the polar coordinates into the first polar coordinate.
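The claim lists which camera parameters enter the modulus and angle computations but does not publish the formulas. One plausible pinhole-style conversion, under the assumption that the camera is pitched downward by the installation angle and that pixel offsets map linearly to field-of-view angles (all names and the linear-mapping assumption are illustrative, not from the patent):

```python
import math

def pixel_to_polar(depth_m, row, col, *, install_angle_deg, v_fov_deg,
                   h_fov_deg, n_rows, n_cols):
    """Convert one depth pixel to planar polar coordinates (modulus, angle)
    relative to the camera. Assumes a pinhole model: each pixel row/column
    offset from the image centre maps linearly to a deflection angle."""
    # vertical deflection of this pixel row from the optical axis,
    # plus the downward installation pitch of the camera
    pitch = install_angle_deg + v_fov_deg * (row - n_rows / 2) / n_rows
    # horizontal deflection of this pixel column (the polar angle)
    yaw = h_fov_deg * (col - n_cols / 2) / n_cols
    # project the measured depth onto the ground plane to get the modulus
    ground_range = depth_m * math.cos(math.radians(pitch))
    return ground_range, yaw

# a pixel at the image centre of a 640x480 depth frame, 2 m away
r, a = pixel_to_polar(2.0, 240, 320, install_angle_deg=30.0, v_fov_deg=45.0,
                      h_fov_deg=60.0, n_rows=480, n_cols=640)
```

At the image centre the deflection reduces to the installation angle alone, so the modulus is simply `depth * cos(install_angle)` and the polar angle is zero.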
4. The ground obstacle detection method according to claim 2, wherein the step of determining a target obstacle closest to the position where the vehicle body is located, based on the set of polar coordinates, includes:
sequentially carrying out median filtering, denoising and mean filtering on each element in the polar coordinate set to generate a processing result;
merging all elements in the processing result to generate a target element, and calculating a first angle and a first distance between the target element and the camera device;
and determining a target obstacle closest to the position of the vehicle body according to the first angle and the first distance.
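The filtering chain of claim 4 can be sketched with a sliding window. This is an illustrative reduction: the claim names median filtering, denoising, and mean filtering as separate stages, while the sketch below uses the median stage itself as the denoiser; the window size and function name are assumptions.

```python
import statistics

def nearest_after_filtering(moduli, k=3):
    """Median-filter then mean-filter (window k) the ordered sequence of
    polar moduli, and return the smallest filtered distance, i.e. the
    distance to the nearest obstacle point after spike suppression."""
    def window(seq, i):
        half = k // 2
        return seq[max(0, i - half): i + half + 1]
    # median filtering suppresses isolated depth spikes (denoising)
    med = [statistics.median(window(moduli, i)) for i in range(len(moduli))]
    # mean filtering smooths the remaining sequence
    mean = [sum(window(med, i)) / len(window(med, i)) for i in range(len(med))]
    return min(mean)

# 9.9 is a single-sample depth spike; filtering removes its influence
d = nearest_after_filtering([2.0, 2.1, 9.9, 2.2, 2.3])
```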
5. The ground obstacle detection method according to claim 2, wherein the step of determining whether the first and second polar coordinates are within a preset angle range is followed by:
if the first polar coordinate and the second polar coordinate are not within the preset angle range, determining a second angle and a second distance between the first polar coordinate and the camera device, and a third angle and a third distance between the second polar coordinate and the camera device;
and comparing the element group formed by the second angle and the second distance with the element group formed by the third angle and the third distance to generate a comparison result, and determining the target obstacle closest to the position of the vehicle body according to the comparison result.
6. The ground obstacle detection method according to any one of claims 1 to 5, wherein the step of projecting each obstacle in the preset direction to generate the projection height of each obstacle comprises:
reading the installation height, the installation angle, the field-of-view angle, and the number of effective pixel rows of the camera device, taking each obstacle as a point to be measured, and performing the following steps for each point to be measured in turn:
detecting a measured depth value between the camera device and the point to be measured, and the pixel row number of the point to be measured;
determining a deflection angle of the row containing the pixel corresponding to the point to be measured, according to the installation angle, the field-of-view angle, the number of effective pixel rows, and the pixel row number;
determining a projection intermediate value of the obstacle in the preset direction according to the measured depth value and the deflection angle of the row containing the pixel;
and generating the projection height of each obstacle according to the installation height and the projection intermediate value.
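The three steps of claim 6 can be sketched as follows, assuming (this is an assumption, not stated in the patent) that the "projection intermediate value" is the vertical drop from the camera to the measured point and that the projection height is the mounting height minus that drop:

```python
import math

def projection_height(depth_m, row, *, install_height_m, install_angle_deg,
                      v_fov_deg, n_rows):
    """Sketch of claim 6 for a downward-pitched camera. The pixel row gives
    a deflection angle; depth times its sine is the vertical drop from the
    camera ('projection intermediate value'); subtracting the drop from the
    mounting height gives the point's height above the ground."""
    # deflection angle of the row containing the pixel
    deflect_deg = install_angle_deg + v_fov_deg * (row - n_rows / 2) / n_rows
    # projection intermediate value: vertical drop from camera to the point
    drop_m = depth_m * math.sin(math.radians(deflect_deg))
    # projection height of the point above the ground plane
    return install_height_m - drop_m

# a point seen at the image centre, 1 m away, camera 0.5 m high, pitched 30°
h = projection_height(1.0, 240, install_height_m=0.5, install_angle_deg=30.0,
                      v_fov_deg=45.0, n_rows=480)
```

With these numbers the drop equals the mounting height, so the point lies on the ground plane (height 0); rows higher in the image yield positive heights, which is what the claim-9 threshold split operates on.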
7. The ground obstacle detection method according to any one of claims 1 to 5, wherein the step of identifying the first contour of the first obstacle in the preset direction comprises:
and reading a projection drawing of the first obstacle in the preset direction, and sequentially performing mean filtering, edge extraction, contour searching, and polyline fitting on the projection drawing to obtain the first contour.
8. The ground obstacle detection method of any one of claims 1-5, wherein the step of performing texture edge detection on the second obstacle to generate a second contour of the second obstacle comprises:
acquiring an imaging image corresponding to the second obstacle based on the camera device, and calling a first preset algorithm to perform filling-and-leveling processing on the imaging image;
and calling a second preset algorithm, extracting the initial contour of the imaging image subjected to the filling and leveling processing, and determining a second contour of the second obstacle according to the initial contour.
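The patent does not name the "second preset algorithm". As a toy stand-in, the sketch below marks pixels whose horizontal or vertical intensity gradient exceeds a threshold; in practice a Sobel/Canny stage would be the usual choice for this texture-edge step (the function name and threshold are illustrative).

```python
def texture_edges(img, thresh=30):
    """Toy texture-edge detector: mark interior pixels whose summed
    horizontal + vertical intensity gradient exceeds `thresh`.
    `img` is a 2-D list of grayscale intensities."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # horizontal gradient
            gy = img[y + 1][x] - img[y - 1][x]   # vertical gradient
            if abs(gx) + abs(gy) > thresh:
                edges[y][x] = 1
    return edges

flat = [[10] * 5 for _ in range(5)]                     # no texture
step = [[10, 10, 200, 200, 200] for _ in range(5)]      # vertical step edge
```

A flat (textureless) region produces no edge pixels, while the intensity step is marked along its boundary; the initial contour of the second obstacle would then be traced through those marked pixels.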
9. The ground obstacle detection method according to any one of claims 1 to 5, wherein the step of dividing each of the obstacles into a first obstacle and a second obstacle according to each of the projected heights comprises:
comparing the projection height of each obstacle with a preset projection threshold one by one, and determining target projection heights, among the projection heights, that are greater than the projection threshold;
dividing each obstacle into the first obstacle and the second obstacle according to the target projection height.
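The split of claim 9 reduces to a threshold comparison. A minimal sketch, with illustrative obstacle labels, heights in metres, and a hypothetical threshold value:

```python
def split_obstacles(proj_heights, threshold=0.05):
    """Split obstacles by projected height: those above the preset threshold
    become 'first' (tall) obstacles, later handled via the height-projection
    contour; the rest become 'second' (near-flat) obstacles, later handled
    via texture-edge detection."""
    first = {k: v for k, v in proj_heights.items() if v > threshold}
    second = {k: v for k, v in proj_heights.items() if v <= threshold}
    return first, second

first, second = split_obstacles({"box": 0.30, "cable": 0.01, "kerb": 0.12})
```

The point of the split is that a near-flat obstacle such as a cable barely registers in the height projection, so it must be recovered from image texture instead.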
10. A ground obstacle detection apparatus, characterized in that the ground obstacle detection apparatus comprises a memory, a processor, and a ground obstacle detection program stored on the memory and executable on the processor, wherein the ground obstacle detection program, when executed by the processor, implements the steps of the ground obstacle detection method according to any one of claims 1 to 9.
11. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a ground obstacle detection program which, when executed by a processor, implements the steps of the ground obstacle detection method according to any one of claims 1 to 9.
CN201911103707.4A 2019-11-12 2019-11-12 Ground obstacle detection method, ground obstacle detection device, and computer-readable storage medium Active CN110826512B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911103707.4A CN110826512B (en) 2019-11-12 2019-11-12 Ground obstacle detection method, ground obstacle detection device, and computer-readable storage medium
PCT/CN2020/112132 WO2021093418A1 (en) 2019-11-12 2020-08-28 Ground obstacle detection method and device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911103707.4A CN110826512B (en) 2019-11-12 2019-11-12 Ground obstacle detection method, ground obstacle detection device, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN110826512A true CN110826512A (en) 2020-02-21
CN110826512B CN110826512B (en) 2022-03-08

Family

ID=69554384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911103707.4A Active CN110826512B (en) 2019-11-12 2019-11-12 Ground obstacle detection method, ground obstacle detection device, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN110826512B (en)
WO (1) WO2021093418A1 (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115861975B (en) * 2023-02-28 2023-05-12 杭州枕石智能科技有限公司 Obstacle vehicle pose estimation method and equipment
CN116147567B (en) * 2023-04-20 2023-07-21 高唐县空间勘察规划有限公司 Homeland mapping method based on multi-metadata fusion
CN116704473B (en) * 2023-05-24 2024-03-08 禾多科技(北京)有限公司 Obstacle information detection method, obstacle information detection device, electronic device, and computer-readable medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104375505A (en) * 2014-10-08 2015-02-25 北京联合大学 Robot automatic road finding method based on laser ranging
WO2016047890A1 (en) * 2014-09-26 2016-03-31 숭실대학교산학협력단 Walking assistance method and system, and recording medium for performing same
CN105989601A (en) * 2015-12-30 2016-10-05 安徽农业大学 Machine vision-based method for extracting inter-corn-row navigation reference line of agricultural AGV (Automated Guided Vehicle)
DE102015212581A1 (en) * 2015-07-06 2017-01-12 Robert Bosch Gmbh Driver assistance and driver assistance system
CN108230392A (en) * 2018-01-23 2018-06-29 北京易智能科技有限公司 A kind of dysopia analyte detection false-alarm elimination method based on IMU
CN109308448A (en) * 2018-07-29 2019-02-05 国网上海市电力公司 A method of it prevents from becoming distribution maloperation using image processing techniques
CN109828267A (en) * 2019-02-25 2019-05-31 国电南瑞科技股份有限公司 The Intelligent Mobile Robot detection of obstacles and distance measuring method of Case-based Reasoning segmentation and depth camera
CN110208819A (en) * 2019-05-14 2019-09-06 江苏大学 A kind of processing method of multiple barrier three-dimensional laser radar data

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6687577B2 (en) * 2001-12-19 2004-02-03 Ford Global Technologies, Llc Simple classification scheme for vehicle/pole/pedestrian detection
DE102005015088B4 (en) * 2004-04-02 2015-06-18 Denso Corporation Vehicle environment monitoring system
CN102508246B (en) * 2011-10-13 2013-04-17 吉林大学 Method for detecting and tracking obstacles in front of vehicle
CN104182756B (en) * 2014-09-05 2017-04-12 大连理工大学 Method for detecting barriers in front of vehicles on basis of monocular vision
CN108153301B (en) * 2017-12-07 2021-02-09 深圳市杰思谷科技有限公司 Intelligent obstacle avoidance system based on polar coordinates
CN108647646B (en) * 2018-05-11 2019-12-13 北京理工大学 Low-beam radar-based short obstacle optimized detection method and device
CN108805906A (en) * 2018-05-25 2018-11-13 哈尔滨工业大学 A kind of moving obstacle detection and localization method based on depth map
CN109410234A (en) * 2018-10-12 2019-03-01 南京理工大学 A kind of control method and control system based on binocular vision avoidance
CN110826512B (en) * 2019-11-12 2022-03-08 深圳创维数字技术有限公司 Ground obstacle detection method, ground obstacle detection device, and computer-readable storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YU Shiyou: "Development of a Mobile Robot and Research on Particle-Filter-Based Simultaneous Localization and Mapping", China Masters' Theses Full-text Database (Basic Sciences) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021093418A1 (en) * 2019-11-12 2021-05-20 深圳创维数字技术有限公司 Ground obstacle detection method and device, and computer-readable storage medium
CN112364693A (en) * 2020-10-12 2021-02-12 星火科技技术(深圳)有限责任公司 Barrier identification method, device and equipment based on binocular vision and storage medium
CN112364693B (en) * 2020-10-12 2024-04-16 星火科技技术(深圳)有限责任公司 Binocular vision-based obstacle recognition method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN110826512B (en) 2022-03-08
WO2021093418A1 (en) 2021-05-20

Similar Documents

Publication Publication Date Title
CN110826512B (en) Ground obstacle detection method, ground obstacle detection device, and computer-readable storage medium
JP6825569B2 (en) Signal processor, signal processing method, and program
JP3868876B2 (en) Obstacle detection apparatus and method
US8867790B2 (en) Object detection device, object detection method, and program
US10102433B2 (en) Traveling road surface detection apparatus and traveling road surface detection method
JP3367170B2 (en) Obstacle detection device
JP5959073B2 (en) Detection device, detection method, and program
JP2002352225A (en) Obstacle detector and its method
JP2006252473A (en) Obstacle detector, calibration device, calibration method and calibration program
WO2017138245A1 (en) Image processing device, object recognition device, device control system, and image processing method and program
JP6458651B2 (en) Road marking detection device and road marking detection method
CN112598922B (en) Parking space detection method, device, equipment and storage medium
JP5539250B2 (en) Approaching object detection device and approaching object detection method
JP6278790B2 (en) Vehicle position detection device, vehicle position detection method, vehicle position detection computer program, and vehicle position detection system
JP2000207693A (en) Obstacle detector on vehicle
JP4767052B2 (en) Optical axis deviation detector
JP5501084B2 (en) Planar area detection apparatus and stereo camera system
CN114299146A (en) Parking assisting method, device, computer equipment and computer readable storage medium
CN110852278B (en) Ground identification line recognition method, ground identification line recognition equipment and computer-readable storage medium
JP6492603B2 (en) Image processing apparatus, system, image processing method, and program
CN109784315B (en) Tracking detection method, device and system for 3D obstacle and computer storage medium
JP2003208692A (en) Vehicle recognition method and traffic flow measurement device using the method
JPH10283478A (en) Method for extracting feature and and device for recognizing object using the same method
KR102629639B1 (en) Apparatus and method for determining position of dual camera for vehicle
EP3896387B1 (en) Image processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant