WO2021093418A1 - Ground obstacle detection method and device, and computer-readable storage medium - Google Patents

Ground obstacle detection method and device, and computer-readable storage medium

Info

Publication number
WO2021093418A1
WO2021093418A1 (PCT/CN2020/112132, CN2020112132W)
Authority
WO
WIPO (PCT)
Prior art keywords
obstacle
contour
angle
polar coordinate
obstacles
Prior art date
Application number
PCT/CN2020/112132
Other languages
French (fr)
Chinese (zh)
Inventor
赵健章
邹振华
Original Assignee
深圳创维数字技术有限公司
Priority date
Filing date
Publication date
Application filed by 深圳创维数字技术有限公司
Publication of WO2021093418A1 publication Critical patent/WO2021093418A1/en

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Definitions

  • This application relates to the field of intelligent driving technology, and in particular to a ground obstacle detection method, device, and computer-readable storage medium.
  • the main purpose of this application is to provide a ground obstacle detection method, device, and computer-readable storage medium, aiming to solve the technical problems of inaccurate and incomplete ground obstacle detection in the prior art.
  • the present application provides a ground obstacle detection method, the ground obstacle detection method includes the steps:
  • the target obstacle closest to the position of the vehicle body is determined.
  • the present application also provides a ground obstacle detection device;
  • the ground obstacle detection device includes a memory, a processor, and a ground obstacle detection program stored in the memory and capable of running on the processor, and the ground obstacle detection program implements the steps of the above-mentioned ground obstacle detection method when executed by the processor.
  • the present application also provides a computer-readable storage medium having a ground obstacle detection program stored thereon; when the ground obstacle detection program is executed by a processor, the steps of the ground obstacle detection method described above are implemented.
  • when an obstacle is detected by the camera device installed on the vehicle body, this application first projects each obstacle in a preset direction and divides the obstacles into a first obstacle and a second obstacle according to the height of each obstacle represented by its projection height; thereafter, the first contour of the first obstacle in the preset direction is identified, and texture edge detection is performed on the second obstacle to obtain its second contour; then, based on the positions of the first contour and the second contour, the target obstacle closest to the position of the vehicle body is determined.
  • since the recognized first contour has little correlation with the angle at which the camera is installed, vehicle body shaking will not interfere with image processing, ensuring the accuracy of recognition; the second contour is generated based on texture edge detection, so low obstacles can be detected without lowering the ground accuracy, the various obstacles in the transportation environment can be fully identified, and environmental compatibility is strong.
  • FIG. 1 is a schematic structural diagram of a hardware operating environment involved in a solution of an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a first embodiment of a ground obstacle detection method according to the present application
  • FIG. 3 is a schematic flowchart of a second embodiment of a ground obstacle detection method according to the present application.
  • FIG. 4 is a schematic flowchart of a third embodiment of a ground obstacle detection method according to the present application.
  • FIG. 5 is a schematic diagram of generating a first contour in the ground obstacle detection method of the present application.
  • Fig. 6 is a schematic diagram of contour line generation in the polar coordinate mode of the first contour and the second contour in the ground obstacle detection method of the present application.
  • FIG. 1 is a schematic structural diagram of a hardware operating environment involved in a solution of an embodiment of the present application.
  • the ground obstacle detection device may include: a processor 1001, such as a CPU, a user interface 1003, a network interface 1004, a memory 1005, and a communication bus 1002.
  • the communication bus 1002 is used to implement connection and communication between these components.
  • the user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface and a wireless interface.
  • the network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
  • the memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as a magnetic disk memory.
  • the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
  • those skilled in the art can understand that the ground obstacle detection structure shown in FIG. 1 does not constitute a limitation on the ground obstacle detection device, and it may include more or fewer components than shown in the figure, or combine certain components, or use a different arrangement of components.
  • the memory 1005 as a computer storage medium may include an operating system, a network communication module, a user interface module, and a ground obstacle detection program.
  • the operating system is a program that manages and controls ground obstacle detection hardware and software resources, and supports the operation of ground obstacle detection programs and other software or programs.
  • the user interface 1003 is mainly used to connect to the client (user side) and communicate with the client;
  • the network interface 1004 is mainly used to connect to the background server and communicate with the background server;
  • the processor 1001 may be used to call the ground obstacle detection program stored in the memory 1005, and perform the following operations:
  • the target obstacle closest to the position of the vehicle body is determined.
  • the step of determining the target obstacle closest to the position of the vehicle body according to the first contour and the second contour includes:
  • a target obstacle that is closest to the position of the vehicle body is determined.
  • the step of converting the first contour into a first polar coordinate includes:
  • the polar coordinate modulus value and the polar coordinate angle are set as the polar coordinates of the measuring point, and after the polar coordinates are generated for each measuring point, each of the polar coordinates is formed as the first polar coordinate.
  • the step of determining the target obstacle closest to the position of the vehicle body according to the set of polar coordinates includes:
  • the target obstacle closest to the position of the vehicle body is determined.
  • further, after the step of determining whether the first polar coordinate and the second polar coordinate are within a preset angle range, the processor 1001 may be used to call the ground obstacle detection program stored in the memory 1005 and perform the following operations:
  • if the first polar coordinate and the second polar coordinate are not within the preset angle range, a second angle and a second distance between the first polar coordinate and the camera device, and a third angle and a third distance between the second polar coordinate and the camera device, are determined;
  • the element group formed by the second angle and the second distance is compared with the element group formed by the third angle and the third distance to generate a comparison result, and the target obstacle closest to the position of the vehicle body is determined according to the comparison result.
  • the step of projecting each of the obstacles in a preset direction to generate the projection height of each of the obstacles includes:
  • the projection height of each of the obstacles is generated.
  • the step of identifying the first contour of the first obstacle in a preset direction includes:
  • the step of performing texture edge detection on the second obstacle to generate a second contour of the second obstacle includes:
  • the step of dividing each of the obstacles into a first obstacle and a second obstacle according to each of the projection heights includes:
  • each of the obstacles is divided into the first obstacle and the second obstacle.
  • FIG. 2 is a schematic flowchart of a first embodiment of a ground obstacle detection method according to the present application.
  • the embodiments of the application provide an embodiment of a ground obstacle detection method; it should be noted that although a logical sequence is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from the one shown here.
  • the ground obstacle detection method includes:
  • Step S10 when an obstacle is detected based on the camera device installed on the vehicle body, project each of the obstacles in a preset direction to generate the projection height of each of the obstacles;
  • the ground obstacle detection method of this embodiment is applied in the process of intelligent automatic driving to detect obstacles in the driving path and its surroundings, so as to ensure driving safety; intelligent automatic driving can be applied to warehouse freight in a closed environment and also to road transportation in an open environment.
  • in this embodiment, warehouse freight is taken as an example for illustration.
  • a camera device is installed on the body of a vehicle that realizes automatic driving, and the camera device is preferably a stereo camera; while the vehicle is driving, the stereo camera scans the surrounding environment in real time to determine whether there are obstacles in the driving path and its surroundings.
  • this embodiment has a pre-established three-dimensional space coordinate system.
  • the three-dimensional space coordinate system uses the position of the stereo camera as the coordinate origin and the plane where the vehicle is located as the XY plane, and the space above and perpendicular to the XY plane is the space in which the positive direction of the Z axis lies; within the XY plane, the direction directly in front of the vehicle is the Y-axis direction, and the direction perpendicular to the Y axis on the right side of the vehicle is the X-axis direction. Taking the positive direction of the Z axis as the preset direction, once the camera detects obstacles, it images each obstacle to form a projection of each obstacle in the preset direction and detects the projection height of each obstacle.
  • Step S20 dividing each of the obstacles into a first obstacle and a second obstacle according to each of the projection heights, and identifying a first contour of the first obstacle in a preset direction;
  • the projection height of each obstacle characterizes the height of each obstacle.
  • a projection threshold for distinguishing high obstacles from low obstacles is preset; based on the size relationship between each obstacle's projection height and the projection threshold, each obstacle can be divided into a first obstacle or a second obstacle, where the first obstacle is a high obstacle and the second obstacle is a low obstacle.
  • the step of dividing each obstacle into a first obstacle and a second obstacle according to each projection height includes:
  • Step S21 comparing the projection height of each obstacle with the projection threshold one by one, and determining the target projection height of each of the projection heights that is greater than the projection threshold;
  • Step S22 Divide each of the obstacles into the first obstacle and the second obstacle according to the target projection height.
  • the projection height of each obstacle is compared with the projection threshold one by one, and the target projection heights, namely those projection heights greater than the projection threshold, are selected from the projection heights; the obstacles whose projection heights are target projection heights are then determined to be first obstacles, and the other obstacles are determined to be second obstacles.
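As a small illustration of this split, the following sketch partitions obstacles by their projection heights; the data layout (a list of dicts with a precomputed "projection_height" field) is an assumption, not something prescribed by the patent.

```python
# Minimal sketch of the height-based split described above.
def split_by_projection_threshold(obstacles, projection_threshold):
    """Return (first_obstacles, second_obstacles): tall vs. short obstacles."""
    first_obstacles = []   # projection height > threshold  -> "tall"
    second_obstacles = []  # projection height <= threshold -> "short"
    for obstacle in obstacles:
        if obstacle["projection_height"] > projection_threshold:
            first_obstacles.append(obstacle)
        else:
            second_obstacles.append(obstacle)
    return first_obstacles, second_obstacles
```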
  • the projection threshold needs to be adjusted multiple times, and the adjustment is carried out before the comparison; specifically, before the step of dividing the obstacles into the first obstacle and the second obstacle according to the projection heights, the method includes:
  • Step a1 after receiving the set value in the preset direction, acquire a first camera image corresponding to the set value based on the camera device, and determine whether the first camera image is valid;
  • Step a2 if the first captured image is valid, adjust the set value, and acquire a second captured image corresponding to the adjusted set value based on the camera device;
  • Step a3 when the second camera image is valid, update the adjusted setting value to the setting value to be verified according to a preset interference factor
  • Step a4 verifying the setting value to be verified, and determining the setting value to be verified as the projection threshold after the verification of the setting value to be verified is successful.
  • the vehicle is driven to a level ground, and the projection threshold is set and adjusted.
  • the installation height of the stereo camera is detected, and the setting value of the vehicle in the preset direction is set to 0, and the error range of the setting value is within plus or minus 5% of the installation height.
  • the ground in front of the vehicle is photographed by a stereo camera, and the photographing range is different depending on the setting value, and the photographed image is taken as the first photographed image corresponding to the setting value.
  • the validity of the first camera image is then judged; the validity judgment determines whether all of the ground in front is presented in the image. If all of the ground is presented in the image, the first camera image is judged to be valid, and the set value is adjusted to a value greater than 0; if the first captured image is judged to be invalid, the installation height, installation angle, and field of view angle of the stereo camera are adjusted until the ground in front is fully presented in the image.
  • the ground in front of the vehicle is also photographed by a stereo camera to obtain a second photographed image corresponding to the adjusted setting value, and the second photographed image is verified.
  • the validity verification is a process of judging whether the projection of the ground in front in the preset direction has completely disappeared from the image. If it has completely disappeared, the second camera image is determined to be valid; the set value is then increased in combination with preset interference factors such as vehicle body tilt and shaking, and the adjusted set value is updated to the set value to be verified, so as to ensure that the projection of the ground in front in the preset direction will not appear in the captured image.
  • after that, the set value to be verified is verified by determining whether the obstacles on the ground ahead that are higher than the set value are all presented in the field of view of the stereo camera; if they are all presented in the field of view of the stereo camera, the verification is determined to be successful, and the set value to be verified is determined as the projection threshold used to distinguish tall obstacles from short obstacles.
  • the step of identifying the first contour of the first obstacle in the preset direction includes:
  • Step S23 Read a projection image of the first obstacle in a preset direction, and sequentially perform mean filtering, edge extraction, contour search, and polyline fitting processing on the projection image to obtain the first contour.
  • for the first obstacle, its projection map in the preset direction is read, and the projection map reflects the contour curve of the high obstacle.
  • the projection image is first subjected to mean filtering to filter out the impurity points on the ground that are not obstacles in the stereo camera imaging; after that, edge extraction is performed on the projection image to obtain the edge pixels formed by the high obstacle in the projection image, and a contour search is then performed on the edge pixels to obtain the contour points of the high obstacle.
  • finally, polyline fitting is performed on the contour points to generate the first contour of the first obstacle in the preset direction.
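The chain of mean filtering, edge extraction, contour search and polyline fitting maps naturally onto standard OpenCV calls; the sketch below is one possible realization, in which the kernel size, Canny thresholds and approximation tolerance are illustrative assumptions rather than values from the patent.

```python
import cv2
import numpy as np

def first_contour_from_projection(projection_map: np.ndarray):
    """Mean filter -> edge extraction -> contour search -> polyline fitting.

    `projection_map` is assumed to be a single-channel 8-bit image of the
    first (tall) obstacle's projection in the preset direction.
    """
    # Mean filtering to suppress isolated ground impurity points.
    smoothed = cv2.blur(projection_map, (5, 5))
    # Edge extraction (Canny is one common choice; thresholds are assumed).
    edges = cv2.Canny(smoothed, 50, 150)
    # Contour search on the edge pixels.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Polyline fitting: approximate each contour with a coarser polyline.
    return [cv2.approxPolyDP(c, epsilon=2.0, closed=False) for c in contours]
```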
  • in FIG. 5: reference numeral 1 is the coordinate origin of the stereo camera, 1.3 is the installation height of the stereo camera, 2.1 is the projection height of the first obstacle on the Z axis, 2.2 is the projection height of the second obstacle on the Z axis, 4 is the projection threshold, 5.1 is the projection contour of the second obstacle on the Z axis, and 5.2 is the first contour of the first obstacle on the Z axis.
  • Step S30 Perform texture edge detection on the second obstacle to generate a second contour of the second obstacle
  • texture edge detection is performed on the second obstacle to generate its second contour; a local Fourier transform is used to find the positions where the distribution of the frequency features of the ground differs, which are the positions where the low ground obstacles are imaged as contours, so as to obtain the second contour of the second obstacle.
  • Step S40 Determine the target obstacle closest to the position of the vehicle body according to the first contour and the second contour.
  • the first contour reflects the distance between high ground obstacles and the vehicle body, and the second contour reflects the distance between low ground obstacles and the vehicle body; in order to determine the obstacle most likely to collide with the vehicle body, the first contour and the second contour can first be converted into polar coordinates relative to the coordinate origin.
  • the polar coordinates of the two are then used to determine the target obstacle closest to the position of the vehicle body, and the distance between the target obstacle and the vehicle body is used to determine whether obstacle avoidance is required, so as to ensure the safety of the vehicle during automatic driving. Please refer to FIG. 6.
  • in FIG. 6: reference numeral 1 is the coordinate origin of the stereo camera, 5.1 is the projection contour of the second obstacle on the Z axis, 5.2 is the first contour of the first obstacle on the Z axis, 5.3 is the second contour of the second obstacle, 5.4 is the projection contour of the first obstacle on the XY plane, 6.1 is the contour line of the second contour in polar coordinate form, and 6.2 is the contour line of the first contour in polar coordinate form.
  • in this embodiment, when an obstacle is detected by the camera device installed on the vehicle body, each obstacle is first projected in a preset direction, and the obstacles are divided into first obstacles and second obstacles according to the height of each obstacle represented by its projection height; after that, the first contour of the first obstacle in the preset direction is identified, and texture edge detection is performed on the second obstacle to obtain its second contour; then, based on the positions of the first contour and the second contour, the target obstacle closest to the position of the vehicle body is determined.
  • since the recognized first contour has little correlation with the angle at which the camera is installed, vehicle body shaking will not interfere with image processing, ensuring the accuracy of recognition; the second contour is generated based on texture edge detection, so low obstacles can be detected without lowering the ground accuracy, the various obstacles in the transportation environment can be fully identified, and environmental compatibility is strong.
  • FIG. 3 is a schematic flowchart of a second embodiment of a ground obstacle detection method according to the present application.
  • the difference between the second embodiment of the ground obstacle detection method and the first embodiment is that the step of determining the target obstacle closest to the position of the vehicle body according to the first contour and the second contour includes:
  • Step 41 Convert the first contour to a first polar coordinate, convert the second contour to a second polar coordinate, and determine whether the first polar coordinate and the second polar coordinate are within a preset angle range;
  • first, the first contour is converted into the first polar coordinate and the second contour is converted into the second polar coordinate. Since both the first contour and the second contour are composed of a plurality of pixel points, the converted first polar coordinate and second polar coordinate are essentially polar coordinate sets composed of the polar coordinate values of the plurality of pixel points.
  • the step of converting the first contour to the first polar coordinate includes:
  • Step 411 Read the installation height, the installation angle, the vertical field of view, the horizontal field of view, the number of effective pixel rows and the number of effective pixel columns of the camera device;
  • Step 412 Read all the pixel points in the first contour as measurement points, and perform the following steps for each measurement point one by one:
  • Step 413 Detect the depth value between the measurement point and the camera device, and the number of pixel rows and pixel columns of the measurement point;
  • Step 414 Determine the polar coordinate modulus of the measurement point according to the installation angle, the vertical field of view angle, the number of pixel rows, the number of effective pixel rows, and the depth value;
  • Step 415 Determine the polar coordinates of the measurement point according to the horizontal field of view angle, installation height, installation angle, vertical field of view angle, the number of pixel columns, the number of effective pixel columns, the number of pixel rows and the number of effective pixel rows. angle;
  • Step 416 Set the polar coordinate modulus and the polar coordinate angle as the polar coordinates of the measurement point, and after polar coordinates have been generated for each measurement point, form all of the polar coordinates into the first polar coordinate.
  • the installation parameters of the stereo camera are read, and the installation parameters include the installation height H, the installation angle ⁇ , the vertical field of view angle ⁇ z , the horizontal field of view angle ⁇ h , the number of effective pixel rows L and the number of effective pixel columns C;
  • the number of effective pixel rows is the maximum imaging pixel value of the stereo camera in the Y-axis direction
  • the effective pixel column number is the maximum imaging pixel value of the stereo camera in the X-axis direction.
  • specifically, the horizontal field of view angle θ h , the installation height H, the installation angle θ and the vertical field of view angle θ z are substituted into formulas (3) to (6) to calculate the coordinates of the nearest projection point and the absolute value of the coordinates of the farthest projection point; the number of pixel columns m and the number of effective pixel columns C are then substituted into formula (7), and the number of pixel rows n and the number of effective pixel rows L into formula (8), to obtain the absolute values of the coordinates of the measurement point.
  • the polar coordinate modulus and polar coordinate angle of the measurement point calculated by formulas (1) to (9) form the polar coordinates of the measurement point; after every measurement point has been put through formulas (1) to (9) to calculate its polar coordinates, all of the polar coordinates form the first polar coordinate, and the first contour is thereby completely converted into the first polar coordinate. It should be noted that the process of converting the second contour into the second polar coordinate is the same as the above process of converting the first contour into the first polar coordinate, and will not be repeated here.
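The exact formulas (1) to (9) are not reproduced above, so the following Python sketch only illustrates the general geometric idea using generic pinhole-camera relations: the pixel row fixes a vertical ray angle inside the vertical field of view, the pixel column fixes a horizontal angle inside the horizontal field of view, and together with the measured depth they yield a polar modulus and a polar angle. All names and relations below are assumptions for illustration, not the patent's formulas.

```python
import math

def measurement_point_to_polar(depth, row, col, install_angle,
                               fov_vertical, fov_horizontal,
                               rows_total, cols_total):
    """Illustrative conversion of one contour pixel to ground-plane polar coordinates.

    Generic pinhole-camera geometry with angles in radians; this is NOT the
    patent's formulas (1)-(9), which additionally involve the installation height H.
    """
    # Vertical ray angle of the pixel row, measured from the horizontal plane.
    ray_pitch = install_angle - fov_vertical * (row / rows_total - 0.5)
    # Horizontal ray angle of the pixel column, measured from the optical axis.
    ray_yaw = fov_horizontal * (col / cols_total - 0.5)
    # Polar coordinate modulus: horizontal range of the point from the camera.
    polar_modulus = depth * math.cos(ray_pitch)
    # Polar coordinate angle: bearing of the point relative to straight ahead.
    polar_angle = ray_yaw
    return polar_modulus, polar_angle
```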
  • Step 42 If it is within a preset angle range, merge the first polar coordinate and the second polar coordinate to generate a polar coordinate set;
  • Step 43 Determine the target obstacle that is closest to the position of the vehicle body according to the set of polar coordinates.
  • a preset angle range that characterizes the proximity of positions is set in advance, for example 5 degrees; it is determined whether the first polar coordinate and the second polar coordinate are both within the preset angle range, and if they are, the first polar coordinate and the second polar coordinate are merged and imaged on the same layer to form a coordinate set.
  • the coordinate set contains each polar coordinate point of the first polar coordinate and each polar coordinate point of the second polar coordinate.
  • specifically, the step of determining the target obstacle closest to the position of the vehicle body according to the polar coordinate set includes:
  • Step 431 Perform median filtering, denoising, and mean filtering on each element in the polar coordinate set in sequence to generate a processing result
  • Step 432 Combine the elements in the processing result to generate a target element, and calculate a first angle and a first distance corresponding to the target element and the camera device;
  • Step 433 Determine the target obstacle that is closest to the position of the vehicle body according to the first angle and the first distance.
  • each polar coordinate point in the polar coordinate set is treated as an element of the set; median filtering is first performed on the elements to remove salt-and-pepper noise points; then, by setting a minimum value of the distance from the coordinate origin, the elements whose distance from the origin is greater than this minimum value are removed; after that, mean filtering is performed on the remaining elements to generate the processing result. The elements in the processing result are then merged, the polar coordinate points serving as elements are merged into one polar coordinate point, and the merged polar coordinate point is the target element.
  • finally, the first angle and the first distance of the target element relative to the coordinate origin are calculated, and the relative distance between the vehicle body and the merged obstacles is represented by the first distance and the first angle; by comparison with the relative distances between the vehicle body and the obstacles that have not been merged, the target obstacle closest to the position of the vehicle body among the multiple obstacles is determined.
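As one way to picture the filtering and merging of the polar coordinate set, the sketch below uses NumPy and SciPy; the array layout, window sizes and the distance cut-off are assumptions, not values taken from the patent.

```python
import numpy as np
from scipy.signal import medfilt

def merge_polar_set(angles, ranges, range_cutoff):
    """Median filter -> distance-based denoising -> mean filter -> merge.

    `angles` and `ranges` are assumed to be parallel 1-D arrays of the merged
    polar coordinate set; `range_cutoff` plays the role of the distance value
    used for denoising. Returns one representative (angle, distance) pair, or
    None if every element was removed.
    """
    ranges = medfilt(np.asarray(ranges, dtype=float), kernel_size=5)  # salt-and-pepper removal
    angles = np.asarray(angles, dtype=float)
    keep = ranges <= range_cutoff                                     # drop far-away points
    angles, ranges = angles[keep], ranges[keep]
    if ranges.size == 0:
        return None
    ranges = np.convolve(ranges, np.ones(3) / 3.0, mode="same")       # mean filtering
    # Merge the remaining points into a single representative polar point.
    return float(np.mean(angles)), float(np.mean(ranges))
```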
  • if the positions of the obstacles are far apart, each obstacle needs to be processed separately; specifically, after the step of determining whether the first polar coordinate and the second polar coordinate are within the preset angle range, the method includes:
  • Step 44 If the first polar coordinate and the second polar coordinate are not within a preset angle range, determine the second angle and the second distance between the first polar coordinate and the camera device, and the A third angle and a third distance between the second polar coordinate and the camera device;
  • Step 45 Compare the element group formed between the second angle and the second distance with the element group formed between the third angle and the third distance to generate a comparison result, and compare the result according to the comparison As a result, the target obstacle closest to the position of the vehicle body is determined.
  • median filtering, denoising and mean filtering are performed separately on the elements of the first polar coordinate and of the second polar coordinate; the processed elements in the first polar coordinate are merged to generate a first element point, and the processed elements in the second polar coordinate are merged to generate a second element point.
  • the angle and distance between the first element point and the coordinate origin are then calculated, namely the second angle and the second distance between the first polar coordinate and the camera device, and the angle and distance between the second element point and the coordinate origin are calculated, namely the third angle and the third distance between the second polar coordinate and the camera device.
  • the second angle and the second distance form one element group, and the third angle and the third distance form another element group; the two element groups are compared to generate a comparison result representing the respective distances of the first polar coordinate and the second polar coordinate from the vehicle body, and the target obstacle closest to the position of the vehicle body is determined based on the comparison result.
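A minimal sketch of this final comparison, assuming each polar coordinate set has already been reduced to a single (angle, distance) element group as described above:

```python
def nearer_element_group(group_first, group_second):
    """Compare the two element groups and keep the one nearer to the camera.

    `group_first` is the (second angle, second distance) pair derived from the
    first polar coordinate; `group_second` is the (third angle, third distance)
    pair derived from the second polar coordinate. The tuple layout is an assumption.
    """
    return group_first if group_first[1] <= group_second[1] else group_second
```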
  • in this way, obstacles that are close in position are processed together, and obstacles that are far apart are processed separately; this avoids processing every obstacle separately, which would increase the amount of calculation while contributing little to obstacle avoidance decisions, so detection efficiency is improved while the obstacles are still comprehensively detected.
  • the processing methods of converting the contours to polar coordinates, filtering, denoising and calculating improve the accuracy of the calculated distance between each obstacle and the vehicle body, thereby making the detection of the target obstacle more accurate.
  • Fig. 4 is a schematic flowchart of a third embodiment of a ground obstacle detection method according to the present application.
  • Step S11 Read the installation height, installation angle, field of view angle and number of effective pixel rows of the camera device, and use each obstacle as a point to be measured, and perform the following steps for each point to be measured one by one:
  • Step S12 Detect the measured depth value between the camera device and the point to be measured, and the number of the pixel row in which the point to be measured is located;
  • Step S13 Determine the deflection angle of the pixel row corresponding to the point to be measured according to the installation angle, the field of view angle, the number of effective pixel rows, and the number of pixel rows where the point is located;
  • Step S14 Determine the projection intermediate value of the obstacle in the preset direction according to the measured depth value and the deflection angle of the row where the pixel is located;
  • Step S15 Generate the projection height of each obstacle according to the installation height and the projection intermediate value.
  • This embodiment realizes that each obstacle is projected in a preset direction, and the projection height of each obstacle is obtained, so that each obstacle is divided into a first obstacle and a second obstacle through each projection height.
  • each point to be measured is processed and calculated as described above, and each calculation result obtained is the projection height of the corresponding obstacle; each obstacle is then divided into a first obstacle or a second obstacle according to its projection height.
  • in this embodiment, the height of each obstacle is characterized by calculating its projection height in the preset direction, and each obstacle is then divided into a first obstacle or a second obstacle, so that different methods can be adopted for different types of obstacles to obtain their contours, making obstacle detection more comprehensive and accurate.
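A rough sketch of steps S11 to S15 is given below; it uses generic pinhole-camera geometry with angles in radians, so the exact expressions are assumptions rather than the patent's own formulas.

```python
import math

def projection_height(depth, row, rows_total,
                      install_height, install_angle, fov_vertical):
    """Illustrative projection-height computation (steps S11-S15).

    Treat this as a sketch: the sign conventions and the exact relation between
    the row index and the ray angle are assumptions.
    """
    # Deflection angle of the pixel row that images the point (S13).
    row_deflection = install_angle - fov_vertical * (row / rows_total - 0.5)
    # Intermediate value: vertical drop of the point below the camera (S14).
    projection_intermediate = depth * math.sin(row_deflection)
    # Height of the obstacle point above the ground, using the installation height (S15).
    return install_height - projection_intermediate
```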
  • the difference between the fourth embodiment of the ground obstacle detection method and the first, second, or third embodiment is that the step of performing texture edge detection on the second obstacle to generate the second contour of the second obstacle includes:
  • Step S31 Acquire an imaging image corresponding to the second obstacle based on the camera device, and call a first preset algorithm to perform fill-in processing on the imaging image;
  • Step S32 Invoke a second preset algorithm, extract the initial contour of the imaged image after the fill-in processing, and determine the second contour of the second obstacle according to the initial contour.
  • texture edge detection is performed on the second obstacle to obtain the second contour of the second obstacle.
  • a first preset algorithm and a second preset algorithm are preset.
  • the first preset algorithm is preferably a data filling method, and the second preset algorithm is a local Fourier transform; the imaging image of the second obstacle is processed by the first preset algorithm and the second preset algorithm.
  • specifically, a stereo camera is first used to photograph the second obstacle to obtain an imaged image; considering the effects of factors such as the stereo camera itself and ambient light, some pixels in the imaged image may have no data, and such empty pixels would cause the calculation of the second preset algorithm to fail, so they need to be processed by the first preset algorithm.
  • the data filling method used as the first preset algorithm includes dilation and median filtering: the imaging image is dilated by the dilation algorithm to fill in the hole data in the imaging image, and the impurity points in the imaging image are removed by median filtering.
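A possible OpenCV realization of this fill-in step is sketched below; the kernel size, iteration count and median window are illustrative assumptions.

```python
import cv2
import numpy as np

def fill_in_imaging_image(image: np.ndarray) -> np.ndarray:
    """Hole filling for the short-obstacle image: dilation then median filtering.

    `image` is assumed to be a single-channel 8-bit depth/intensity image in
    which missing data appears as zero-valued hole pixels.
    """
    kernel = np.ones((3, 3), np.uint8)
    dilated = cv2.dilate(image, kernel, iterations=1)  # fill small hole pixels
    return cv2.medianBlur(dilated, 5)                  # remove isolated impurity points
```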
  • the second preset algorithm is called to process the imaged image after the fill-in process, and the 16*16 pixel recognition area is used as the smallest recognition unit to segment the imaged image to extract the target pixel where the contour edge of the imaged image is located.
  • where F(k,l) is the value of the target pixel recognition area of the imaged image in the frequency domain, f(i,j) is the value of the target pixel recognition area of the imaged image in the spatial domain, and M is the amplitude of F(k,l).
  • the amplitude image is cut and redistributed: the quadrants of the result are rearranged so that the coordinate origin corresponds to the center of the image, normalization is then performed so that values beyond the display range can be handled for classification, the recognition areas containing edges are classified, and the contours are regenerated, so that the initial contour of the second obstacle is extracted from the ground texture.
  • then, color-block dilation processing is performed on the initially extracted contour, edges are extracted from the original imaging image according to the processing result, and the contour is accurately located from the extracted edges to obtain the second contour of the second obstacle.
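The block-wise local Fourier transform can be sketched with NumPy as follows; the 16x16 block size follows the text, while the log-amplitude score and the normalization used for the later classification are assumptions.

```python
import numpy as np

def texture_energy_map(image: np.ndarray, block: int = 16) -> np.ndarray:
    """Block-wise local Fourier transform used to separate obstacle texture from ground texture.

    For each 16x16 recognition area, compute the 2-D FFT, shift the zero
    frequency to the block center, and record a normalized log-amplitude score;
    areas whose score deviates from the ground texture can then be classified
    as edge-containing areas (the threshold applied afterwards is an assumption).
    """
    h, w = image.shape[:2]
    scores = np.zeros((h // block, w // block))
    for bi in range(h // block):
        for bj in range(w // block):
            patch = image[bi * block:(bi + 1) * block, bj * block:(bj + 1) * block]
            spectrum = np.fft.fftshift(np.fft.fft2(patch))  # F(k,l), origin moved to the center
            amplitude = np.log1p(np.abs(spectrum))          # amplitude M, range-compressed
            scores[bi, bj] = amplitude.mean()
    # Normalize to [0, 1] so that a simple threshold can mark edge-containing blocks.
    scores -= scores.min()
    if scores.max() > 0:
        scores /= scores.max()
    return scores
```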
  • in this embodiment, for low obstacles on the ground, the contour is extracted based on the obvious difference between the obstacle and the ground texture, and recognition and detection are then performed.
  • high obstacles and low obstacles are processed in different ways, so that low obstacles are detected while high obstacles are identified, ensuring the comprehensiveness and accuracy of the detection of the various obstacles in the vehicle driving environment.
  • the aforementioned storage media may be read-only memory, magnetic disks, or optical disks.
  • in addition, an embodiment of the present application also proposes a computer-readable storage medium having a ground obstacle detection program stored thereon; when the ground obstacle detection program is executed by a processor, the steps of the above-mentioned ground obstacle detection method are implemented.
  • the technical solution of this application, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and includes several instructions to enable a terminal device (which can be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to execute the method described in each embodiment of the present application.

Abstract

Disclosed are a ground obstacle detection method and device, and a computer-readable storage medium. The method comprises: when obstacles are detected on the basis of a photographing apparatus mounted on a vehicle body, performing projection on each of the obstacles in a preset direction, so as to generate a projection height of each of the obstacles; according to the projection heights, classifying the obstacles into a first obstacle and a second obstacle, and identifying a first contour of the first obstacle in the preset direction; performing texture edge detection on the second obstacle to generate a second contour of the second obstacle; and according to the first contour and the second contour, determining a target obstacle closest to the vehicle body. According to the present application, with regard to a high obstacle, image processing may not be interfered with due to wobbling of the vehicle body, thereby ensuring the precision of identification; and with regard to a short obstacle, detection can be realized without the need to reduce the ground precision, various obstacles in a transport environment can be comprehensively identified, and a strong environment compatibility is provided.

Description

Ground obstacle detection method, device and computer-readable storage medium
This application claims the priority of the Chinese patent application filed with the Chinese Patent Office on November 12, 2019, with application number 201911103707.4 and the invention title "Ground obstacle detection method, device and computer-readable storage medium", the entire content of which is incorporated herein by reference.
Technical field
This application relates to the field of intelligent driving technology, and in particular to a ground obstacle detection method, device, and computer-readable storage medium.
Background
With the development of smart technology, the application of smart warehouses has become more and more extensive; at present, smart warehouses use navigation to guide vehicles to drive automatically and realize the transportation of goods. If there are obstacles on the transportation path and its surroundings during transportation, the distance of the obstacles needs to be detected to avoid the danger of collision.
Although traditional lidar SLAM (simultaneous localization and mapping) can be used directly for navigation, in the process of processing obstacle images the vehicle body posture may shake, which affects the actual measurement angle of the camera installed on the vehicle body; this causes heavy interference in ground removal and can even produce false obstacles. If the ground accuracy is lowered instead, some very low obstacles cannot be recognized, resulting in incomplete obstacle detection.
Summary of the invention
The main purpose of this application is to provide a ground obstacle detection method, device, and computer-readable storage medium, aiming to solve the technical problems of inaccurate and incomplete ground obstacle detection in the prior art.
To achieve the above objective, the present application provides a ground obstacle detection method, which includes the steps of:
when an obstacle is detected based on the camera device installed on the vehicle body, projecting each of the obstacles in a preset direction to generate the projection height of each of the obstacles;
dividing each of the obstacles into a first obstacle and a second obstacle according to each of the projection heights, and identifying a first contour of the first obstacle in the preset direction;
performing texture edge detection on the second obstacle to generate a second contour of the second obstacle;
determining, according to the first contour and the second contour, the target obstacle closest to the position of the vehicle body.
In addition, to achieve the above objective, the present application also provides a ground obstacle detection device; the ground obstacle detection device includes a memory, a processor, and a ground obstacle detection program stored in the memory and capable of running on the processor, and the ground obstacle detection program implements the steps of the above-mentioned ground obstacle detection method when executed by the processor.
In addition, to achieve the above objective, the present application also provides a computer-readable storage medium having a ground obstacle detection program stored thereon; when the ground obstacle detection program is executed by a processor, the steps of the above-mentioned ground obstacle detection method are implemented.
When an obstacle is detected by the camera device installed on the vehicle body, this application first projects each obstacle in a preset direction and divides the obstacles into a first obstacle and a second obstacle according to the height of each obstacle represented by its projection height; thereafter, the first contour of the first obstacle in the preset direction is identified, and texture edge detection is performed on the second obstacle to obtain its second contour; then, based on the positions of the first contour and the second contour, the target obstacle closest to the position of the vehicle body is determined. Since the recognized first contour has little correlation with the angle at which the camera is installed, vehicle body shaking will not interfere with image processing, ensuring the accuracy of recognition; the second contour is generated based on texture edge detection, so low obstacles can be detected without lowering the ground accuracy, the various obstacles in the transportation environment can be fully identified, and environmental compatibility is strong.
Description of the drawings
FIG. 1 is a schematic structural diagram of the hardware operating environment involved in the solution of an embodiment of the present application;
FIG. 2 is a schematic flowchart of a first embodiment of the ground obstacle detection method of the present application;
FIG. 3 is a schematic flowchart of a second embodiment of the ground obstacle detection method of the present application;
FIG. 4 is a schematic flowchart of a third embodiment of the ground obstacle detection method of the present application;
FIG. 5 is a schematic diagram of generating the first contour in the ground obstacle detection method of the present application;
FIG. 6 is a schematic diagram of contour line generation in polar coordinate form for the first contour and the second contour in the ground obstacle detection method of the present application.
The realization of the objectives, functional characteristics, and advantages of this application will be further described in conjunction with the embodiments and with reference to the accompanying drawings.
Detailed description of the embodiments
It should be understood that the specific embodiments described here are only used to explain the application and are not used to limit the application.
As shown in FIG. 1, FIG. 1 is a schematic structural diagram of the hardware operating environment involved in the solution of an embodiment of the present application.
As shown in FIG. 1, the ground obstacle detection device may include: a processor 1001, such as a CPU, a user interface 1003, a network interface 1004, a memory 1005, and a communication bus 1002. Among them, the communication bus 1002 is used to implement connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as a magnetic disk memory. Optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
Those skilled in the art can understand that the ground obstacle detection structure shown in FIG. 1 does not constitute a limitation on the ground obstacle detection device, and it may include more or fewer components than shown in the figure, or combine certain components, or use a different arrangement of components.
As shown in FIG. 1, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a ground obstacle detection program. Among them, the operating system is a program that manages and controls the ground obstacle detection hardware and software resources, and supports the operation of the ground obstacle detection program and other software or programs.
In the ground obstacle detection device shown in FIG. 1, the user interface 1003 is mainly used to connect to the client (user side) and communicate with the client; the network interface 1004 is mainly used to connect to the background server and communicate with the background server; and the processor 1001 may be used to call the ground obstacle detection program stored in the memory 1005 and perform the following operations:
when an obstacle is detected based on the camera device installed on the vehicle body, projecting each of the obstacles in a preset direction to generate the projection height of each of the obstacles;
dividing each of the obstacles into a first obstacle and a second obstacle according to each of the projection heights, and identifying a first contour of the first obstacle in the preset direction;
performing texture edge detection on the second obstacle to generate a second contour of the second obstacle;
determining, according to the first contour and the second contour, the target obstacle closest to the position of the vehicle body.
Further, the step of determining the target obstacle closest to the position of the vehicle body according to the first contour and the second contour includes:
converting the first contour into a first polar coordinate, converting the second contour into a second polar coordinate, and determining whether the first polar coordinate and the second polar coordinate are within a preset angle range;
if they are within the preset angle range, merging the first polar coordinate and the second polar coordinate to generate a polar coordinate set;
determining, according to the polar coordinate set, the target obstacle closest to the position of the vehicle body.
Further, the step of converting the first contour into the first polar coordinate includes:
reading the installation height, installation angle, vertical field of view angle, horizontal field of view angle, number of effective pixel rows, and number of effective pixel columns of the camera device;
reading all the pixel points in the first contour as measurement points, and performing the following steps for each measurement point one by one:
detecting the depth value between the measurement point and the camera device, as well as the number of the pixel row and the number of the pixel column in which the measurement point is located;
determining the polar coordinate modulus of the measurement point according to the installation angle, the vertical field of view angle, the number of the pixel row, the number of effective pixel rows, and the depth value;
determining the polar coordinate angle of the measurement point according to the horizontal field of view angle, the installation height, the installation angle, the vertical field of view angle, the number of the pixel column, the number of effective pixel columns, the number of the pixel row, and the number of effective pixel rows;
setting the polar coordinate modulus and the polar coordinate angle as the polar coordinates of the measurement point, and after polar coordinates have been generated for each measurement point, forming all of the polar coordinates into the first polar coordinate.
Further, the step of determining the target obstacle closest to the position of the vehicle body according to the polar coordinate set includes:
performing median filtering, denoising, and mean filtering on each element in the polar coordinate set in sequence to generate a processing result;
merging the elements in the processing result to generate a target element, and calculating a first angle and a first distance between the target element and the camera device;
determining, according to the first angle and the first distance, the target obstacle closest to the position of the vehicle body.
Further, after the step of determining whether the first polar coordinate and the second polar coordinate are within a preset angle range, the processor 1001 may be used to call the ground obstacle detection program stored in the memory 1005 and perform the following operations:
if the first polar coordinate and the second polar coordinate are not within the preset angle range, determining a second angle and a second distance between the first polar coordinate and the camera device, and a third angle and a third distance between the second polar coordinate and the camera device;
comparing the element group formed by the second angle and the second distance with the element group formed by the third angle and the third distance to generate a comparison result, and determining, according to the comparison result, the target obstacle closest to the position of the vehicle body.
Further, the step of projecting each of the obstacles in a preset direction to generate the projection height of each of the obstacles includes:
reading the installation height, installation angle, field of view angle, and number of effective pixel rows of the camera device, taking each obstacle as a point to be measured, and performing the following steps for each point to be measured one by one:
detecting the measured depth value between the camera device and the point to be measured, and the number of the pixel row in which the point to be measured is located;
determining the deflection angle of the pixel row corresponding to the point to be measured according to the installation angle, the field of view angle, the number of effective pixel rows, and the number of the pixel row in which the point is located;
determining the projection intermediate value of the obstacle in the preset direction according to the measured depth value and the deflection angle of the row in which the pixel is located;
generating the projection height of each obstacle according to the installation height and the projection intermediate value.
Further, the step of identifying the first contour of the first obstacle in the preset direction includes:
reading the projection image of the first obstacle in the preset direction, and sequentially performing mean filtering, edge extraction, contour search, and polyline fitting on the projection image to obtain the first contour.
Further, the step of performing texture edge detection on the second obstacle to generate the second contour of the second obstacle includes:
acquiring an imaging image corresponding to the second obstacle based on the camera device, and calling a first preset algorithm to perform fill-in processing on the imaging image;
calling a second preset algorithm, extracting the initial contour of the imaging image after the fill-in processing, and determining the second contour of the second obstacle according to the initial contour.
Further, the step of dividing each of the obstacles into a first obstacle and a second obstacle according to each of the projection heights includes:
comparing the projection height of each obstacle with the projection threshold one by one, and determining the target projection heights among the projection heights that are greater than the projection threshold;
dividing each of the obstacles into the first obstacle and the second obstacle according to the target projection heights.
基于上述的结构,提出地面障碍物检测方法的各个实施例。Based on the above structure, various embodiments of the ground obstacle detection method are proposed.
参照图2,图2为本申请地面障碍物检测方法第一实施例的流程示意图。Referring to FIG. 2, FIG. 2 is a schematic flowchart of a first embodiment of a ground obstacle detection method according to the present application.
The embodiments of the present application provide embodiments of a ground obstacle detection method. It should be noted that although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from the one given here.
具体地,地面障碍物检测方法包括:Specifically, the ground obstacle detection method includes:
步骤S10,当基于车体上所安装的摄像装置检测到障碍物时,对各所述障碍物进行预设方向上的投影,生成各所述障碍物的投影高度;Step S10, when an obstacle is detected based on the camera device installed on the vehicle body, project each of the obstacles in a preset direction to generate the projection height of each of the obstacles;
The ground obstacle detection method of this embodiment is applied during intelligent automatic driving to detect obstacles on and around the driving path, so as to ensure driving safety. Intelligent automatic driving is applicable both to warehouse freight in a closed environment and to road transport in an open environment; this embodiment takes warehouse freight as an example. Specifically, a camera device, preferably a stereo camera, is installed on the body of the autonomously driven vehicle; while the vehicle is driving, the stereo camera scans the surrounding environment in real time to determine whether obstacles are present on the driving path and around it.
In addition, a three-dimensional spatial coordinate system is established in advance in this embodiment. It takes the position of the stereo camera as the coordinate origin, the plane on which the vehicle sits as the XY plane, and the upper space perpendicular to the XY plane as the space of the positive Z-axis direction; within the XY plane, the direction straight ahead of the vehicle is the Y-axis direction, and the direction to the right of the vehicle, perpendicular to the Y axis, is the X-axis direction. Taking the positive Z-axis direction as the preset direction, once the camera device detects obstacles, it images each obstacle, forms the projection of each obstacle in the preset direction, and detects the projection height of each obstacle.
步骤S20,根据各所述投影高度,将各所述障碍物划分为第一障碍物和第二障碍物,并识别所述第一障碍物在预设方向的第一轮廓;Step S20, dividing each of the obstacles into a first obstacle and a second obstacle according to each of the projection heights, and identifying a first contour of the first obstacle in a preset direction;
Further, the projection height of each obstacle characterizes how tall the obstacle is. In order to treat tall and short obstacles differently, a projection threshold for separating tall obstacles from short obstacles is set in advance; according to the relationship between each obstacle's projection height and the projection threshold, the obstacles can be divided into first obstacles and second obstacles, where a first obstacle is a tall obstacle and a second obstacle is a short obstacle. Specifically, the step of dividing the obstacles into first obstacles and second obstacles according to the projection heights includes:
步骤S21,将各所述障碍物的投影高度逐一和所述投影阈值进行对比,确定各所述投影高度中大于所述投影阈值的目标投影高度;Step S21, comparing the projection height of each obstacle with the projection threshold one by one, and determining the target projection height of each of the projection heights that is greater than the projection threshold;
步骤S22,根据所述目标投影高度,将各所述障碍物划分为所述第一障碍物和所述第二障碍物。Step S22: Divide each of the obstacles into the first obstacle and the second obstacle according to the target projection height.
In order to determine the relationship between each obstacle's projection height and the projection threshold, the projection heights are compared with the threshold one by one, and the target projection heights greater than the threshold are selected from them; the obstacles whose projection heights are target projection heights are then determined as first obstacles, and the remaining obstacles as second obstacles.
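As a concrete illustration of this partition step, the following Python sketch splits detected obstacles by their projection heights; the (height, obstacle) pair layout is an assumption introduced here for illustration and is not part of the original disclosure.

```python
def partition_obstacles(obstacles, projection_threshold):
    """Split detected obstacles into first (tall) and second (short) obstacles
    by comparing each projection height with the projection threshold."""
    first, second = [], []
    for height, obstacle in obstacles:        # (projection_height, obstacle) pairs
        if height > projection_threshold:
            first.append(obstacle)            # contour later taken from the Z-axis projection
        else:
            second.append(obstacle)           # contour later taken from texture edge detection
    return first, second
```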
Further, to ensure that the preset projection threshold allows the camera device to image all obstacles accurately and completely, the projection threshold needs to be adjusted several times, and the adjustment is carried out before the comparison. Specifically, before the step of dividing the obstacles into first obstacles and second obstacles according to the projection heights, the method includes:
步骤a1,在接收到预设方向上的设定值后,基于所述摄像装置获取与所述设定值对应的第一摄像图像,并判断所述第一摄像图像是否有效;Step a1, after receiving the set value in the preset direction, acquire a first camera image corresponding to the set value based on the camera device, and determine whether the first camera image is valid;
步骤a2,若所述第一摄像图像有效,则对所述设定值进行调整,并基于所述摄像装置获取与调整后的所述设定值对应的第二摄像图像;Step a2, if the first captured image is valid, adjust the set value, and acquire a second captured image corresponding to the adjusted set value based on the camera device;
步骤a3,当所述第二摄像图像有效时,根据预设干扰因素,将调整后的所述设定值更新为待验证设定值;Step a3, when the second camera image is valid, update the adjusted setting value to the setting value to be verified according to a preset interference factor;
步骤a4,对所述待验证设定值进行验证,并在所述待验证设定值验证成功后,将所述待验证设定值确定为投影阈值。Step a4, verifying the setting value to be verified, and determining the setting value to be verified as the projection threshold after the verification of the setting value to be verified is successful.
Specifically, after the stereo camera is installed on the vehicle body, the vehicle is driven onto level ground and setting and adjustment of the projection threshold begins. The installation height of the stereo camera is detected, and the setting value of the vehicle in the preset direction is set to 0, with an error range within plus or minus 5% of the installation height. After the setting value is received, the stereo camera photographs the ground in front of the vehicle; the photographed range varies with the setting value, and the captured image is taken as the first captured image corresponding to the setting value. The validity of the first captured image is then judged, that is, whether the ground ahead is fully presented in the image. If it is fully presented, the first captured image is judged to be valid and the setting value is adjusted to a value greater than 0. If the first captured image is judged to be invalid, the installation height, installation angle and field-of-view angle of the stereo camera are adjusted to ensure that the ground ahead is fully presented in the image.
Further, after the setting value has been set to a value greater than 0, the stereo camera again photographs the ground in front of the vehicle to obtain a second captured image corresponding to the adjusted setting value, and the validity of the second captured image is verified, that is, whether the preset-direction projection of the ground ahead has completely disappeared. If it has completely disappeared, the second captured image is judged to be valid; taking preset interference factors such as vehicle body tilt and shaking into account, the setting value is increased, and the adjusted setting value is updated to the setting value to be verified, so that the preset-direction projection of the ground ahead will not appear in the captured image. The setting value to be verified is then verified by judging whether all obstacles on the ground ahead that are higher than the setting value appear within the field of view of the stereo camera; if they all appear in the field of view, the verification is judged successful and the setting value to be verified is determined as the projection threshold, which is used to distinguish tall obstacles from short obstacles.
Furthermore, after the first obstacles have been obtained by division, the first contour formed by each first obstacle in the preset direction is identified; this first contour is the outer contour of the first obstacle facing the vehicle body. The step of identifying the first contour of the first obstacle in the preset direction includes:
步骤S23,读取所述第一障碍物在预设方向上的投影图,并对所述投影图依次经过均值滤波、边缘提取、轮廓查找和折线拟合处理,得到所述第一轮廓。Step S23: Read a projection image of the first obstacle in a preset direction, and sequentially perform mean filtering, edge extraction, contour search, and polyline fitting processing on the projection image to obtain the first contour.
For the first obstacle, its projection image in the preset direction is read; this projection image reflects the contour curve of the tall obstacle. To extract this contour curve, the projection image is first mean-filtered to remove spurious points produced by non-obstacle ground areas imaged by the stereo camera; edge extraction is then performed on the projection image to obtain the edge pixels that the tall obstacle forms in the projection image, and a contour search is performed on these edge pixels to obtain the contour points of the tall obstacle. By first extracting the edge pixels of the tall obstacle in the projection image and then searching for contour points on the basis of those edge pixels, the accuracy of the obtained contour points is ensured. Polyline fitting is then applied to the contour points to generate the first contour of the first obstacle in the preset direction. Referring to FIG. 6, reference numeral 1 is the origin of the stereo camera, 1.3 is the installation height of the stereo camera, 2.1 is the projection height of the first obstacle on the Z axis, 2.2 is the projection height of the second obstacle on the Z axis, 4 is the projection threshold, 5.1 is the projection contour of the second obstacle on the Z axis, and 5.2 is the first contour of the first obstacle on the Z axis.
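A minimal OpenCV sketch of this pipeline, assuming an 8-bit single-channel projection image; Canny is used here as one possible edge extractor, and the blur kernel, Canny thresholds and polyline tolerance are illustrative values not given in the original text.

```python
import cv2

def first_contour_from_projection(projection_img, blur_ksize=5, epsilon=2.0):
    """Mean filtering -> edge extraction -> contour search -> polyline fitting
    on the tall obstacle's projection image."""
    smoothed = cv2.blur(projection_img, (blur_ksize, blur_ksize))   # mean filter removes isolated noise
    edges = cv2.Canny(smoothed, 50, 150)                            # edge extraction (assumed thresholds)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)         # contour search on the edge pixels
    # polyline fitting of each found contour
    return [cv2.approxPolyDP(c, epsilon, closed=False) for c in contours]
```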
步骤S30,对所述第二障碍物进行纹理边缘检测,生成所述第二障碍物的第二轮廓;Step S30: Perform texture edge detection on the second obstacle to generate a second contour of the second obstacle;
Further, considering that the imaging of a short obstacle differs noticeably from the ground texture, texture edge detection is used to generate the second contour of the second obstacle. Through a local Fourier transform, the positions where the frequency characteristics of the ground differ are found; these positions are the imaged contour of the short obstacle on the ground, and the second contour of the second obstacle is thereby obtained.
步骤S40,根据所述第一轮廓和所述第二轮廓,确定与所述车体所在位置距离最近的目标障碍物。Step S40: Determine the target obstacle closest to the position of the vehicle body according to the first contour and the second contour.
Furthermore, different obstacles are at different distances from the vehicle body, and the closer the distance, the greater the risk of collision. The first contour reflects the distance between a tall ground obstacle and the vehicle body, and the second contour reflects the distance between a short ground obstacle and the vehicle body. To determine which obstacle is most likely to collide with the vehicle body, the first contour and the second contour may first be converted into polar coordinates relative to the coordinate origin, and these polar coordinates are used to determine the target obstacle closest to the position of the vehicle body; the distance between the target obstacle and the vehicle body then determines whether obstacle avoidance is required, ensuring safety during automatic driving. Referring to FIG. 6, reference numeral 1 is the coordinate origin of the stereo camera, 5.1 is the projection contour of the second obstacle on the Z axis, 5.2 is the first contour of the first obstacle on the Z axis, 5.3 is the second contour of the second obstacle, 5.4 is the projection contour of the first obstacle on the XY plane, 6.1 is the polar-coordinate contour line of the second contour, and 6.2 is the polar-coordinate contour line of the first contour.
In this embodiment, when obstacles are detected by the camera device installed on the vehicle body, the obstacles are first projected in the preset direction, and they are divided into first obstacles and second obstacles according to the heights represented by their projection heights. The first contour of each first obstacle in the preset direction is then identified, and texture edge detection is performed on each second obstacle to obtain its second contour; the target obstacle closest to the position of the vehicle body is then determined from the positions of the first contour and the second contour. Because the identified first contour depends little on the mounting angle of the camera device, vehicle body shaking does not interfere with the image processing, which ensures recognition accuracy; and because the second contour is generated by texture edge detection, low obstacles can be detected without reducing the ground-measurement accuracy. All kinds of obstacles in the transport environment can therefore be recognized comprehensively, with strong environmental compatibility.
进一步地,提出本申请地面障碍物检测方法第二实施例。Further, a second embodiment of the ground obstacle detection method of the present application is proposed.
参照图3,图3为本申请地面障碍物检测方法第二实施例的流程示意图。Referring to Fig. 3, Fig. 3 is a schematic flowchart of a second embodiment of a ground obstacle detection method according to the present application.
The difference between the second embodiment and the first embodiment of the ground obstacle detection method is that the step of determining, according to the first contour and the second contour, the target obstacle closest to the position of the vehicle body includes:
步骤41,将所述第一轮廓转换为第一极坐标,将所述第二轮廓转换为第二极坐标,并判断所述第一极坐标和第二极坐标是否位于预设角度范围内;Step 41: Convert the first contour to a first polar coordinate, convert the second contour to a second polar coordinate, and determine whether the first polar coordinate and the second polar coordinate are within a preset angle range;
Furthermore, in this embodiment, when the first contour and the second contour are converted into polar coordinates relative to the coordinate origin, the first contour is converted into first polar coordinates and the second contour into second polar coordinates. Since both contours consist of multiple pixel points, the resulting first and second polar coordinates are in essence polar coordinate sets composed of the polar coordinate values of those pixel points. The step of converting the first contour into the first polar coordinates includes:
步骤411,读取所述摄像装置的安装高度、安装角度、垂直视场角度、水平视场角度、有效像素行数和有效像素列数;Step 411: Read the installation height, the installation angle, the vertical field of view, the horizontal field of view, the number of effective pixel rows and the number of effective pixel columns of the camera device;
步骤412,将所述第一轮廓中的像素点均读取为测量点,并逐一针对各所述测量点执行以下步骤:Step 412: Read all the pixel points in the first contour as measurement points, and perform the following steps for each measurement point one by one:
步骤413,检测所述测量点到所述摄像装置之间的深度值,以及所述测量点的所在像素行数和所在像素列数;Step 413: Detect the depth value between the measurement point and the camera device, and the number of pixel rows and pixel columns of the measurement point;
步骤414,根据所述安装角度、垂直视场角度、所在像素行数、有效像素行数和所述深度值,确定所述测量点的极坐标模值;Step 414: Determine the polar coordinate modulus of the measurement point according to the installation angle, the vertical field of view angle, the number of pixel rows, the number of effective pixel rows, and the depth value;
步骤415,根据所述水平视场角度、安装高度、安装角度、垂直视场角度、所在像素列数、有效像素列数、所在像素行数和有效像素行数,确定所述测量点的极坐标角度;Step 415: Determine the polar coordinates of the measurement point according to the horizontal field of view angle, installation height, installation angle, vertical field of view angle, the number of pixel columns, the number of effective pixel columns, the number of pixel rows and the number of effective pixel rows. angle;
Step 416: Set the polar coordinate modulus and the polar coordinate angle as the polar coordinates of the measurement point, and after polar coordinates have been generated for all the measurement points, form the polar coordinates into the first polar coordinates.
Further, the installation parameters of the stereo camera are read; they include the installation height H, the installation angle θ, the vertical field-of-view angle ω_z, the horizontal field-of-view angle ω_h, the number of effective pixel rows L and the number of effective pixel columns C, where the number of effective pixel rows is the maximum imaging pixel count of the stereo camera in the Y-axis direction and the number of effective pixel columns is the maximum imaging pixel count in the X-axis direction. Each pixel point contained in the first contour is then read as a measurement point and processed one by one. During processing, the depth value D between the measurement point and the camera device, the pixel row number n and the pixel column number m of the measurement point are first detected; the installation angle θ, the vertical field-of-view angle ω_z, the pixel row number n and the number of effective pixel rows L are then substituted into formula (1) to obtain the deflection angle α of the row in which the pixel lies, where formula (1) is:
α = θ - (ω_z/2) + (ω_z*n/L)      (1);
After the deflection angle α of the pixel row has been obtained from formula (1), the deflection angle α and the depth value D are substituted into formula (2) to obtain the polar coordinate modulus r of the measurement point, where formula (2) is:
r = D*Cos(α)      (2).
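A short Python transcription of formulas (1) and (2); the function and variable names are chosen here for illustration and angles are assumed to be in radians.

```python
import math

def polar_modulus(D, theta, omega_z, n, L):
    """Deflection angle of the pixel row (1) and polar modulus (2) of one
    measurement point with depth value D, row number n and L effective rows."""
    alpha = theta - omega_z / 2 + omega_z * n / L   # formula (1)
    r = D * math.cos(alpha)                         # formula (2)
    return alpha, r
```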
Further, the absolute-value coordinates (|Xmax|, |Ymax|) of the farthest projection point imaged by the stereo camera and the absolute-value coordinates (|Xmin|, |Ymin|) of the nearest projection point are calculated. Specifically, the horizontal field-of-view angle ω_h, the installation height H, the installation angle θ and the vertical field-of-view angle ω_z are substituted into formula (3) to obtain |Xmax|; H, θ and ω_z are substituted into formula (4) to obtain |Ymax|; ω_h, H, θ and ω_z are substituted into formula (5) to obtain |Xmin|; and H, θ and ω_z are substituted into formula (6) to obtain |Ymin|. Formulas (3), (4), (5) and (6) are:
|Xmax| = Tan(0.5*ω_h)*H/Cos(θ-0.5*ω_z)      (3);
|Ymax| = H/Tan(θ-0.5*ω_z)                   (4);
|Xmin| = Tan(0.5*ω_h)*H/Cos(θ+0.5*ω_z)      (5);
|Ymin| = H/Tan(θ+0.5*ω_z)                   (6).
Furthermore, the absolute-value coordinates (|Xc|, |Yc|) of the measurement point are calculated: the pixel column number m, the number of effective pixel columns C, |Xmax| and |Xmin| are substituted into formula (7) to obtain |Xc|, and the pixel row number n, the number of effective pixel rows L, |Ymax| and |Ymin| are substituted into formula (8) to obtain |Yc|, where formulas (7) and (8) are:
|Xc|=m/C*(|Xmax|-|Xmin|)+|Xmin|       (7);|Xc|=m/C*(|Xmax|-|Xmin|)+|Xmin|(7);
|Yc|=n/L*(|Ymax|-|Ymin|)+|Ymin|       (8)。|Yc|=n/L*(|Ymax|-|Ymin|)+|Ymin| (8).
The absolute-value coordinates of the measurement point are then substituted into formula (9) to obtain the polar coordinate angle of the measurement point, where formula (9) is:
polar coordinate angle = Tan-1(|Yc|/|Xc|)           (9).
Understandably, the polar coordinate modulus and polar coordinate angle of a measurement point calculated by formulas (1) to (9) form the polar coordinates of that measurement point; after every measurement point has been passed through formulas (1) to (9) and its polar coordinates have been obtained, the polar coordinates together form the first polar coordinates, completing the conversion of the first contour into the first polar coordinates. It should be noted that the process of converting the second contour into the second polar coordinates is the same as the above process of converting the first contour into the first polar coordinates and is not repeated here.
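The angle computation of formulas (3) to (9) can be sketched as follows; names are illustrative, angles are assumed to be in radians, and the absolute values mirror formula (9).

```python
import math

def polar_angle(m, n, C, L, H, theta, omega_z, omega_h):
    """Polar coordinate angle of a measurement point in pixel column m and
    pixel row n, following formulas (3)-(9)."""
    x_max = math.tan(0.5 * omega_h) * H / math.cos(theta - 0.5 * omega_z)   # (3)
    y_max = H / math.tan(theta - 0.5 * omega_z)                             # (4)
    x_min = math.tan(0.5 * omega_h) * H / math.cos(theta + 0.5 * omega_z)   # (5)
    y_min = H / math.tan(theta + 0.5 * omega_z)                             # (6)
    xc = m / C * (x_max - x_min) + x_min                                    # (7)
    yc = n / L * (y_max - y_min) + y_min                                    # (8)
    return math.atan(abs(yc) / abs(xc))                                     # (9)
```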
步骤42,若位于预设角度范围内,则将所述第一极坐标和第二极坐标合并,生成极坐标集合;Step 42: If it is within a preset angle range, merge the first polar coordinate and the second polar coordinate to generate a polar coordinate set;
步骤43,根据所述极坐标集合,确定与所述车体所在位置距离最近的目标障碍物。Step 43: Determine the target obstacle that is closest to the position of the vehicle body according to the set of polar coordinates.
Considering that the obstacles encountered while the vehicle is driving are unpredictable, several obstacles may be located very close to one another; in this case the closely located obstacles can be merged and processed together. Specifically, a preset angle range representing positional proximity, for example 5 degrees, is set in advance; it is judged whether the first polar coordinates and the second polar coordinates both lie within this preset angle range, and if so, the first polar coordinates and the second polar coordinates are merged by imaging them on the same layer to form a polar coordinate set. The set contains every polar coordinate point of the first polar coordinates and every polar coordinate point of the second polar coordinates; the spurious points in it must be removed by denoising, and the denoised valid points merged, in order to determine the target obstacle closest to the position of the vehicle body. Specifically, the step of determining, according to the polar coordinate set, the target obstacle closest to the position of the vehicle body includes:
步骤431,对所述极坐标集合中的各元素依次进行中值滤波、去噪和均值滤波处理,生成处理结果;Step 431: Perform median filtering, denoising, and mean filtering on each element in the polar coordinate set in sequence to generate a processing result;
步骤432,对所述处理结果中的各元素进行合并,生成目标元素,并计算所述目标元素与所述摄像装置对应的第一角度和第一距离;Step 432: Combine the elements in the processing result to generate a target element, and calculate a first angle and a first distance corresponding to the target element and the camera device;
步骤433,根据所述第一角度和第一距离,确定与所述车体所在位置距离最近的目标障碍物。Step 433: Determine the target obstacle that is closest to the position of the vehicle body according to the first angle and the first distance.
Further, the polar coordinate points forming the polar coordinate set are taken as the elements of the set. Median filtering is first applied to the elements to remove salt-and-pepper noise points; a minimum distance from the coordinate origin is then set, and the elements whose distance from the origin is greater than this minimum value are removed; mean filtering is then applied to the processed elements to generate the processing result. The elements of the processing result are then merged: the polar coordinate points serving as elements are merged into a single polar coordinate point, which is the target element. The first angle and first distance of the target element relative to the coordinate origin are then calculated; the first distance and first angle represent the relative distance between the vehicle body and the merged group of obstacles. From the merged obstacle groups and the individually processed obstacles, together with their relative distances to the vehicle body, the target obstacle closest to the position of the vehicle body is determined.
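A minimal numpy/SciPy sketch of this filter-and-merge step, assuming the polar set is given as (modulus, angle) pairs; the kernel sizes, the range cutoff and the use of simple averaging as the merge rule are assumptions, since the description does not fix them.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d
from scipy.signal import medfilt

def nearest_target_from_set(polar_points, range_cutoff, ksize=5):
    """Median filter -> range-based denoising -> mean filter -> merge for the
    combined polar coordinate set; returns the first angle and first distance
    of the merged target element relative to the camera origin."""
    pts = np.asarray(polar_points, dtype=float)    # rows of (r, phi)
    r = medfilt(pts[:, 0], kernel_size=ksize)      # median filter removes salt-and-pepper points
    keep = r <= range_cutoff                       # drop elements farther from the origin than the cutoff
    r, phi = r[keep], pts[keep, 1]
    r = uniform_filter1d(r, size=ksize)            # mean filter smooths the surviving ranges
    return float(phi.mean()), float(r.mean())      # merged target element: (first angle, first distance)
```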
Furthermore, when several obstacles are located far apart from one another, each obstacle needs to be processed individually; specifically, after the step of judging whether the first polar coordinates and the second polar coordinates are within the preset angle range, the method includes:
Step 44: If the first polar coordinates and the second polar coordinates are not within the preset angle range, determine a second angle and a second distance between the first polar coordinates and the camera device, and a third angle and a third distance between the second polar coordinates and the camera device;
Step 45: Compare the element group formed by the second angle and the second distance with the element group formed by the third angle and the third distance to generate a comparison result, and determine, according to the comparison result, the target obstacle closest to the position of the vehicle body.
When it is determined that the first polar coordinates and the second polar coordinates are not within the preset angle range and each obstacle therefore needs to be processed individually, median filtering, denoising and mean filtering are applied separately to the elements of the first polar coordinates and to the elements of the second polar coordinates; the processed elements of the first polar coordinates are merged to generate a first element point, and the processed elements of the second polar coordinates are merged to generate a second element point. The angle and distance between the first element point and the coordinate origin, that is, the second angle and second distance between the first polar coordinates and the camera device, are then calculated, as are the angle and distance between the second element point and the coordinate origin, that is, the third angle and third distance between the second polar coordinates and the camera device.
Further, the second angle and second distance are formed into one element group and the third angle and third distance into another element group; the two element groups are compared to generate a comparison result that indicates how far the first polar coordinates and the second polar coordinates each are from the vehicle body, and the target obstacle closest to the position of the vehicle body is determined from this comparison result.
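A trivial sketch of this comparison, assuming each element group is represented as an (angle, distance) pair; the description does not fix the data layout.

```python
def nearer_obstacle(first_group, second_group):
    """Each group is an (angle, distance) pair relative to the camera origin;
    the group with the smaller distance corresponds to the target obstacle."""
    return first_group if first_group[1] <= second_group[1] else second_group
```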
In this embodiment, obstacles that are close together are merged and processed jointly, while obstacles that are far apart are processed individually; this avoids processing every obstacle separately, which would increase the amount of computation while contributing little to the obstacle-avoidance resolution, and improves detection efficiency while still detecting obstacles comprehensively. In addition, converting the contours into polar coordinates and then filtering, denoising and calculating improves the accuracy of the computed distance between each obstacle and the vehicle body, making the detection of the target obstacle more accurate.
进一步地,提出本申请地面障碍物检测方法第三实施例。Further, a third embodiment of the ground obstacle detection method of the present application is proposed.
参照图4,图4为本申请地面障碍物检测方法第三实施例的流程示意图。Referring to Fig. 4, Fig. 4 is a schematic flowchart of a third embodiment of a ground obstacle detection method according to the present application.
The difference between the third embodiment of the ground obstacle detection method and the first or second embodiment is that the step of projecting each of the obstacles in the preset direction to generate the projection height of each of the obstacles includes:
步骤S11,读取所述摄像装置的安装高度、安装角度、视场角度和有效像素行数,并将各所述障碍物作为待测量点,逐一针对各所述待测量点执行以下步骤:Step S11: Read the installation height, installation angle, field of view angle and number of effective pixel rows of the camera device, and use each obstacle as a point to be measured, and perform the following steps for each point to be measured one by one:
步骤S12,检测所述摄像装置与所述待测量点之间的测量深度值,以及所述待测量点的点所在像素行数;Step S12, detecting the measured depth value between the camera device and the point to be measured, and the number of pixel rows where the point of the point to be measured is located;
步骤S13,根据所述安装角度、视场角度、有效像素行数和点所在像素行数,确定与所述待测量点对应的像素所在行的偏角;Step S13: Determine the deflection angle of the pixel row corresponding to the point to be measured according to the installation angle, the field of view angle, the number of effective pixel rows, and the number of pixel rows where the point is located;
步骤S14,根据所述测量深度值和所述像素所在行的偏角,确定所述障碍物在预设方 向上的投影中间值;Step S14: Determine the projection intermediate value of the obstacle in the preset direction according to the measured depth value and the deflection angle of the row where the pixel is located;
步骤S15,根据所述安装高度和所述投影中间值,生成各所述障碍物的投影高度。Step S15: Generate the projection height of each obstacle according to the installation height and the projection intermediate value.
This embodiment projects each obstacle in the preset direction to obtain its projection height, so that the obstacles can be divided into first obstacles and second obstacles according to the projection heights. The installation height H, installation angle θ, field-of-view angle ω and number of effective pixel rows L of the stereo camera are first read; each obstacle is then taken as a point to be measured and processed one by one. During processing, the measured depth value D' between the point to be measured and the camera device and the pixel row number n' of the point are first detected; the installation angle θ, the field-of-view angle ω, the pixel row number n' and the number of effective pixel rows L are then substituted into formula (10) to obtain the deflection angle α' of the pixel row corresponding to the point to be measured, where formula (10) is:
α'=θ-(ω/2)+(ω*n'/L)            (10);α'=θ-(ω/2)+(ω*n'/L) (10);
After the deflection angle α' of the pixel row corresponding to the point to be measured has been obtained from formula (10), the deflection angle α' and the measured depth value D' are substituted into formula (11) to obtain the intermediate projection value h_c of the obstacle in the preset direction, where formula (11) is:
h_c = D'*Sin(α')                   (11);
The difference between the installation height H and the intermediate projection value h_c is then taken, and the result of the difference is the projection height h_z of the obstacle. After every point to be measured has been processed and calculated as above, the calculation results are the projection heights of the obstacles, and the obstacles are then divided into first obstacles and second obstacles according to these projection heights.
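A short Python transcription of formulas (10) and (11) plus the final difference; names are illustrative and angles are assumed to be in radians.

```python
import math

def projection_height(D_meas, H, theta, omega, n, L):
    """Projection height h_z of an obstacle point from its measured depth D',
    pixel row number n', installation height H, installation angle theta and
    field-of-view angle omega."""
    alpha = theta - omega / 2 + omega * n / L   # formula (10)
    h_c = D_meas * math.sin(alpha)              # formula (11): intermediate projection value
    return H - h_c                              # h_z = H - h_c
```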
In this embodiment, the projection height of each obstacle in the preset direction is calculated to characterize how tall the obstacle is, and the obstacles are then divided into first obstacles and second obstacles, so that the contours of different types of obstacles can be obtained in different ways, making obstacle detection more comprehensive and accurate.
进一步地,提出本申请地面障碍物检测方法第四实施例。Further, a fourth embodiment of the ground obstacle detection method of the present application is proposed.
The difference between the fourth embodiment of the ground obstacle detection method and the first, second or third embodiment is that the step of performing texture edge detection on the second obstacle to generate the second contour of the second obstacle includes:
步骤S31,基于所述摄像装置获取与所述第二障碍物对应的成像图像,并调用第一预设算法对所述成像图像进行填平处理;Step S31: Acquire an imaging image corresponding to the second obstacle based on the camera device, and call a first preset algorithm to perform fill-in processing on the imaging image;
步骤S32,调用第二预设算法,提取经填平处理的所述成像图像的初始轮廓,并根据所述初始轮廓,确定所述第二障碍物的第二轮廓。Step S32: Invoking a second preset algorithm, extracting the initial contour of the imaged image after the leveling process, and determining the second contour of the second obstacle according to the initial contour.
In this embodiment, texture edge detection is performed on the second obstacle to obtain its second contour. A first preset algorithm and a second preset algorithm are set in advance: the first preset algorithm is preferably a data-filling method and the second preset algorithm is a local Fourier transform, and the imaged image of the second obstacle is processed by these two algorithms. Specifically, the second obstacle is first photographed with the stereo camera to obtain an imaged image. Because of factors such as the stereo camera itself and ambient light, some pixels in the imaged image may contain no data; such empty pixels would cause the calculation of the second preset algorithm to fail and therefore need to be handled by the first preset algorithm. The data-filling method used as the first preset algorithm includes dilation and median filtering: the dilation operation expands neighbourhoods in the imaged image and fills in the hole data, while median filtering removes spurious points from the imaged image. The second preset algorithm is then called to process the filled imaged image: taking a 16*16 pixel identification area as the smallest identification unit, the imaged image is divided into blocks so as to extract the target pixel identification areas in which the contour edges of the imaged image lie. For each target pixel identification area, a Fourier transform is performed using formulas (12) and (13), the complex values are converted into magnitude values using formula (14), and the magnitude values are switched to a logarithmic scale using formula (15) so that the image is scaled logarithmically. Formulas (12), (13), (14) and (15) are:
F(k,l) = Σ_{i=0…N-1} Σ_{j=0…N-1} f(i,j)*e^(-i2π(k*i/N + l*j/N))      (12);
e^(ix) = cos x + i sin x                                              (13);
M = sqrt(Re(F(k,l))^2 + Im(F(k,l))^2)                                 (14);
M1 = log(1+M)                                                         (15);
where F(k,l) is the value of the target pixel identification area of the imaged image in the frequency domain, f(i,j) is its value in the spatial domain, N is the side length of the pixel identification area, and M is the magnitude of F(k,l); formulas (12) to (15) are commonly used to implement the Fourier transform and are not described further here.
Further, the magnitude image is cropped and redistributed, and the quadrants of the result are rearranged so that the coordinate origin corresponds to the centre of the image; normalization is then performed to handle values outside the display range so that classification can be judged more easily. The identification areas containing edges are then classified and the contour is regenerated, so that the initial contour of the second obstacle is extracted from the ground texture.
Furthermore, colour-block dilation is applied to the initially extracted contour, edges are extracted from the original imaged image according to the result of this processing, and the contour is then located precisely within the extracted edges to obtain the second contour of the second obstacle.
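The block-wise local Fourier analysis can be sketched as follows, assuming an 8-bit single-channel (e.g. remapped depth) image. The per-block high-frequency statistic and the deviation threshold are assumptions: the description does not specify the exact rule for classifying identification areas that contain edges, so this is only a placeholder classifier.

```python
import cv2
import numpy as np

def texture_edge_blocks(image, block=16, deviation=0.15):
    """Flag 16x16 identification areas whose log-magnitude spectra deviate
    from the dominant ground texture; their union approximates the region of
    the short obstacle's initial contour."""
    # First preset algorithm (data filling): dilation closes holes, median filtering removes speckle.
    filled = cv2.medianBlur(cv2.dilate(image, np.ones((3, 3), np.uint8)), 5)
    h, w = filled.shape[:2]
    rows, cols = h // block, w // block
    stats = np.zeros((rows, cols))
    for by in range(rows):
        for bx in range(cols):
            tile = filled[by*block:(by+1)*block, bx*block:(bx+1)*block].astype(np.float32)
            spec = np.fft.fftshift(np.fft.fft2(tile))     # (12)-(13): 2-D DFT, origin recentred
            m1 = np.log1p(np.abs(spec))                   # (14)-(15): magnitude on a log scale
            m1 /= m1.max() + 1e-6                         # normalize to the display range
            c = block // 2
            low = m1[c-2:c+2, c-2:c+2].sum()              # low-frequency energy near the centre
            stats[by, bx] = 1.0 - low / (m1.sum() + 1e-6) # high-frequency energy fraction per block
    ground = np.median(stats)                             # statistic of the dominant ground texture
    return (np.abs(stats - ground) > deviation).astype(np.uint8) * 255
```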
In this embodiment, the contour of the low second obstacle is extracted on the basis of the clear difference between it and the ground texture, and is then recognized and detected. Tall obstacles and short obstacles are handled in different ways, so that short obstacles are detected at the same time as tall obstacles are recognized, ensuring that the various obstacles in the vehicle's driving environment are detected comprehensively and accurately.
It should be noted that a person of ordinary skill in the art can understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc or the like.
In addition, an embodiment of the present application further provides a computer-readable storage medium on which a ground obstacle detection program is stored; when the ground obstacle detection program is executed by a processor, the steps of the ground obstacle detection method described above are implemented.
本申请计算机可读存储介质具体实施方式与上述地面障碍物检测方法各实施例基本相同,在此不再赘述。The specific implementation of the computer-readable storage medium of the present application is basically the same as the foregoing embodiments of the ground obstacle detection method, and will not be repeated here.
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。It should be noted that in this article, the terms "include", "include" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements not only includes those elements, It also includes other elements not explicitly listed, or elements inherent to the process, method, article, or device. If there are no more restrictions, the element defined by the sentence "including a..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。The serial numbers of the foregoing embodiments of the present application are only for description, and do not represent the advantages and disadvantages of the embodiments.
From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, magnetic disk or optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device or the like) to execute the methods described in the embodiments of this application.
The above are only preferred embodiments of this application and do not thereby limit the patent scope of this application; any equivalent structural or process transformation made using the contents of the description and drawings of this application, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of this application.

Claims (15)

  1. 一种地面障碍物检测方法,其中,所述地面障碍物检测方法包括以下步骤:A ground obstacle detection method, wherein the ground obstacle detection method includes the following steps:
    当基于车体上所安装的摄像装置检测到障碍物时,对各所述障碍物进行预设方向上的投影,生成各所述障碍物的投影高度;When an obstacle is detected based on the camera device installed on the vehicle body, projection of each of the obstacles in a preset direction is performed to generate the projection height of each of the obstacles;
    根据各所述投影高度,将各所述障碍物划分为第一障碍物和第二障碍物,并识别所述第一障碍物在预设方向的第一轮廓;Dividing each of the obstacles into a first obstacle and a second obstacle according to each of the projection heights, and identifying a first contour of the first obstacle in a preset direction;
    对所述第二障碍物进行纹理边缘检测,生成所述第二障碍物的第二轮廓;Performing texture edge detection on the second obstacle to generate a second contour of the second obstacle;
    根据所述第一轮廓和所述第二轮廓,确定与所述车体所在位置距离最近的目标障碍物。According to the first contour and the second contour, the target obstacle closest to the position of the vehicle body is determined.
  2. 如权利要求1所述的地面障碍物检测方法,其中,所述根据所述第一轮廓和所述第二轮廓,确定与所述车体所在位置距离最近的目标障碍物的步骤包括:The ground obstacle detection method according to claim 1, wherein the step of determining the target obstacle closest to the position of the vehicle body according to the first contour and the second contour comprises:
    将所述第一轮廓转换为第一极坐标,将所述第二轮廓转换为第二极坐标,并判断所述第一极坐标和第二极坐标是否位于预设角度范围内;Converting the first contour to first polar coordinates, converting the second contour to second polar coordinates, and determining whether the first polar coordinates and the second polar coordinates are within a preset angle range;
    若位于预设角度范围内,则将所述第一极坐标和第二极坐标合并,生成极坐标集合;If it is within the preset angle range, combining the first polar coordinate and the second polar coordinate to generate a polar coordinate set;
    根据所述极坐标集合,确定与所述车体所在位置距离最近的目标障碍物。According to the set of polar coordinates, a target obstacle that is closest to the position of the vehicle body is determined.
  3. 如权利要求2所述的地面障碍物检测方法,其中,所述将所述第一轮廓转换为第一极坐标的步骤包括:3. The ground obstacle detection method according to claim 2, wherein the step of converting the first contour to the first polar coordinate comprises:
    读取所述摄像装置的安装高度、安装角度、垂直视场角度、水平视场角度、有效像素行数和有效像素列数;Reading the installation height, installation angle, vertical field of view, horizontal field of view, number of effective pixel rows and effective pixel columns of the camera device;
    将所述第一轮廓中的像素点均读取为测量点,并逐一针对各所述测量点执行以下步骤:Read all the pixels in the first contour as measurement points, and perform the following steps for each measurement point one by one:
    检测所述测量点到所述摄像装置之间的深度值,以及所述测量点的所在像素行数和所在像素列数;Detecting the depth value between the measurement point and the camera device, and the number of pixel rows and pixel columns of the measurement point;
    根据所述安装角度、垂直视场角度、所在像素行数、有效像素行数和所述深度值,确定所述测量点的极坐标模值;Determine the polar coordinate modulus of the measurement point according to the installation angle, the vertical field of view angle, the number of pixel rows, the number of effective pixel rows, and the depth value;
    根据所述水平视场角度、安装高度、安装角度、垂直视场角度、所在像素列数、有效像素列数、所在像素行数和有效像素行数,确定所述测量点的极坐标角度;Determine the polar coordinate angle of the measurement point according to the horizontal field of view angle, the installation height, the installation angle, the vertical field of view angle, the number of pixel columns, the number of effective pixel columns, the number of pixel rows and the number of effective pixel rows;
    将所述极坐标模值和所述极坐标角度设为所述测量点的极坐标,并在各所述测量点均生成极坐标后,将各所述极坐标形成为第一极坐标。The polar coordinate modulus value and the polar coordinate angle are set as the polar coordinates of the measuring point, and after the polar coordinates are generated for each measuring point, each of the polar coordinates is formed as the first polar coordinate.
  4. 如权利要求2所述的地面障碍物检测方法,其中,所述根据所述极坐标集合,确定与所述车体所在位置距离最近的目标障碍物的步骤包括:The method for detecting ground obstacles according to claim 2, wherein the step of determining the target obstacle closest to the position of the vehicle body according to the set of polar coordinates comprises:
    对所述极坐标集合中的各元素依次进行中值滤波、去噪和均值滤波处理,生成处理结果;Performing median filtering, denoising, and mean filtering on each element in the polar coordinate set in sequence to generate a processing result;
    对所述处理结果中的各元素进行合并,生成目标元素,并计算所述目标元素与所述摄像装置对应的第一角度和第一距离;Combine the elements in the processing result to generate a target element, and calculate a first angle and a first distance corresponding to the target element and the camera device;
    根据所述第一角度和第一距离,确定与所述车体所在位置距离最近的目标障碍物。According to the first angle and the first distance, the target obstacle closest to the position of the vehicle body is determined.
  5. 如权利要求2所述的地面障碍物检测方法,其中,所述判断所述第一极坐标和第二极坐标是否位于预设角度范围内的步骤之后包括:3. The ground obstacle detection method according to claim 2, wherein after the step of judging whether the first polar coordinate and the second polar coordinate are within a preset angle range comprises:
    If the first polar coordinates and the second polar coordinates are not within the preset angle range, determining a second angle and a second distance between the first polar coordinates and the camera device, and a third angle and a third distance between the second polar coordinates and the camera device;
    Comparing the element group formed by the second angle and the second distance with the element group formed by the third angle and the third distance to generate a comparison result, and determining, according to the comparison result, the target obstacle closest to the position of the vehicle body.
  6. The ground obstacle detection method according to claim 1, wherein the step of projecting each of the obstacles in the preset direction to generate the projection height of each of the obstacles comprises:
    reading the installation height, installation angle, field of view angle, and number of effective pixel rows of the camera device, taking each of the obstacles as a point to be measured, and performing the following steps for each point to be measured one by one:
    detecting the measured depth value between the camera device and the point to be measured, and the pixel row number at which the point to be measured is located;
    determining the deflection angle of the pixel row corresponding to the point to be measured according to the installation angle, the field of view angle, the number of effective pixel rows, and the pixel row number;
    determining the projection intermediate value of the obstacle in the preset direction according to the measured depth value and the deflection angle of the pixel row;
    generating the projection height of each of the obstacles according to the installation height and the projection intermediate value.
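A hedged sketch of the projection-height computation follows. The linear row-to-angle mapping, the interpretation of the "projection intermediate value" as the vertical drop of the depth ray, and the final subtraction from the installation height are assumptions consistent with, but not dictated by, the claim.

```python
import math

def projection_height(depth, row, install_height, install_angle_deg,
                      vfov_deg, effective_rows):
    """Estimate the height of an obstacle point above the ground from a single
    depth-camera measurement (illustrative geometry only)."""
    # deflection angle of the pixel row relative to horizontal
    row_offset_deg = (row - effective_rows / 2.0) / effective_rows * vfov_deg
    deflection = math.radians(install_angle_deg + row_offset_deg)
    # vertical component of the measured depth ("projection intermediate value")
    drop = depth * math.sin(deflection)
    # height of the measured point above the ground plane
    return install_height - drop
```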
  7. The ground obstacle detection method according to claim 1, wherein the step of identifying the first contour of the first obstacle in the preset direction comprises:
    reading the projection image of the first obstacle in the preset direction, and sequentially performing mean filtering, edge extraction, contour searching, and polyline fitting on the projection image to obtain the first contour.
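This processing chain maps naturally onto standard OpenCV calls; the sketch below assumes OpenCV 4.x, and the kernel size, Canny thresholds, and polyline tolerance are placeholder values rather than parameters disclosed by the application.

```python
import cv2

def extract_first_contour(projection_img):
    """Mean filtering -> edge extraction -> contour search -> polyline fitting,
    the chain named in the claim, sketched with OpenCV 4.x."""
    blurred = cv2.blur(projection_img, (5, 5))                    # mean filter
    edges = cv2.Canny(blurred, 50, 150)                           # edge extraction
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)       # contour search
    # Douglas-Peucker polyline fitting of every found contour
    return [cv2.approxPolyDP(c, 3.0, True) for c in contours]
```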
  8. The ground obstacle detection method according to claim 1, wherein the step of performing texture edge detection on the second obstacle to generate the second contour of the second obstacle comprises:
    acquiring, based on the camera device, an imaging image corresponding to the second obstacle, and invoking a first preset algorithm to perform fill-in processing on the imaging image;
    invoking a second preset algorithm to extract the initial contour of the filled imaging image, and determining the second contour of the second obstacle according to the initial contour.
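Because the "first preset algorithm" and "second preset algorithm" are not identified in the claim, the sketch below substitutes morphological closing for the fill-in step and Canny edge detection plus contour extraction for the initial-contour step, purely as an illustration.

```python
import cv2
import numpy as np

def extract_second_contour(image):
    """Texture-edge sketch for low obstacles; the concrete algorithms are
    assumptions standing in for the unspecified "preset algorithms"."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # fill-in processing: close small holes and gaps in the texture
    kernel = np.ones((5, 5), np.uint8)
    filled = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)
    # initial contour: texture edges of the filled image
    edges = cv2.Canny(filled, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # keep the largest contour as the second contour of the obstacle
    return max(contours, key=cv2.contourArea) if contours else None
```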
  9. The ground obstacle detection method according to claim 1, wherein the step of dividing each of the obstacles into the first obstacle and the second obstacle according to each of the projection heights comprises:
    comparing the projection height of each of the obstacles with the projection threshold one by one, and determining the target projection heights that are greater than the projection threshold;
    dividing each of the obstacles into the first obstacle and the second obstacle according to the target projection heights.
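A minimal sketch of the height-threshold split; which side of the threshold is labelled "first" versus "second" obstacle, and the dictionary key used for the projection height, are assumptions for illustration.

```python
def split_by_height(obstacles, projection_threshold):
    """Partition detected obstacles by projected height.

    Assumption: obstacles whose projection height exceeds the threshold are
    treated as first obstacles (top-view contour path) and the rest as second
    obstacles (texture edge detection path).
    """
    first = [o for o in obstacles if o["projection_height"] > projection_threshold]
    second = [o for o in obstacles if o["projection_height"] <= projection_threshold]
    return first, second
```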
  10. A ground obstacle detection device, wherein the ground obstacle detection device comprises a memory, a processor, and a ground obstacle detection program stored on the memory and executable on the processor, and when the ground obstacle detection program is executed by the processor, the following steps are implemented:
    when an obstacle is detected based on the camera device installed on the vehicle body, projecting each of the obstacles in a preset direction to generate the projection height of each of the obstacles;
    dividing each of the obstacles into a first obstacle and a second obstacle according to each of the projection heights, and identifying a first contour of the first obstacle in the preset direction;
    performing texture edge detection on the second obstacle to generate a second contour of the second obstacle;
    determining, according to the first contour and the second contour, the target obstacle closest to the position of the vehicle body.
  11. The ground obstacle detection device according to claim 10, wherein the step of determining, according to the first contour and the second contour, the target obstacle closest to the position of the vehicle body comprises:
    converting the first contour into first polar coordinates, converting the second contour into second polar coordinates, and determining whether the first polar coordinates and the second polar coordinates are within a preset angle range;
    if they are within the preset angle range, combining the first polar coordinates and the second polar coordinates to generate a polar coordinate set;
    determining, according to the polar coordinate set, the target obstacle closest to the position of the vehicle body.
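One way to read the angle-range test and the merge step is sketched below, under the assumption that "within the preset angle range" means the bearing intervals spanned by the two contours overlap (optionally within a tolerance); this interpretation and the function name are illustrative.

```python
def merge_if_overlapping(first_polar, second_polar, tolerance_deg=0.0):
    """Merge the two polar contours when their bearing ranges overlap.

    Points are (modulus, angle) pairs; the overlap test is an assumed reading
    of the "preset angle range" condition.
    """
    a_lo = min(a for _, a in first_polar)
    a_hi = max(a for _, a in first_polar)
    b_lo = min(a for _, a in second_polar)
    b_hi = max(a for _, a in second_polar)
    if a_lo - tolerance_deg <= b_hi and b_lo - tolerance_deg <= a_hi:
        # angle ranges overlap: return one merged set sorted by bearing
        return sorted(first_polar + second_polar, key=lambda p: p[1])
    return None   # disjoint: fall back to the group comparison of claim 14
```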
  12. The ground obstacle detection device according to claim 11, wherein the step of converting the first contour into the first polar coordinates comprises:
    reading the installation height, installation angle, vertical field of view angle, horizontal field of view angle, number of effective pixel rows, and number of effective pixel columns of the camera device;
    reading all the pixel points in the first contour as measurement points, and performing the following steps for each measurement point one by one:
    detecting the depth value between the measurement point and the camera device, and the pixel row number and pixel column number of the measurement point;
    determining the polar coordinate modulus of the measurement point according to the installation angle, the vertical field of view angle, the pixel row number, the number of effective pixel rows, and the depth value;
    determining the polar coordinate angle of the measurement point according to the horizontal field of view angle, the installation height, the installation angle, the vertical field of view angle, the pixel column number, the number of effective pixel columns, the pixel row number, and the number of effective pixel rows;
    setting the polar coordinate modulus and the polar coordinate angle as the polar coordinates of the measurement point, and, after polar coordinates have been generated for all the measurement points, forming the polar coordinates into the first polar coordinates.
  13. The ground obstacle detection device according to claim 11, wherein the step of determining, according to the polar coordinate set, the target obstacle closest to the position of the vehicle body comprises:
    performing median filtering, denoising, and mean filtering on each element in the polar coordinate set in sequence to generate a processing result;
    merging the elements in the processing result to generate a target element, and calculating a first angle and a first distance between the target element and the camera device;
    determining, according to the first angle and the first distance, the target obstacle closest to the position of the vehicle body.
  14. The ground obstacle detection device according to claim 11, wherein, after the step of determining whether the first polar coordinates and the second polar coordinates are within the preset angle range, the following steps are further implemented:
    if the first polar coordinates and the second polar coordinates are not within the preset angle range, determining a second angle and a second distance between the first polar coordinates and the camera device, and a third angle and a third distance between the second polar coordinates and the camera device;
    comparing the element group formed by the second angle and the second distance with the element group formed by the third angle and the third distance to generate a comparison result, and determining, according to the comparison result, the target obstacle closest to the position of the vehicle body.
  15. A computer-readable storage medium, wherein a ground obstacle detection program is stored on the computer-readable storage medium, and when the ground obstacle detection program is executed by a processor, the following steps are implemented:
    when an obstacle is detected based on the camera device installed on the vehicle body, projecting each of the obstacles in a preset direction to generate the projection height of each of the obstacles;
    dividing each of the obstacles into a first obstacle and a second obstacle according to each of the projection heights, and identifying a first contour of the first obstacle in the preset direction;
    performing texture edge detection on the second obstacle to generate a second contour of the second obstacle;
    determining, according to the first contour and the second contour, the target obstacle closest to the position of the vehicle body.
PCT/CN2020/112132 2019-11-12 2020-08-28 Ground obstacle detection method and device, and computer-readable storage medium WO2021093418A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911103707.4A CN110826512B (en) 2019-11-12 2019-11-12 Ground obstacle detection method, ground obstacle detection device, and computer-readable storage medium
CN201911103707.4 2019-11-12

Publications (1)

Publication Number Publication Date
WO2021093418A1 (en)

Family

ID=69554384

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/112132 WO2021093418A1 (en) 2019-11-12 2020-08-28 Ground obstacle detection method and device, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN110826512B (en)
WO (1) WO2021093418A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110826512B (en) * 2019-11-12 2022-03-08 深圳创维数字技术有限公司 Ground obstacle detection method, ground obstacle detection device, and computer-readable storage medium
CN112364693B (en) * 2020-10-12 2024-04-16 星火科技技术(深圳)有限责任公司 Binocular vision-based obstacle recognition method, device, equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016047890A1 (en) * 2014-09-26 2016-03-31 숭실대학교산학협력단 Walking assistance method and system, and recording medium for performing same
CN104375505B (en) * 2014-10-08 2017-02-15 北京联合大学 Robot automatic road finding method based on laser ranging
CN105989601B (en) * 2015-12-30 2021-02-05 安徽农业大学 Agricultural AGV corn inter-row navigation datum line extraction method based on machine vision
CN108230392A (en) * 2018-01-23 2018-06-29 北京易智能科技有限公司 A kind of dysopia analyte detection false-alarm elimination method based on IMU
CN110208819A (en) * 2019-05-14 2019-09-06 江苏大学 A kind of processing method of multiple barrier three-dimensional laser radar data

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030114964A1 (en) * 2001-12-19 2003-06-19 Ford Global Technologies, Inc. Simple classification scheme for vehicle/pole/pedestrian detection
US20050231341A1 (en) * 2004-04-02 2005-10-20 Denso Corporation Vehicle periphery monitoring system
CN102508246A (en) * 2011-10-13 2012-06-20 吉林大学 Method for detecting and tracking obstacles in front of vehicle
CN104182756A (en) * 2014-09-05 2014-12-03 大连理工大学 Method for detecting barriers in front of vehicles on basis of monocular vision
DE102015212581A1 (en) * 2015-07-06 2017-01-12 Robert Bosch Gmbh Driver assistance and driver assistance system
CN108153301A (en) * 2017-12-07 2018-06-12 吴静 One kind is based on polar intelligent barrier avoiding system
CN108647646A (en) * 2018-05-11 2018-10-12 北京理工大学 The optimizing detection method and device of low obstructions based on low harness radar
CN108805906A (en) * 2018-05-25 2018-11-13 哈尔滨工业大学 A kind of moving obstacle detection and localization method based on depth map
CN109308448A (en) * 2018-07-29 2019-02-05 国网上海市电力公司 A method of it prevents from becoming distribution maloperation using image processing techniques
CN109410234A (en) * 2018-10-12 2019-03-01 南京理工大学 A kind of control method and control system based on binocular vision avoidance
CN109828267A (en) * 2019-02-25 2019-05-31 国电南瑞科技股份有限公司 The Intelligent Mobile Robot detection of obstacles and distance measuring method of Case-based Reasoning segmentation and depth camera
CN110826512A (en) * 2019-11-12 2020-02-21 深圳创维数字技术有限公司 Ground obstacle detection method, ground obstacle detection device, and computer-readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YU SHIYOU: "Research on Mobile Robot Development and Simultaneous Location and Map Creation Based on Particle Filter", BASIC SCIENCES, CHINESE MASTER’S THESES FULL-TEXT DATABASE, 31 May 2008 (2008-05-31), XP055812620 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115861975A (en) * 2023-02-28 2023-03-28 杭州枕石智能科技有限公司 Obstacle vehicle pose estimation method and device
CN116147567A (en) * 2023-04-20 2023-05-23 高唐县空间勘察规划有限公司 Homeland mapping method based on multi-metadata fusion
CN116147567B (en) * 2023-04-20 2023-07-21 高唐县空间勘察规划有限公司 Homeland mapping method based on multi-metadata fusion
CN116704473A (en) * 2023-05-24 2023-09-05 禾多科技(北京)有限公司 Obstacle information detection method, obstacle information detection device, electronic device, and computer-readable medium
CN116704473B (en) * 2023-05-24 2024-03-08 禾多科技(北京)有限公司 Obstacle information detection method, obstacle information detection device, electronic device, and computer-readable medium

Also Published As

Publication number Publication date
CN110826512B (en) 2022-03-08
CN110826512A (en) 2020-02-21

Similar Documents

Publication Publication Date Title
WO2021093418A1 (en) Ground obstacle detection method and device, and computer-readable storage medium
CN110148185B (en) Method and device for determining coordinate system conversion parameters of imaging equipment and electronic equipment
CN110146869B (en) Method and device for determining coordinate system conversion parameters, electronic equipment and storage medium
JP6825569B2 (en) Signal processor, signal processing method, and program
CN107703528B (en) Visual positioning method and system combined with low-precision GPS in automatic driving
JP3868876B2 (en) Obstacle detection apparatus and method
WO2020124988A1 (en) Vision-based parking space detection method and device
Kang et al. Automatic targetless camera–lidar calibration by aligning edge with gaussian mixture model
JP3367170B2 (en) Obstacle detection device
US10331961B2 (en) Detecting device, detecting method, and program
US20160019683A1 (en) Object detection method and device
WO2017138245A1 (en) Image processing device, object recognition device, device control system, and image processing method and program
US20220277478A1 (en) Positioning Method and Apparatus
JP2011511281A (en) Map matching method with objects detected by sensors
CN112598922B (en) Parking space detection method, device, equipment and storage medium
CN111178295A (en) Parking space detection and model training method and device, vehicle, equipment and storage medium
US20210326612A1 (en) Vehicle detection method and device
US20150288878A1 (en) Camera modeling system
CN114118252A (en) Vehicle detection method and detection device based on sensor multivariate information fusion
CN112261390B (en) Vehicle-mounted camera equipment and image optimization device and method thereof
JP5501084B2 (en) Planar area detection apparatus and stereo camera system
RU2667026C1 (en) Bench detection device and a bench detection method
CN110852278B (en) Ground identification line recognition method, ground identification line recognition equipment and computer-readable storage medium
CN110488320B (en) Method for detecting vehicle distance by using stereoscopic vision
CN109784315B (en) Tracking detection method, device and system for 3D obstacle and computer storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20886573

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20886573

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 02.11.2022)
