CN112528771A - Obstacle detection method, obstacle detection device, electronic device, and storage medium - Google Patents


Info

Publication number
CN112528771A
CN112528771A (application CN202011359697.3A)
Authority
CN
China
Prior art keywords
obstacle
radar
visual
barrier
point cloud
Prior art date
Legal status
Pending
Application number
CN202011359697.3A
Other languages
Chinese (zh)
Inventor
陈海波
许皓
Current Assignee
Deep Blue Technology Shanghai Co Ltd
Original Assignee
Deep Blue Technology Shanghai Co Ltd
Priority date
Filing date
Publication date
Application filed by Deep Blue Technology Shanghai Co Ltd
Priority to CN202011359697.3A
Publication of CN112528771A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiment of the application relates to the technical field of data processing, and provides an obstacle detection method and device, an electronic device, and a storage medium. The obstacle detection method includes: determining the area images acquired by each camera in the vehicle-mounted surround-view camera system and the surround-view point cloud data acquired by the vehicle-mounted lidar; projecting the point cloud data of each radar obstacle in the surround-view point cloud data to the corresponding camera coordinate system to obtain the visual position of each radar obstacle, where the camera coordinate system corresponding to a radar obstacle is the camera coordinate system of the camera whose shooting area coincides with the spatial area where the radar obstacle is located; and determining a surround-view obstacle detection result based on the visual position of each radar obstacle and the visual position and obstacle type of the visual obstacles in each area image. The obstacle detection method and device, electronic device, and storage medium provided by the embodiments of the application guarantee 360-degree omnidirectional obstacle detection.

Description

Obstacle detection method, obstacle detection device, electronic device, and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method and an apparatus for detecting an obstacle, an electronic device, and a storage medium.
Background
Lidar is an important component of autonomous driving technology, and the point cloud data obtained by lidar scanning can be used to perceive obstacles. Because obstacle detection based on point cloud data generally yields only the orientation information of an obstacle, current schemes fuse a single lidar with monocular or binocular vision to perform fine-grained detection of obstacles in front of the vehicle.
However, the above schemes are limited by the field of view of the vision camera and cannot provide 360-degree omnidirectional obstacle detection for the vehicle.
Disclosure of Invention
The application provides an obstacle detection method, an obstacle detection device, an electronic device, and a storage medium, so as to provide an obstacle detection scheme that realizes omnidirectional obstacle detection.
The application provides an obstacle detection method, comprising:
determining the area images acquired by each camera in the vehicle-mounted surround-view camera system and the surround-view point cloud data acquired by the vehicle-mounted lidar;
projecting the point cloud data of each radar obstacle in the surround-view point cloud data to a corresponding camera coordinate system to obtain the visual position of each radar obstacle, where the camera coordinate system corresponding to a radar obstacle is the camera coordinate system of the camera whose shooting area coincides with the spatial area where the radar obstacle is located;
and determining a surround-view obstacle detection result based on the visual position of each radar obstacle and the visual position and obstacle type of the visual obstacles in each area image.
According to the obstacle detection method provided by the application, determining the surround-view obstacle detection result based on the visual position of each radar obstacle and the visual position and obstacle type of the visual obstacles in each area image includes:
determining a visual matching result of each radar obstacle based on the visual position of the radar obstacle and the visual positions of the visual obstacles in the area image belonging to the same camera coordinate system as the radar obstacle;
and determining the surround-view obstacle detection result based on the visual matching result of each radar obstacle and the visual position and obstacle type of the visual obstacles in each area image.
According to the obstacle detection method provided by the application, determining the visual matching result of each radar obstacle based on the visual position of the radar obstacle and the visual positions of the visual obstacles in the area image belonging to the same camera coordinate system as the radar obstacle includes:
if the overlap rate between the visual positions of a radar obstacle and a visual obstacle in the same camera coordinate system is greater than a preset overlap rate threshold, determining that the visual matching result of the radar obstacle is that visual obstacle;
otherwise, determining that the visual matching result of the radar obstacle is empty.
According to the obstacle detection method provided by the application, determining the surround-view obstacle detection result based on the visual matching result of each radar obstacle and the visual position and obstacle type of the visual obstacles in each area image includes:
if the visual matching result of a radar obstacle is empty, placing the radar obstacle information of that radar obstacle into the surround-view obstacle detection result;
otherwise, fusing the radar obstacle information of the radar obstacle with the visual obstacle information of the matched visual obstacle to obtain fused obstacle information, and placing the fused obstacle information and the obstacle type of the matched visual obstacle into the surround-view obstacle detection result.
According to the obstacle detection method provided by the application, the camera coordinate system corresponding to a radar obstacle is determined based on the following steps:
determining the spatial area where each radar obstacle is located based on the spatial azimuth angle corresponding to the point cloud data of the radar obstacle;
and determining the camera coordinate system of the camera corresponding to each radar obstacle based on the coincidence relationship between the shooting area of each camera and each spatial area, and on the spatial area where the radar obstacle is located.
According to the obstacle detection method provided by the application, the point cloud data of each radar obstacle in the surround-view point cloud data is determined based on the following steps:
deleting, from the surround-view point cloud data, the points whose distance to the vehicle-mounted lidar exceeds a preset distance threshold;
dividing the surround-view point cloud data into area point cloud data of several spatial areas based on the spatial azimuth angle;
and performing obstacle detection on each area point cloud separately to obtain the point cloud data of each radar obstacle in each area point cloud.
According to the obstacle detection method provided by the application, the visual position and obstacle type of the visual obstacles in each area image are determined based on the following steps:
inputting each area image into a visual obstacle detection model to obtain the visual position and obstacle type of the visual obstacles in each area image output by the model;
the visual obstacle detection model is trained based on sample area images and the visual positions and obstacle types of sample visual obstacles in those images.
The application further provides an obstacle detection device, including:
an acquisition unit, configured to determine the area images acquired by each camera in the vehicle-mounted surround-view camera system and the surround-view point cloud data acquired by the vehicle-mounted lidar;
a projection unit, configured to project the point cloud data of each radar obstacle in the surround-view point cloud data to a corresponding camera coordinate system to obtain the visual position of each radar obstacle, where the camera coordinate system corresponding to a radar obstacle is the camera coordinate system of the camera whose shooting area coincides with the spatial area where the radar obstacle is located;
and an integration unit, configured to determine a surround-view obstacle detection result based on the visual position of each radar obstacle and the visual position and obstacle type of the visual obstacles in each area image.
According to the obstacle detection device provided by the application, the integration unit includes:
a matching subunit, configured to determine a visual matching result of each radar obstacle based on the visual position of the radar obstacle and the visual positions of the visual obstacles in the area image belonging to the same camera coordinate system as the radar obstacle;
and an integrated detection subunit, configured to determine the surround-view obstacle detection result based on the visual matching result of each radar obstacle and the visual position and obstacle type of the visual obstacles in each area image.
According to the obstacle detection device provided by the application, the matching subunit is configured to:
if the overlap rate between the visual positions of a radar obstacle and a visual obstacle in the same camera coordinate system is greater than a preset overlap rate threshold, determine that the visual matching result of the radar obstacle is that visual obstacle;
otherwise, determine that the visual matching result of the radar obstacle is empty.
According to the obstacle detection device provided by the application, the integrated detection subunit is configured to:
if the visual matching result of a radar obstacle is empty, place the radar obstacle information of that radar obstacle into the surround-view obstacle detection result;
otherwise, fuse the radar obstacle information of the radar obstacle with the visual obstacle information of the matched visual obstacle to obtain fused obstacle information, and place the fused obstacle information and the obstacle type of the matched visual obstacle into the surround-view obstacle detection result.
According to the obstacle detection device provided by the application, the device further includes a camera coordinate system determination unit, configured to:
determine the spatial area where each radar obstacle is located based on the spatial azimuth angle corresponding to the point cloud data of the radar obstacle;
and determine the camera coordinate system of the camera corresponding to each radar obstacle based on the coincidence relationship between the shooting area of each camera and each spatial area, and on the spatial area where the radar obstacle is located.
The present application further provides an electronic device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of any of the above-mentioned obstacle detection methods when executing the program.
The present application also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the obstacle detection method according to any one of the above.
According to the obstacle detection method and device, the electronic device, and the storage medium provided by the application, the shooting areas and spatial areas are divided, and the point clouds of radar obstacles obtained by the vehicle-mounted lidar are projected into camera coordinate systems, realizing the information fusion of radar obstacles with the visual obstacles obtained by the vehicle-mounted surround-view camera system. This guarantees 360-degree omnidirectional obstacle detection while effectively improving the accuracy and reliability of obstacle detection and enriching the information obtained by obstacle detection.
Drawings
In order to more clearly illustrate the technical solutions of the present application and the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are of some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a first schematic flowchart of the obstacle detection method provided by the present application;
Fig. 2 is a top view of the shooting area division of the vehicle-mounted surround-view camera system provided by the present application;
Fig. 3 is a top view of the spatial area division of the vehicle-mounted lidar provided by the present application;
Fig. 4 is a second schematic flowchart of the obstacle detection method provided by the present application;
Fig. 5 is a schematic flowchart of the method for determining a camera coordinate system provided by the present application;
Fig. 6 is a schematic flowchart of the method for determining radar obstacle point cloud data provided by the present application;
Fig. 7 is a schematic deployment diagram of the vehicle-mounted surround-view camera system and the vehicle-mounted lidar provided by the present application;
Fig. 8 is a third schematic flowchart of the obstacle detection method provided by the present application;
Fig. 9 is a first schematic structural diagram of the obstacle detection device provided by the present application;
Fig. 10 is a schematic structural diagram of the integration unit provided by the present application;
Fig. 11 is a second schematic structural diagram of the obstacle detection device provided by the present application;
Fig. 12 is a third schematic structural diagram of the obstacle detection device provided by the present application;
Fig. 13 is a fourth schematic structural diagram of the obstacle detection device provided by the present application;
Fig. 14 is a schematic structural diagram of the electronic device provided by the present application.
Detailed Description
To make the purpose, technical solutions and advantages of the present application clearer, the technical solutions in the present application will be clearly and completely described below with reference to the drawings in the present application, and it is obvious that the described embodiments are some, but not all embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a schematic flow chart of an obstacle detection method provided in the present application, and as shown in fig. 1, the method includes:
and step 110, determining area images acquired by each camera in the vehicle-mounted all-round looking camera system and all-round looking point cloud data acquired by the vehicle-mounted laser radar.
The vehicle-mounted all-round looking camera system comprises cameras arranged in all directions of a vehicle, each camera is used for collecting area images of corresponding areas of all directions of the vehicle, and the area images collected by the cameras reflect visual information in the corresponding shooting areas. The regional images acquired by the cameras can be spliced to form a 360-degree all-round view image of the vehicle, so that the vehicle omni-directional regional image acquisition is realized.
The vehicle-mounted laser radar can be arranged at the top of the vehicle, and the vehicle-mounted laser radar can scan the surrounding scenes of the vehicle by 360 degrees to acquire all-around looking-around point cloud data. The vehicle lidar here may be a single multiline lidar.
Step 120: project the point cloud data of each radar obstacle in the surround-view point cloud data to a corresponding camera coordinate system to obtain the visual position of each radar obstacle, where the camera coordinate system corresponding to a radar obstacle is the camera coordinate system of the camera whose shooting area coincides with the spatial area where the radar obstacle is located.
Specifically, the point cloud data of each radar obstacle in the surround-view point cloud data may be obtained by performing obstacle detection on the surround-view point cloud data and then segmenting the point cloud data of each detected radar obstacle.
For any radar obstacle, the spatial area where its point cloud data is located can be determined, the shooting area coinciding with that spatial area found, and the camera coordinate system of the corresponding camera selected; the point cloud data of the radar obstacle is then projected into that camera coordinate system to obtain the visual position of the radar obstacle. The visual position, i.e., the position information of the radar obstacle reflected in the camera coordinate system, may specifically be the coordinates of each projected point of the radar obstacle in the camera coordinate system, or the coordinates of the bounding box of the radar obstacle in the camera coordinate system, etc.; this is not specifically limited in the embodiment of the application.
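For illustration only (not part of the patent text), the projection step can be sketched as follows in Python, assuming a calibrated lidar-to-camera extrinsic matrix and a camera intrinsic matrix; all function and parameter names are hypothetical:

```python
import numpy as np

def project_obstacle_to_camera(points_lidar, T_cam_lidar, K):
    """Project one radar obstacle's points into a camera and return a 2D box.

    points_lidar: (N, 3) obstacle points in the lidar frame.
    T_cam_lidar:  (4, 4) extrinsic transform from the lidar frame to the
                  camera frame (assumed calibrated in advance).
    K:            (3, 3) camera intrinsic matrix.
    """
    # Homogeneous coordinates, then lidar frame -> camera frame.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera (positive depth).
    pts_cam = pts_cam[pts_cam[:, 2] > 0]

    # Perspective projection to pixel coordinates.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    # Visual position as a bounding box (x_min, y_min, x_max, y_max).
    return np.array([uv[:, 0].min(), uv[:, 1].min(),
                     uv[:, 0].max(), uv[:, 1].max()])
```

Returning the bounding box rather than the raw projected points matches the overlap-rate matching described later, which compares boxes in the image plane.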
Before step 120 is executed, the shooting areas of the cameras in the vehicle-mounted surround-view camera system and the spatial areas scanned by the vehicle-mounted lidar can be divided in advance:
For example, fig. 2 is a top view of the shooting area division of the vehicle-mounted surround-view camera system provided by the present application. In fig. 2, the squares filled with oblique lines arranged around the vehicle represent the cameras in the system. The system shown in fig. 2 has 8 cameras, and the shooting area corresponding to each camera may be marked with that camera's identifier CameraID.
Fig. 3 is a top view of the spatial area division of the vehicle-mounted lidar. As shown in fig. 3, a single multi-line lidar arranged on the top of the vehicle can collect point cloud data omnidirectionally. Similarly to the shooting areas of the surround-view camera system, the space the multi-line lidar can scan may be divided into several spatial areas according to the spatial azimuth angle relative to the lidar; in fig. 3 this space is divided into 8 spatial areas.
After the area division is completed, the correspondence between each spatial area and each shooting area may be established according to their coincidence relationship. For example, if the shooting area CameraID = 1 in fig. 2 completely covers the spatial area RegionID = 1 in fig. 3, a correspondence between CameraID = 1 and RegionID = 1 may be established; for a radar obstacle located in spatial area RegionID = 1, its point cloud data may be projected directly onto the camera coordinate system of the camera corresponding to shooting area CameraID = 1.
In addition, after the spatial area division is completed, each spatial area may also be marked directly according to the correspondence between spatial areas and shooting areas. For example, the value of each spatial area's identifier RegionID may be set to the value of the corresponding shooting area's identifier CameraID, so that a RegionID and a CameraID of the same value correspond to each other directly.
Step 130: determine the surround-view obstacle detection result based on the visual position of each radar obstacle and the visual position and obstacle type of the visual obstacles in each area image.
Here, the visual position and obstacle type of the visual obstacles in each area image may be obtained by performing obstacle detection on each area image. The visual position of a visual obstacle is its position information in the camera coordinate system of the camera that captured the area image, and its obstacle type may be one of the obstacle types commonly encountered in autonomous or assisted driving, such as pedestrian, car, or bicycle.
Considering that performing obstacle detection independently on each area image acquired by the vehicle-mounted surround-view camera system is prone to false alarms or missed detections, and that an area image is essentially a two-dimensional visual image from which the distance between a visual obstacle and the vehicle cannot be obtained directly, the embodiment of the application fuses the visual obstacles with the radar obstacles.
Specifically, during fusion, since the visual position of each radar obstacle is expressed in the camera coordinate system of its corresponding camera, it can be matched directly against the visual positions of the visual obstacles in the same camera coordinate system, thereby fusing radar obstacles and visual obstacles in the same camera coordinate system and alleviating missed detections and false alarms. In addition, after a radar obstacle and a visual obstacle are matched and fused, the obstacle type of the visual obstacle can be assigned to the fused obstacle to enrich its information.
The surround-view obstacle detection result obtained in this way may include the position information, size information, and the like of each obstacle, and may further include the obstacle type of each obstacle.
According to the method provided by the embodiment of the application, the shooting areas and spatial areas are divided, and the point clouds of radar obstacles obtained by the vehicle-mounted lidar are projected into camera coordinate systems, realizing the information fusion of radar obstacles with the visual obstacles obtained by the vehicle-mounted surround-view camera system. This guarantees 360-degree omnidirectional obstacle detection while effectively improving the accuracy and reliability of obstacle detection and enriching the information obtained.
Based on the above embodiments, fig. 4 is a second schematic flowchart of the obstacle detection method provided by the present application. As shown in fig. 4, step 130 includes:
Step 131: determine a visual matching result of each radar obstacle based on the visual position of the radar obstacle and the visual positions of the visual obstacles in the area image belonging to the same camera coordinate system as the radar obstacle.
Step 132: determine the surround-view obstacle detection result based on the visual matching result of each radar obstacle and the visual position and obstacle type of the visual obstacles in each area image.
Specifically, before the obstacle detection result based on the vehicle-mounted surround-view camera system is fused with that based on the vehicle-mounted lidar, the visual position of each radar obstacle may be matched against the visual positions of the visual obstacles in the corresponding camera coordinate system to determine the visual matching result of that radar obstacle.
Here, visual position matching may specifically compare the coordinates of the bounding box of a radar obstacle in a camera coordinate system with those of the bounding boxes of the visual obstacles in the same camera coordinate system, judging whether the radar obstacle and a visual obstacle are the same obstacle by measuring the degree of coincidence between their bounding boxes. The visual matching result thus obtained may contain the visual obstacle matched with the radar obstacle, or may be empty, meaning that no matching visual obstacle exists.
If a radar obstacle has a matched visual obstacle, the information of the radar obstacle and the visual obstacle can be fused, the obstacle type of the visual obstacle assigned to the matched radar obstacle, and the fusion result placed into the surround-view obstacle detection result; if a radar obstacle has no matched visual obstacle, its radar obstacle information is placed into the surround-view obstacle detection result as is.
Based on any of the above embodiments, step 131 includes:
if the overlap rate between the visual positions of a radar obstacle and a visual obstacle in the same camera coordinate system is greater than a preset overlap rate threshold, determining that the visual matching result of the radar obstacle is that visual obstacle; otherwise, determining that the visual matching result of the radar obstacle is empty.
Specifically, the matching between a radar obstacle and a visual obstacle may be measured by the overlap rate between their visual positions, which may be expressed as the degree of overlap of their bounding boxes; for example, the Intersection over Union (IoU) between the bounding boxes of the radar obstacle and the visual obstacle may be used as the overlap rate between their visual positions.
For any radar obstacle, if the overlap rate between the visual position of some visual obstacle in the same camera coordinate system and the visual position of the radar obstacle is greater than the preset overlap rate threshold, the two are considered matched, and the visual matching result of the radar obstacle is determined to be that visual obstacle.
If the overlap rates between the visual positions of all visual obstacles belonging to the same camera coordinate system as the radar obstacle and the visual position of the radar obstacle are no greater than the preset overlap rate threshold, it can be considered that no visual obstacle matching the radar obstacle exists in that camera coordinate system, the corresponding area image may contain a missed detection, and the visual matching result of the radar obstacle is determined to be empty.
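The overlap test described here is standard Intersection over Union. The following is a minimal sketch; the 0.5 threshold is only an illustrative value, since the patent leaves the preset overlap rate threshold unspecified:

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x_min, y_min, x_max, y_max) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    if inter == 0.0:
        return 0.0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def match_radar_obstacle(radar_box, visual_obstacles, overlap_threshold=0.5):
    """Return the visual obstacle with the highest IoU above the threshold,
    or None when the visual matching result is empty."""
    best, best_overlap = None, overlap_threshold
    for visual in visual_obstacles:  # obstacles in the same camera coordinate system
        overlap = iou(radar_box, visual["box"])
        if overlap > best_overlap:
            best, best_overlap = visual, overlap
    return best
```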
Based on any of the above embodiments, step 132 includes:
if the visual matching result of a radar obstacle is empty, placing the radar obstacle information of that radar obstacle into the surround-view obstacle detection result; otherwise, fusing the radar obstacle information of the radar obstacle with the visual obstacle information of the matched visual obstacle to obtain fused obstacle information, and placing the fused obstacle information and the obstacle type of the matched visual obstacle into the surround-view obstacle detection result.
Specifically, after the visual matching results of the radar obstacles are obtained, the following operations may be performed for each radar obstacle to obtain the surround-view obstacle detection result:
For any radar obstacle, if no matched visual obstacle exists, no fusion operation is performed, and the radar obstacle information is placed directly into the surround-view obstacle detection result. Here, the radar obstacle information is the information about the radar obstacle obtained from the surround-view point cloud data, for example the point cloud data of the radar obstacle, the distance and orientation from the radar obstacle to the vehicle, and the size of the radar obstacle.
If a matched visual obstacle exists, the radar obstacle information of the radar obstacle is fused with the visual obstacle information of the matched visual obstacle to obtain fused obstacle information; in addition, the obstacle type of the matched visual obstacle can be assigned to the radar obstacle, and both the fused obstacle information and the obstacle type are placed into the surround-view obstacle detection result. Here, the visual obstacle information is the information about the visual obstacle obtained from the area image, for example the image of the visual obstacle and its position within the area image.
In addition, a visual obstacle that matches no radar obstacle can be regarded as a false alarm produced during obstacle detection on the area image and can be deleted directly.
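The per-obstacle branching just described can be sketched as follows; the dictionary schema for radar and visual obstacle information is a hypothetical stand-in for whatever representation an implementation uses:

```python
def build_surround_view_result(radar_obstacles, matches):
    """Assemble the surround-view obstacle detection result.

    radar_obstacles: list of dicts of radar-derived information (hypothetical
                     schema: point cloud, distance, orientation, size).
    matches:         parallel list of matched visual obstacles, or None where
                     the visual matching result is empty.
    """
    result = []
    for radar, visual in zip(radar_obstacles, matches):
        if visual is None:
            # Empty visual matching result: keep the radar information only,
            # with the obstacle type left unknown.
            result.append({**radar, "type": None})
        else:
            # Fuse radar and visual obstacle information and take the type
            # of the matched visual obstacle.
            result.append({**radar,
                           "image_box": visual["box"],
                           "type": visual["type"]})
    return result
```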
Based on any of the above embodiments, fig. 5 is a schematic flowchart of the method for determining a camera coordinate system provided by the present application. As shown in fig. 5, the camera coordinate system corresponding to a radar obstacle is determined based on the following steps:
Step 510: determine the spatial area where each radar obstacle is located based on the spatial azimuth angle corresponding to the point cloud data of the radar obstacle.
Step 520: determine the camera coordinate system of the camera corresponding to each radar obstacle based on the coincidence relationship between the shooting area of each camera and each spatial area, and on the spatial area where the radar obstacle is located.
Specifically, before the above steps are performed, the spatial areas may be divided in advance, each corresponding to an interval of spatial azimuth angles. For any radar obstacle, the spatial area where it is located can be determined by finding the interval to which the spatial azimuth angle corresponding to its point cloud data belongs.
On this basis, the shooting area corresponding to the spatial area where the radar obstacle is located can be determined from the correspondence between shooting areas and spatial areas, and the camera coordinate system of the camera owning that shooting area is used as the camera coordinate system corresponding to the radar obstacle.
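An illustrative sketch of the azimuth-interval lookup follows, assuming eight equal sectors and the RegionID = CameraID convention described earlier; the sector layout and all names are assumptions, not specified by the patent:

```python
import math
import numpy as np

NUM_REGIONS = 8  # matches the eight-way division of figs. 2 and 3

def region_of_obstacle(points_lidar):
    """RegionID (1..8) of an obstacle from the azimuth angle of its centroid.

    Assumes eight equal sectors with sector 1 centered on the vehicle's
    forward (+x) axis; the real sector layout is calibration-dependent.
    """
    cx = float(np.mean(points_lidar[:, 0]))
    cy = float(np.mean(points_lidar[:, 1]))
    sector = 2 * math.pi / NUM_REGIONS
    azimuth = (math.atan2(cy, cx) + sector / 2) % (2 * math.pi)
    return int(azimuth // sector) + 1

def camera_for_obstacle(points_lidar, region_to_camera):
    """CameraID of the camera whose shooting area coincides with the
    obstacle's spatial area."""
    return region_to_camera[region_of_obstacle(points_lidar)]

# With RegionID values set equal to the CameraID of the coinciding shooting
# area (as described above), the mapping is simply the identity:
region_to_camera = {i: i for i in range(1, NUM_REGIONS + 1)}
```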
Based on any of the above embodiments, fig. 6 is a schematic flowchart of the method for determining radar obstacle point cloud data provided by the present application. As shown in fig. 6, the point cloud data of each radar obstacle in the surround-view point cloud data is determined based on the following steps:
Step 610: delete, from the surround-view point cloud data, the points whose distance to the vehicle-mounted lidar exceeds a preset distance threshold.
Step 620: divide the surround-view point cloud data into area point cloud data of several spatial areas based on the spatial azimuth angle.
Step 630: perform obstacle detection on each area point cloud separately to obtain the point cloud data of each radar obstacle in each area point cloud.
Specifically, the preset distance threshold is a pre-set farthest distance for obstacle detection. After the distance between each point in the surround-view point cloud data and the vehicle-mounted lidar is calculated, the points whose distance exceeds the preset distance threshold can be deleted, preventing overly distant points in the originally acquired surround-view point cloud data from interfering with obstacle detection and degrading its accuracy.
After the overly distant points are filtered out, the surround-view point cloud data can be segmented by spatial azimuth angle to obtain the area point cloud data of each spatial area. Obstacle detection is then performed on each area point cloud separately; the spatial area of each resulting radar obstacle is known directly, so the camera coordinate system corresponding to that spatial area can be located directly, and the point cloud data of the radar obstacle projected into it for the fusion of obstacle detection information.
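A minimal sketch of the distance filtering and azimuth segmentation follows, assuming a planar distance to the lidar and an illustrative 50 m threshold (the patent does not fix a value):

```python
import numpy as np

def filter_and_segment(points, max_range=50.0, num_regions=8):
    """Drop points beyond the preset distance threshold, then split the
    surround-view cloud into per-region clouds by spatial azimuth angle.

    points:    (N, 3) surround-view point cloud in the lidar frame.
    max_range: preset distance threshold in meters (illustrative value).
    """
    dist = np.linalg.norm(points[:, :2], axis=1)  # planar distance to the lidar
    points = points[dist <= max_range]

    sector = 2 * np.pi / num_regions
    azimuth = (np.arctan2(points[:, 1], points[:, 0]) + sector / 2) % (2 * np.pi)
    region_ids = (azimuth // sector).astype(int) + 1  # RegionID in 1..num_regions

    # One area point cloud per spatial area; each is fed to obstacle detection.
    return {rid: points[region_ids == rid] for rid in range(1, num_regions + 1)}
```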
Based on any of the above embodiments, the visual position and obstacle type of the visual obstacles in each area image are determined based on the following steps:
inputting each area image into a visual obstacle detection model to obtain the visual position and obstacle type of the visual obstacles in each area image output by the model; the visual obstacle detection model is trained based on sample area images and the visual positions and obstacle types of sample visual obstacles in those images.
Specifically, the visual obstacle detection model detects visual obstacles in an input two-dimensional image and outputs the position of each detected visual obstacle in that image, i.e., its visual position, together with its obstacle type. The visual obstacle detection model can be a pre-trained neural network model, and its deployment and inference can be realized through the TensorRT framework.
Beforehand, the visual obstacle detection model can be obtained by pre-training, specifically through the following steps: first, a large number of sample area images are collected by the vehicle-mounted surround-view camera system, and the visual positions and obstacle types of the sample visual obstacles in the sample area images are obtained by manual labeling; then, an initial model is trained on the sample area images and the visual positions and obstacle types of the sample visual obstacles, yielding the visual obstacle detection model.
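A sketch of the batch inference step follows, with a deliberately generic detector interface; the `model.infer` call and the per-detection fields are hypothetical placeholders, since the patent does not specify the model's API:

```python
def detect_visual_obstacles(region_images, model):
    """Run the visual obstacle detection model over all area images in one batch.

    model: a pre-trained detector (e.g. one deployed through TensorRT); the
           `infer` call and the result fields are assumed for illustration.
    """
    batch_detections = model.infer(region_images)  # hypothetical batched call
    results = []
    for camera_id, detections in enumerate(batch_detections, start=1):
        for det in detections:
            results.append({
                "camera_id": camera_id,  # CameraID of the source area image
                "box": det["box"],       # visual position in that camera's image
                "type": det["type"],     # obstacle type, e.g. pedestrian or car
            })
    return results
```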
Based on any of the above embodiments, fig. 7 is a schematic deployment diagram of the vehicle-mounted surround-view camera system and the vehicle-mounted lidar provided by the present application; it shows, from the side of the vehicle, the deployment of the two sensors and the effective information acquisition areas they produce.
The squares filled with oblique lines in fig. 7 represent the cameras in the vehicle-mounted surround-view camera system, and the arrows extending outward from each camera indicate its effective camera area, i.e., the shooting area in which the camera can collect image information. The vehicle-mounted lidar in fig. 7 is a multi-line lidar arranged on the top of the vehicle, and the arrows extending outward from it indicate its effective area, i.e., all the spatial areas from which the multi-line lidar can collect data. As shown in fig. 7, the effective camera area of each camera overlaps the effective area of the multi-line lidar, from which the correspondence between each shooting area and spatial area can be determined.
Based on any of the above embodiments, fig. 8 is a third schematic flow chart of the obstacle detection method provided in the present application, and as shown in fig. 8, the obstacle detection method includes:
firstly, collecting the panoramic point cloud data based on a vehicle-mounted laser radar, and simultaneously collecting the panoramic image based on each camera in a vehicle-mounted panoramic camera system, wherein the panoramic image comprises an area image collected by each camera. Each of the area images acquired by the cameras has a corresponding camera identifier, for example, in 8 cameras shown in fig. 2, the camera identifier CameraID directly in front of the vehicle is 1, the camera identifier CameraID directly in front of the vehicle is 2, the camera identifier CameraID on the left side of the vehicle is 3, the camera identifier CameraID on the left rear of the vehicle is 4, the camera identifier CameraID directly behind the vehicle is 5, the camera identifier CameraID on the right rear of the vehicle is 6, the camera identifier CameraID on the right side of the vehicle is 7, and the camera identifier CameraID on the right front of the vehicle is 8.
Aiming at the collected panoramic point cloud data, firstly, calculating the distance between each point in the panoramic point cloud data and the vehicle-mounted laser radar, judging whether the distance between each point and the vehicle-mounted laser radar is greater than a preset distance threshold value, and if so, deleting the point. After filtering out too far points, the panoramic point cloud data can be subjected to region segmentation based on the space azimuth, so that region point cloud data in each space region can be obtained. The spatial regions obtained by region division may correspond to the imaging regions of the cameras in the vehicle-mounted panoramic imaging system one by one, for example, among 8 spatial regions shown in fig. 3, the spatial region identifier RegionID directly in front of the vehicle is 1, the spatial region identifier RegionID directly in front of the left of the vehicle is 2, the spatial region identifier RegionID on the left side of the vehicle is 3, the spatial region identifier RegionID on the left rear of the vehicle is 4, the spatial region identifier RegionID directly behind the vehicle is 5, the spatial region identifier RegionID on the right rear of the vehicle is 6, the spatial region identifier RegionID on the right side of the vehicle is 7, and the spatial region identifier RegionID on the right front of the vehicle is 8. And then respectively carrying out obstacle detection on the point cloud data of each area, wherein the obtained radar obstacle can directly determine the located space area, so as to directly position a camera coordinate system of a camera corresponding to the located space area, so that the point cloud data of the radar obstacle with the RegionID of 1-8 is projected to the corresponding camera coordinate system, and the point cloud data of the radar obstacle with the RegionID of 1-8 is projected to the corresponding camera coordinate system of the camera with the CameraID of 1-8, so as to obtain the visual position of the radar obstacle.
And regarding the acquired regional images acquired by each camera, inputting 8 regional images from different cameras into a pre-trained visual obstacle detection model to perform batch detection to obtain the visual position and the obstacle type of the visual obstacle in each regional image.
And then, carrying out joint detection on the radar obstacle and the visual obstacle which are respectively detected:
respectively extracting the radar barrier and the visual barrier, comparing whether the RegionID of the radar barrier and the CameraID of the visual barrier are consistent or not, judging whether the RegionID and the CameraID are in the same camera coordinate system or not, and returning to extract again if the RegionID and the CameraID are not consistent;
if the RegionID is consistent with the CameraID, calculating the overlapping rate between the radar barrier and an outer frame of the visual barrier, further judging whether the overlapping rate exceeds a preset overlapping rate threshold value, if so, considering that the radar barrier and the outer frame of the visual barrier are matched, assigning the barrier type of the matched visual barrier to the radar barrier, and combining the radar barrier information of the radar barrier and the visual barrier information of the matched visual barrier into a panoramic barrier detection result; and if the radar obstacle information does not exceed the threshold value, the radar obstacle information and the threshold value are considered to be not matched, no visual obstacle matched with the radar obstacle exists, the radar obstacle information of the obstacle-free type is saved, and the radar obstacle information and the threshold value are combined into the all-around obstacle detection result.
And finally outputting a panoramic obstacle detection result.
According to the method provided by the embodiment of the application, the shooting areas and spatial areas are divided, and the point clouds of radar obstacles obtained by the vehicle-mounted lidar are projected into camera coordinate systems, realizing the information fusion of radar obstacles with the visual obstacles obtained by the vehicle-mounted surround-view camera system. This guarantees 360-degree omnidirectional obstacle detection while effectively improving the accuracy and reliability of obstacle detection and enriching the information obtained.
The following describes the obstacle detection device provided in the present application, and the obstacle detection device described below and the obstacle detection method described above may be referred to in correspondence with each other.
Fig. 9 is a first schematic structural diagram of the obstacle detection device provided by the present application. As shown in fig. 9, the obstacle detection device includes an acquisition unit 910, a projection unit 920, and an integration unit 930.
The acquisition unit 910 is configured to determine the area images acquired by each camera in the vehicle-mounted surround-view camera system and the surround-view point cloud data acquired by the vehicle-mounted lidar.
The projection unit 920 is configured to project the point cloud data of each radar obstacle in the surround-view point cloud data to a corresponding camera coordinate system to obtain the visual position of each radar obstacle, where the camera coordinate system corresponding to a radar obstacle is the camera coordinate system of the camera whose shooting area coincides with the spatial area where the radar obstacle is located.
The integration unit 930 is configured to determine a surround-view obstacle detection result based on the visual position of each radar obstacle and the visual position and obstacle type of the visual obstacles in each area image.
According to the device provided by the embodiment of the application, the shooting areas and spatial areas are divided, and the point clouds of radar obstacles obtained by the vehicle-mounted lidar are projected into camera coordinate systems, realizing the information fusion of radar obstacles with the visual obstacles obtained by the vehicle-mounted surround-view camera system. This guarantees 360-degree omnidirectional obstacle detection while effectively improving the accuracy and reliability of obstacle detection and enriching the information obtained.
Based on any of the above embodiments, fig. 10 is a schematic structural diagram of an integration unit provided in the present application, and as shown in fig. 10, the integration unit 930 includes a matching subunit 931 and an integration detection subunit 932;
The matching subunit 931 is configured to determine a visual matching result of each radar obstacle based on the visual position of the radar obstacle and the visual positions of the visual obstacles in the area image belonging to the same camera coordinate system as the radar obstacle.
The integrated detection subunit 932 is configured to determine the surround-view obstacle detection result based on the visual matching result of each radar obstacle and the visual position and obstacle type of the visual obstacles in each area image.
Based on any of the above embodiments, the matching sub-unit 931 is configured to:
if the overlap rate between the visual positions of a radar obstacle and a visual obstacle in the same camera coordinate system is greater than a preset overlap rate threshold, determine that the visual matching result of the radar obstacle is that visual obstacle;
otherwise, determine that the visual matching result of the radar obstacle is empty.
Based on any of the above embodiments, the integrated detection subunit 932 is configured to:
if the visual matching result of a radar obstacle is empty, place the radar obstacle information of that radar obstacle into the surround-view obstacle detection result;
otherwise, fuse the radar obstacle information of the radar obstacle with the visual obstacle information of the matched visual obstacle to obtain fused obstacle information, and place the fused obstacle information and the obstacle type of the matched visual obstacle into the surround-view obstacle detection result.
Based on any of the above embodiments, fig. 11 is a second schematic structural diagram of the obstacle detection apparatus provided in the present application, and as shown in fig. 11, the apparatus further includes a camera coordinate system determination unit 940, where the camera coordinate system determination unit 940 is configured to:
determine the spatial area where each radar obstacle is located based on the spatial azimuth angle corresponding to the point cloud data of the radar obstacle;
and determine the camera coordinate system of the camera corresponding to each radar obstacle based on the coincidence relationship between the shooting area of each camera and each spatial area, and on the spatial area where the radar obstacle is located.
Based on any of the above embodiments, fig. 12 is a third schematic structural diagram of the obstacle detection apparatus provided in the present application, and as shown in fig. 12, the apparatus further includes a radar obstacle detection unit 950, where the radar obstacle detection unit 950 is configured to:
delete, from the surround-view point cloud data, the points whose distance to the vehicle-mounted lidar exceeds a preset distance threshold;
divide the surround-view point cloud data into area point cloud data of several spatial areas based on the spatial azimuth angle;
and perform obstacle detection on each area point cloud separately to obtain the point cloud data of each radar obstacle in each area point cloud.
Based on any of the above embodiments, fig. 13 is a fourth schematic structural diagram of the obstacle detection apparatus provided in the present application, as shown in fig. 13, the apparatus further includes a visual obstacle detection unit 960, where the visual obstacle detection unit 960 is configured to:
input each area image into a visual obstacle detection model to obtain the visual position and obstacle type of the visual obstacles in each area image output by the model;
the visual obstacle detection model is trained based on sample area images and the visual positions and obstacle types of sample visual obstacles in those images.
The obstacle detection device provided by the embodiment of the application is used to execute the obstacle detection method described above; its specific implementation is consistent with that of the method and achieves the same beneficial effects, which are not repeated here.
Fig. 14 illustrates a schematic diagram of the physical structure of an electronic device. As shown in fig. 14, the electronic device may include: a processor (processor) 1410, a communication interface (Communications Interface) 1420, a memory (memory) 1430, and a communication bus 1440, where the processor 1410, the communication interface 1420, and the memory 1430 communicate with each other via the communication bus 1440. The processor 1410 may invoke logic instructions in the memory 1430 to perform an obstacle detection method comprising: determining the area images acquired by each camera in the vehicle-mounted surround-view camera system and the surround-view point cloud data acquired by the vehicle-mounted lidar; projecting the point cloud data of each radar obstacle in the surround-view point cloud data to a corresponding camera coordinate system to obtain the visual position of each radar obstacle, where the camera coordinate system corresponding to a radar obstacle is the camera coordinate system of the camera whose shooting area coincides with the spatial area where the radar obstacle is located; and determining a surround-view obstacle detection result based on the visual position of each radar obstacle and the visual position and obstacle type of the visual obstacles in each area image.
In addition, the logic instructions in the memory 1430 may be implemented in software functional units and stored in a computer readable storage medium when sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The processor 1410 in the electronic device provided by the embodiment of the application may invoke the logic instructions in the memory 1430 to implement the above obstacle detection method; the specific implementation is consistent with that of the method and achieves the same beneficial effects, which are not repeated here.
In another aspect, the present application also provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the obstacle detection method provided by the methods above, the method comprising: determining the area images acquired by each camera in the vehicle-mounted surround-view camera system and the surround-view point cloud data acquired by the vehicle-mounted lidar; projecting the point cloud data of each radar obstacle in the surround-view point cloud data to a corresponding camera coordinate system to obtain the visual position of each radar obstacle, where the camera coordinate system corresponding to a radar obstacle is the camera coordinate system of the camera whose shooting area coincides with the spatial area where the radar obstacle is located; and determining a surround-view obstacle detection result based on the visual position of each radar obstacle and the visual position and obstacle type of the visual obstacles in each area image.
When executed, the computer program product provided by the embodiment of the application implements the obstacle detection method described above; the specific implementation is consistent with that of the method and achieves the same beneficial effects, which are not repeated here.
In yet another aspect, the present application also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the obstacle detection method provided above, the method comprising: determining the area images acquired by each camera in the vehicle-mounted surround-view camera system and the surround-view point cloud data acquired by the vehicle-mounted lidar; projecting the point cloud data of each radar obstacle in the surround-view point cloud data to a corresponding camera coordinate system to obtain the visual position of each radar obstacle, where the camera coordinate system corresponding to a radar obstacle is the camera coordinate system of the camera whose shooting area coincides with the spatial area where the radar obstacle is located; and determining a surround-view obstacle detection result based on the visual position of each radar obstacle and the visual position and obstacle type of the visual obstacles in each area image.
When the computer program stored on the non-transitory computer-readable storage medium provided in the embodiments of the present application is executed, the obstacle detection method described above is implemented; the specific implementation is consistent with the method embodiments and achieves the same beneficial effects, and is not described again here.
The above-described apparatus embodiments are merely illustrative: units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed across a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment, which those of ordinary skill in the art can understand and implement without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general-purpose hardware platform, or alternatively by hardware. With this understanding, the above technical solutions may be embodied in the form of a software product stored in a computer-readable storage medium such as a ROM/RAM, magnetic disk, or optical disc, and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute the methods described in the embodiments or in parts of the embodiments.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and that such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present application.

Claims (14)

1. An obstacle detection method, comprising:
determining the region images acquired by the cameras of a vehicle-mounted surround-view camera system and the surround-view point cloud data acquired by a vehicle-mounted lidar;
projecting the point cloud data of each radar obstacle in the surround-view point cloud data into the corresponding camera coordinate system to obtain the visual position of each radar obstacle, wherein the camera coordinate system corresponding to a radar obstacle is the camera coordinate system of the camera whose shooting area coincides with the spatial region in which the radar obstacle is located;
and determining a surround-view obstacle detection result based on the visual position of each radar obstacle and the visual positions and obstacle types of the visual obstacles in the region images.
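[Editorial illustration — none of the code in this section forms part of the claims.] The projection step of claim 1 amounts to transforming an obstacle's lidar points through the camera extrinsics, applying the intrinsics, and taking the 2D extent of the result. A minimal numpy sketch, assuming a calibrated 4x4 lidar-to-camera extrinsic matrix `T_lidar_to_cam` and a 3x3 intrinsic matrix `K`; the function name and the bounding-box convention are our own:

```python
import numpy as np

def project_radar_obstacle(points_lidar, T_lidar_to_cam, K):
    """Project one radar obstacle's lidar points into a camera image.

    points_lidar   : (N, 3) obstacle point cloud in the lidar frame
    T_lidar_to_cam : (4, 4) extrinsic transform from lidar to camera frame
    K              : (3, 3) camera intrinsic matrix
    Returns the obstacle's "visual position" as the 2D bounding box
    (u_min, v_min, u_max, v_max) of the projected points, or None if
    no point falls in front of the camera.
    """
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_lidar_to_cam @ pts_h.T).T[:, :3]   # lidar frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]            # keep points in front of the camera
    if len(pts_cam) == 0:
        return None
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                     # perspective division to pixel coordinates
    u_min, v_min = uv.min(axis=0)
    u_max, v_max = uv.max(axis=0)
    return u_min, v_min, u_max, v_max
```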
2. The obstacle detection method according to claim 1, wherein determining the surround-view obstacle detection result based on the visual position of each radar obstacle and the visual positions and obstacle types of the visual obstacles in the region images comprises:
determining a visual matching result for each radar obstacle based on the visual position of that radar obstacle and the visual positions of the visual obstacles in the region image belonging to the same camera coordinate system as that radar obstacle;
and determining the surround-view obstacle detection result based on the visual matching result of each radar obstacle and the visual positions and obstacle types of the visual obstacles in the region images.
3. The obstacle detection method according to claim 2, wherein determining the visual matching result for each radar obstacle based on the visual position of that radar obstacle and the visual positions of the visual obstacles in the region image belonging to the same camera coordinate system as that radar obstacle comprises:
if the overlap rate between the visual positions of a radar obstacle and a visual obstacle in the same camera coordinate system is greater than a preset overlap-rate threshold, determining the visual matching result of that radar obstacle to be that visual obstacle;
otherwise, determining the visual matching result of that radar obstacle to be null.
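Claim 3 does not fix the exact overlap metric; intersection-over-union between the projected radar box and the detected visual box is one common reading. A sketch of the matching test under that assumption, with the 0.5 threshold purely illustrative:

```python
def overlap_rate(box_a, box_b):
    """Intersection-over-union of two boxes given as (u_min, v_min, u_max, v_max)."""
    iw = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    ih = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def match_radar_obstacle(radar_box, visual_boxes, threshold=0.5):
    """Return the index of the first visual obstacle whose overlap rate
    exceeds the preset threshold, or None for a null match."""
    for i, v_box in enumerate(visual_boxes):
        if overlap_rate(radar_box, v_box) > threshold:
            return i
    return None
```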
4. The obstacle detection method according to claim 2, wherein determining the surround-view obstacle detection result based on the visual matching result of each radar obstacle and the visual positions and obstacle types of the visual obstacles in the region images comprises:
if the visual matching result of any radar obstacle is null, placing the radar obstacle information of that radar obstacle into the surround-view obstacle detection result;
otherwise, fusing the radar obstacle information of the radar obstacle with the visual obstacle information of the matched visual obstacle in its visual matching result to obtain fused obstacle information, and placing the fused obstacle information and the obstacle type of the matched visual obstacle into the surround-view obstacle detection result.
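Claim 4's two branches, sketched with hypothetical dictionary fields ("source", "type") standing in for the radar and visual obstacle information:

```python
def build_detection_result(radar_obstacles, matches, visual_obstacles):
    """Assemble the surround-view detection result following claim 4's two branches.

    radar_obstacles  : list of dicts of radar obstacle information
    matches          : per-radar-obstacle index into visual_obstacles, or None
    visual_obstacles : list of dicts with (at least) a hypothetical "type" key
    """
    result = []
    for radar_obs, match in zip(radar_obstacles, matches):
        if match is None:
            # Null visual match: keep the radar-only information.
            result.append({"source": "radar", **radar_obs})
        else:
            # Matched: fuse radar geometry with the camera's information,
            # carrying over the matched visual obstacle's type label.
            result.append({"source": "fused", **radar_obs, **visual_obstacles[match]})
    return result
```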
5. The obstacle detection method according to any one of claims 1 to 4, wherein the camera coordinate system corresponding to a radar obstacle is determined by:
determining the spatial region in which each radar obstacle is located based on the spatial azimuth corresponding to the point cloud data of that radar obstacle;
and determining the camera coordinate system of the camera corresponding to each radar obstacle based on the coincidence relation between the shooting area of each camera and each spatial region, and on the spatial region in which that radar obstacle is located.
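A toy version of claim 5's region-to-camera lookup based on the obstacle centroid's azimuth; the four-camera, 90-degree-sector layout below is invented for illustration, and in practice the coincidence relation would come from the calibrated fields of view:

```python
import math

# Hypothetical layout: four cameras, each covering a 90-degree azimuth
# sector around the vehicle (0 degrees = vehicle forward axis).
CAMERA_SECTORS = [("front", -45, 45), ("left", 45, 135),
                  ("rear", 135, 225), ("right", 225, 315)]

def camera_for_obstacle(points_lidar):
    """Pick the camera whose sector contains the obstacle centroid's azimuth.

    points_lidar: (N, 3) numpy array of the obstacle's points in the lidar frame.
    """
    cx, cy = points_lidar[:, :2].mean(axis=0)
    azimuth = math.degrees(math.atan2(cy, cx))   # in (-180, 180]
    if azimuth < -45:
        azimuth += 360                           # map into the sectors' [-45, 315) range
    for name, lo, hi in CAMERA_SECTORS:
        if lo <= azimuth < hi:
            return name
    return "front"                               # numeric edge-case fallback
```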
6. The obstacle detection method according to any one of claims 1 to 4, wherein the point cloud data of each radar obstacle in the surround-view point cloud data is determined by:
deleting, from the surround-view point cloud data, points whose distance from the vehicle-mounted lidar exceeds a preset distance threshold;
dividing the surround-view point cloud data into regional point cloud data for a plurality of spatial regions based on spatial orientation;
and performing obstacle detection on each set of regional point cloud data to obtain the point cloud data of each radar obstacle in that regional point cloud data.
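Claim 6's pre-processing, sketched under the assumption that the spatial regions are azimuth sectors; the 50 m threshold and four-sector split are placeholders for the preset values:

```python
import numpy as np

def preprocess_surround_view_cloud(points, max_range=50.0, n_sectors=4):
    """Range-gate the surround-view cloud, then split it into azimuth sectors.

    points    : (N, 3) surround-view point cloud in the lidar frame
    max_range : stand-in for the preset distance threshold (metres)
    Returns one point array per spatial region; each would then be fed to
    a per-region obstacle detector (e.g. Euclidean clustering, not shown).
    """
    dist = np.linalg.norm(points[:, :2], axis=1)
    points = points[dist <= max_range]           # delete points beyond the threshold

    azimuth = np.degrees(np.arctan2(points[:, 1], points[:, 0])) % 360
    sector = (azimuth // (360.0 / n_sectors)).astype(int)
    return [points[sector == i] for i in range(n_sectors)]
```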
7. The obstacle detection method according to any one of claims 1 to 4, wherein the visual positions and obstacle types of the visual obstacles in the region images are determined by:
inputting each region image into a visual obstacle detection model, and obtaining the visual position and obstacle type of each visual obstacle in that region image as output by the visual obstacle detection model;
wherein the visual obstacle detection model is trained on sample region images together with the visual positions and obstacle types of the sample visual obstacles in those images.
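Claim 7 leaves the model architecture open; any image detector trained on the labelled sample region images fits. As a stand-in, a sketch of the inference call using an off-the-shelf torchvision detector (the pretrained weights and the score cut-off are illustrative; the patent's model would instead be trained on its own samples):

```python
import torch
import torchvision

# Off-the-shelf detector standing in for the unspecified visual obstacle
# detection model.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_visual_obstacles(region_image, score_threshold=0.5):
    """region_image: (3, H, W) float tensor in [0, 1].

    Returns the visual positions (boxes) and obstacle types (labels) of
    the detections above an illustrative confidence cut-off.
    """
    with torch.no_grad():
        out = model([region_image])[0]
    keep = out["scores"] > score_threshold
    return out["boxes"][keep], out["labels"][keep]
```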
8. An obstacle detection device, comprising:
an acquisition unit, configured to determine the region images acquired by the cameras of a vehicle-mounted surround-view camera system and the surround-view point cloud data acquired by a vehicle-mounted lidar;
a projection unit, configured to project the point cloud data of each radar obstacle in the surround-view point cloud data into the corresponding camera coordinate system to obtain the visual position of each radar obstacle, wherein the camera coordinate system corresponding to a radar obstacle is the camera coordinate system of the camera whose shooting area coincides with the spatial region in which the radar obstacle is located;
and an integration unit, configured to determine a surround-view obstacle detection result based on the visual position of each radar obstacle and the visual positions and obstacle types of the visual obstacles in the region images.
9. The obstacle detection device according to claim 8, wherein the integration unit comprises:
a matching subunit, configured to determine a visual matching result for each radar obstacle based on the visual position of that radar obstacle and the visual positions of the visual obstacles in the region image belonging to the same camera coordinate system as that radar obstacle;
and an integrated detection subunit, configured to determine the surround-view obstacle detection result based on the visual matching result of each radar obstacle and the visual positions and obstacle types of the visual obstacles in the region images.
10. The obstacle detection device according to claim 9, wherein the matching subunit is configured to:
if the overlap rate between the visual positions of a radar obstacle and a visual obstacle in the same camera coordinate system is greater than a preset overlap-rate threshold, determine the visual matching result of that radar obstacle to be that visual obstacle;
otherwise, determine the visual matching result of that radar obstacle to be null.
11. The obstacle detection device according to claim 9, wherein the integrated detection subunit is configured to:
if the visual matching result of any radar obstacle is null, place the radar obstacle information of that radar obstacle into the surround-view obstacle detection result;
otherwise, fuse the radar obstacle information of the radar obstacle with the visual obstacle information of the matched visual obstacle in its visual matching result to obtain fused obstacle information, and place the fused obstacle information and the obstacle type of the matched visual obstacle into the surround-view obstacle detection result.
12. The obstacle detection device according to any one of claims 8 to 11, further comprising a camera coordinate system determination unit, configured to:
determine the spatial region in which each radar obstacle is located based on the spatial azimuth corresponding to the point cloud data of that radar obstacle;
and determine the camera coordinate system of the camera corresponding to each radar obstacle based on the coincidence relation between the shooting area of each camera and each spatial region, and on the spatial region in which that radar obstacle is located.
13. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the obstacle detection method according to any one of claims 1 to 7.
14. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the obstacle detection method according to any one of claims 1 to 7.
CN202011359697.3A 2020-11-27 2020-11-27 Obstacle detection method, obstacle detection device, electronic device, and storage medium Pending CN112528771A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011359697.3A CN112528771A (en) 2020-11-27 2020-11-27 Obstacle detection method, obstacle detection device, electronic device, and storage medium


Publications (1)

Publication Number Publication Date
CN112528771A (en) 2021-03-19

Family

ID=74994258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011359697.3A Pending CN112528771A (en) 2020-11-27 2020-11-27 Obstacle detection method, obstacle detection device, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN112528771A (en)


Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006011570A (en) * 2004-06-23 2006-01-12 Daihatsu Motor Co Ltd Camera calibration method and camera calibration device
CN104637059A (en) * 2015-02-09 2015-05-20 吉林大学 Night preceding vehicle detection method based on millimeter-wave radar and machine vision
CN106878687A (en) * 2017-04-12 2017-06-20 吉林大学 A kind of vehicle environment identifying system and omni-directional visual module based on multisensor
CN106908783A (en) * 2017-02-23 2017-06-30 苏州大学 Obstacle detection method based on multi-sensor information fusion
CN109375635A (en) * 2018-12-20 2019-02-22 安徽江淮汽车集团股份有限公司 A kind of autonomous driving vehicle road environment sensory perceptual system and method
KR20190040550A (en) * 2017-10-11 2019-04-19 현대모비스 주식회사 Apparatus for detecting obstacle in vehicle and control method thereof
CN110096059A (en) * 2019-04-25 2019-08-06 杭州飞步科技有限公司 Automatic Pilot method, apparatus, equipment and storage medium
CN110719442A (en) * 2019-10-12 2020-01-21 深圳市镭神智能系统有限公司 Security monitoring system
CN111323027A (en) * 2018-12-17 2020-06-23 兰州大学 Method and device for manufacturing high-precision map based on fusion of laser radar and panoramic camera
CN111583337A (en) * 2020-04-25 2020-08-25 华南理工大学 Omnibearing obstacle detection method based on multi-sensor fusion
CN111582256A (en) * 2020-04-26 2020-08-25 智慧互通科技有限公司 Parking management method and device based on radar and visual information
CN111611853A (en) * 2020-04-15 2020-09-01 宁波吉利汽车研究开发有限公司 Sensing information fusion method and device and storage medium
CN111796299A (en) * 2020-06-10 2020-10-20 东风汽车集团有限公司 Obstacle sensing method and device and unmanned sweeper
CN111856448A (en) * 2020-07-02 2020-10-30 山东省科学院海洋仪器仪表研究所 Marine obstacle identification method and system based on binocular vision and radar


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI Yingbo; YU Bo: "Research on Scene Data Extraction Technology Based on Fusion Perception", Modern Computer (Professional Edition), no. 09 *
WANG Kui; XU Zhaosheng; YAN Pu; WANG Daobin: "Obstacle Detection Based on Laser Ranging Radar and Machine Vision", Instrument Technology, no. 08 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115331483A (en) * 2021-05-11 2022-11-11 宗盈国际科技股份有限公司 Intelligent locomotive warning device and system
CN113568002A (en) * 2021-06-24 2021-10-29 中车南京浦镇车辆有限公司 Rail transit active obstacle detection device based on laser and image data fusion
CN113570622A (en) * 2021-07-26 2021-10-29 北京全路通信信号研究设计院集团有限公司 Obstacle determination method and device, electronic equipment and storage medium
CN113794922A (en) * 2021-09-13 2021-12-14 深圳创维-Rgb电子有限公司 Radar-based television curvature adjusting method and device, television and storage medium
WO2023035414A1 (en) * 2021-09-13 2023-03-16 深圳创维-Rgb电子有限公司 Radar-based method and apparatus for adjusting curvature of television, and television and storage medium

Similar Documents

Publication Publication Date Title
CN112528771A (en) Obstacle detection method, obstacle detection device, electronic device, and storage medium
US11320833B2 (en) Data processing method, apparatus and terminal
CN110861639A (en) Parking information fusion method and device, electronic equipment and storage medium
CN112528773B (en) Obstacle information fusion method and device, electronic equipment and storage medium
CN112967283B (en) Target identification method, system, equipment and storage medium based on binocular camera
US10268904B2 (en) Vehicle vision system with object and lane fusion
CN104966062B (en) Video monitoring method and device
CN112651359A (en) Obstacle detection method, obstacle detection device, electronic apparatus, and storage medium
CN111724558B (en) Monitoring method, monitoring device and intrusion alarm system
CN109191513B (en) Power equipment stereo matching method based on global optimization
CN111324143A (en) Unmanned aerial vehicle autonomous patrol obstacle avoidance system, method and computer equipment
KR102265980B1 (en) Device and method for monitoring ship and port
JP4102885B2 (en) Parked vehicle detection method and parked vehicle detection system
CN111213153A (en) Target object motion state detection method, device and storage medium
JP2013137767A (en) Obstacle detection method and driver support system
JP5539250B2 (en) Approaching object detection device and approaching object detection method
CN112561941A (en) Cliff detection method and device and robot
CN111386530A (en) Vehicle detection method and apparatus
CN113255444A (en) Training method of image recognition model, image recognition method and device
CN107886544A (en) IMAQ control method and device for vehicle calibration
CN114120254A (en) Road information identification method, device and storage medium
CN113569812A (en) Unknown obstacle identification method and device and electronic equipment
CN113128430A (en) Crowd gathering detection method and device, electronic equipment and storage medium
JP2008165595A (en) Obstacle detection method, obstacle detection device, and obstacle detection system
CN110800020A (en) Image information acquisition method, image processing equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned
Effective date of abandoning: 20240326