WO2022078463A1 - Vehicle-based obstacle detection method and apparatus (基于汽车的障碍物检测方法及装置) - Google Patents

Vehicle-based obstacle detection method and apparatus

Info

Publication number
WO2022078463A1
WO2022078463A1 (PCT/CN2021/123880, CN2021123880W)
Authority
WO
WIPO (PCT)
Prior art keywords
obstacle
image
detected
target
area
Prior art date
Application number
PCT/CN2021/123880
Other languages
English (en)
French (fr)
Inventor
胡方全
Original Assignee
爱驰汽车(上海)有限公司
Priority date
Filing date
Publication date
Application filed by 爱驰汽车(上海)有限公司
Publication of WO2022078463A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 Local feature extraction by matching or filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30261 Obstacle
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Definitions

  • the invention relates to the field of electronic information, in particular to an obstacle detection method and device based on an automobile.
  • the present invention is proposed in order to provide an automobile-based obstacle detection method and apparatus that overcomes the above problems or at least partially solves the above problems.
  • a vehicle-based obstacle detection method comprising:
  • Image segmentation is performed on each image frame contained in the image to be detected, so as to identify the obstacle area contained in each image frame;
  • the target obstacle corresponding to the image to be detected is detected according to the obstacle areas contained in at least two adjacent image frames, and the detected target obstacle is added to the tracking obstacle set;
  • the real-time location information of the target obstacle included in the tracking obstacle set is detected according to the driving state information of the vehicle acquired in real time, and whether the alarm prompt information is triggered is determined according to the real-time location information of the target obstacle.
  • a vehicle-based obstacle detection device comprising:
  • a correction module adapted to perform distortion correction processing on the original fisheye image obtained by the fisheye camera located on the car, to obtain an image to be detected corresponding to the original fisheye image
  • a segmentation module adapted to perform image segmentation for each image frame contained in the to-be-detected image, so as to identify the obstacle area contained in each image frame;
  • a detection module adapted to detect target obstacles corresponding to the to-be-detected images according to the obstacle areas contained in at least two adjacent image frames, and add the detected target obstacles to the set of tracking obstacles;
  • the tracking module is adapted to detect the real-time position information of the target obstacle included in the tracking obstacle set according to the real-time obtained vehicle driving state information, and judge whether to trigger the alarm prompt information according to the real-time position information of the target obstacle.
  • an electronic device comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with each other through the communication bus;
  • the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to perform operations corresponding to the above-mentioned vehicle-based obstacle detection method.
  • a computer storage medium, wherein at least one executable instruction is stored in the storage medium, and the executable instruction causes the processor to perform operations corresponding to the above-mentioned vehicle-based obstacle detection method.
  • detection can be performed by the fisheye camera. Since the detection range of the fisheye camera is large, obstacles around the vehicle can be detected comprehensively, and the image distortion introduced by the fisheye camera can be removed by distortion correction processing.
  • the target obstacle is detected by the obstacle area contained in at least two adjacent image frames, which can avoid the problem of misjudgment caused by the low resolution of a single image frame, and improve the accuracy of obstacle detection.
  • the real-time position information of the target obstacles included in the tracking obstacle set can be detected in combination with the real-time driving state information of the car: the real-time position of an obstacle can be inferred inversely from the vehicle's motion state, so that obstacles can be tracked continuously even while they temporarily leave the camera's field of view, ensuring driving safety.
  • FIG. 1 shows a flowchart of a vehicle-based obstacle detection method provided by Embodiment 1 of the present invention
  • FIG. 2 shows a flowchart of an automobile-based obstacle detection method provided in Embodiment 2 of the present invention
  • FIG. 3 shows a structural diagram of a vehicle-based obstacle detection device provided in Embodiment 3 of the present invention
  • FIG. 4 shows a schematic structural diagram of an electronic device according to Embodiment 5 of the present invention.
  • FIG. 5 shows a schematic flowchart of the obstacle tracking method in this example
  • FIG. 6 shows a schematic flowchart of the information fusion of the previous frame.
  • FIG. 1 shows a flow chart of a vehicle-based obstacle detection method provided by Embodiment 1 of the present invention. As shown in Figure 1, the method includes:
  • Step S110 Perform distortion correction processing on the original fisheye image obtained by the fisheye camera located on the vehicle, to obtain an image to be detected corresponding to the original fisheye image.
  • the fisheye camera may be a front-view camera installed in the front of the car, or a rear-view camera installed at the rear of the car.
  • the present invention does not limit the specific installation position of the fisheye camera. Since the original fisheye image obtained by the fisheye camera is distorted, a distortion correction process needs to be performed to obtain a corrected image to be detected.
  • Step S120 Perform image segmentation on each image frame included in the image to be detected, so as to identify the obstacle area included in each image frame.
  • image segmentation is performed on the image to be detected obtained after correction, and the image to be detected is segmented into a plurality of regions through image segmentation processing.
  • the image to be detected includes a ground area, an object area on the ground, and a background area.
  • the object area adjacent to the ground area is extracted, so as to identify the obstacle area according to the extracted object area.
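The ground-adjacency test described above can be illustrated over a segmentation label map. This is a sketch only: the class ids, label-map format, and function name are assumptions, since the patent does not specify the segmentation model's output:

```python
import numpy as np

GROUND, OBJECT, BACKGROUND = 0, 1, 2   # hypothetical class ids

def ground_contact_pixels(labels):
    """Return a mask of OBJECT pixels that touch a GROUND pixel
    (4-neighbourhood).  These contact pixels approximate the contour line
    where an obstacle meets the ground, used later for distance calculation."""
    obj = labels == OBJECT
    gnd = labels == GROUND
    touch = np.zeros_like(obj)
    touch[1:, :] |= gnd[:-1, :]   # ground above
    touch[:-1, :] |= gnd[1:, :]   # ground below
    touch[:, 1:] |= gnd[:, :-1]   # ground to the left
    touch[:, :-1] |= gnd[:, 1:]   # ground to the right
    return obj & touch
```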
  • Step S130 Detect target obstacles corresponding to the images to be detected according to the obstacle areas included in at least two adjacent image frames, and add the detected target obstacles to the tracking obstacle set.
  • the target obstacle corresponding to the image to be detected is detected according to the obstacle areas contained in adjacent image frames of the image to be detected.
  • since adjacent image frames usually contain the same obstacle, and the displacement of an obstacle between adjacent frames is usually small, combining two or more adjacent image frames makes it possible to check whether the obstacle falls within a reasonable displacement range, which helps verify whether the identification result is accurate and prevents misidentification.
  • the target obstacle is determined by combining multiple image frames, the determined target obstacle is added to the set of tracking obstacles, so as to realize real-time tracking of the obstacle in the subsequent process.
  • Step S140 Detect the real-time position information of the target obstacle included in the tracking obstacle set according to the real-time driving state information of the vehicle, and determine whether to trigger the alarm prompt information according to the real-time position information of the target obstacle.
  • in order to ensure reliable identification of obstacles, the driving state information of the car is further obtained, and the current position of an obstacle is estimated inversely from the driving state information obtained in real time, thereby realizing continuous tracking of the obstacle; according to the tracking result, it is judged whether an alarm prompt message needs to be sent to remind the driver to avoid the obstacle.
  • the fisheye camera can be used for detection. Since the detection range of the fisheye camera is large, obstacles around the vehicle can be detected comprehensively, and the image distortion introduced by the fisheye camera can be removed by distortion correction processing.
  • the target obstacle is detected by the obstacle area included in at least two adjacent image frames, which can avoid the problem of misjudgment caused by the low resolution of a single image frame, and improve the accuracy of obstacle detection.
  • the real-time position information of the target obstacles included in the tracking obstacle set can be detected in combination with the real-time driving state information of the car: the real-time position of an obstacle can be inferred inversely from the vehicle's motion state, so that obstacles can be tracked continuously even while they temporarily leave the camera's field of view, ensuring driving safety.
  • FIG. 2 shows a flow chart of a vehicle-based obstacle detection method provided by Embodiment 2 of the present invention. As shown in Figure 2, the method includes:
  • Step S210 Perform distortion correction processing on the original fisheye image obtained by the fisheye camera located on the vehicle, to obtain an image to be detected corresponding to the original fisheye image.
  • the fisheye camera may be a front-view camera installed in the front of the car, or a rear-view camera installed at the rear of the car.
  • the present invention does not limit the specific installation position of the fisheye camera. Since the original fisheye image obtained by the fisheye camera is distorted, a distortion correction process needs to be performed to obtain a corrected image to be detected.
  • a correction model based on plane projection or cylindrical projection may be used for correction, and the present invention does not limit the specific implementation details.
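The plane-projection correction can be sketched in a few lines. The following is a minimal illustrative example, not the patent's actual correction model: it assumes an equidistant fisheye model (r = f·theta) and nearest-neighbour resampling, and the function name and focal-length parameters are hypothetical:

```python
import numpy as np

def undistort_fisheye(img, f_fish=300.0, f_pin=300.0):
    """Plane-projection correction for an equidistant fisheye image.

    For every pixel of the corrected (pinhole) output, compute the ray it
    corresponds to, convert the ray angle back to a fisheye image radius,
    and sample the source image by nearest neighbour."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    r_pix = np.hypot(xs - cx, ys - cy)          # radius in corrected image (px)
    theta = np.arctan(r_pix / f_pin)            # ray angle off the optical axis
    r_fish = f_fish * theta                     # equidistant model: r = f * theta
    scale = np.divide(r_fish, r_pix, out=np.ones_like(r_pix), where=r_pix > 0)
    src_x = np.clip(np.rint(cx + (xs - cx) * scale), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(cy + (ys - cy) * scale), 0, h - 1).astype(int)
    return img[src_y, src_x]
```

With equal focal lengths and a tiny image the mapping is essentially the identity; real fisheye parameters would come from camera calibration.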
  • Step S220 Perform image segmentation on each image frame included in the image to be detected, so as to identify the obstacle area included in each image frame.
  • image segmentation is performed on the image to be detected obtained after correction, and the image to be detected is segmented into a plurality of regions through image segmentation processing.
  • the image to be detected includes a ground area, an object area on the ground, and a background area.
  • the object area adjacent to the ground area is extracted, so as to identify the obstacle area according to the extracted object area.
  • the use of image segmentation technology for obstacle detection can separate low and independent obstacles from the ground and other backgrounds to obtain the drivable area of the car.
  • Step S230 Detect target obstacles corresponding to the images to be detected according to the obstacle areas included in at least two adjacent image frames.
  • the target obstacle corresponding to the image to be detected is detected according to the obstacle areas contained in adjacent image frames of the image to be detected.
  • since adjacent image frames usually contain the same obstacle, and the displacement of an obstacle between adjacent frames is usually small, combining two or more adjacent image frames makes it possible to check whether the obstacle falls within a reasonable displacement range, which helps verify whether the identification result is accurate and prevents misidentification.
  • the obstacle prediction area contained in the M+Nth image frame is predicted according to the obstacle area contained in the Mth image frame and the driving state information of the car obtained in real time; the actual obstacle area contained in the M+Nth image frame is determined; whether the actual obstacle area matches the obstacle prediction area is judged, and the target obstacle corresponding to the image to be detected is detected according to the judgment result. Here, the image frames are sorted in order of acquisition time, and M and N are natural numbers. For example, when N equals 1, the obstacle prediction area contained in the current image frame is predicted from the obstacle area contained in the previous image frame.
  • the obstacle prediction area contained in a subsequent image frame can also be predicted in combination with the obstacle areas contained in multiple preceding image frames (referred to as previous frames).
  • accordingly, when N is a natural number greater than 1, instead of predicting the obstacle prediction area contained in the M+Nth image frame from the obstacle area of the Mth image frame alone, it can be predicted from the obstacle areas contained in the Mth through M+N-1th image frames together with the driving state information of the car acquired in real time. Combining multiple previous image frames improves the accuracy of obstacle prediction.
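The forward prediction from ego motion described above can be sketched as a rigid transform of the obstacle's ground points. This is a minimal illustrative example assuming planar motion with constant speed and yaw rate over the interval; the patent does not specify its motion model, and the function name and axis convention are assumptions:

```python
import numpy as np

def predict_points(points, v, yaw_rate, dt):
    """Predict where static obstacle ground points (vehicle frame, metres)
    will appear after the ego vehicle drives for dt seconds at speed v (m/s)
    with constant yaw rate (rad/s).  Static points move by the inverse of
    the ego motion.  Axes: x forward, y left."""
    dpsi = yaw_rate * dt
    if abs(yaw_rate) < 1e-9:                       # straight-line motion
        t = np.array([v * dt, 0.0])
    else:                                          # constant-radius arc
        R = v / yaw_rate
        t = np.array([R * np.sin(dpsi), R * (1.0 - np.cos(dpsi))])
    c, s = np.cos(dpsi), np.sin(dpsi)
    inv_rot = np.array([[c, s], [-s, c]])          # rotate by -dpsi
    return (np.asarray(points, float) - t) @ inv_rot.T
```

Driving straight ahead 1 m moves a point 5 m ahead to 4 m ahead; yawing 90 degrees in place moves it to the vehicle's right.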
  • the actual area of the obstacle refers to the obstacle area included in the image frame obtained by detecting the image frame to be detected.
  • the obstacle prediction area refers to the predicted position of the obstacle in a subsequent image frame, obtained from the position of the actual obstacle area contained in previous image frames combined with the real-time driving state information of the vehicle. Unlike the actual obstacle area, the obstacle prediction area is produced by the prediction algorithm rather than by actual image detection, so it may contain errors.
  • to judge whether the actual obstacle area matches the obstacle prediction area, the following method is used: first, the actual feature information corresponding to the actual obstacle area and the predicted feature information corresponding to the obstacle prediction area are extracted; then, feature matching is performed between the actual feature information and the predicted feature information. If the feature matching succeeds, the target obstacle corresponding to the image to be detected is determined according to the actual obstacle area. Conversely, if the feature matching fails, there is a misjudgment in either the actual obstacle area or the obstacle prediction area, which needs to be verified against subsequent image frames. When there are multiple previous frames, the obstacle prediction area corresponding to each previous frame is matched against the actual obstacle area of the current frame, and the final detection result is determined comprehensively from the multiple matching results.
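The patent does not specify which features are matched. A common stand-in is greedy intersection-over-union matching between detected and predicted areas, sketched here with hypothetical function names and an illustrative threshold:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def match(actual_boxes, predicted_boxes, thr=0.3):
    """Greedily pair each detected obstacle area with its best-overlapping
    predicted area; pairs below the threshold are left unmatched."""
    pairs, used = [], set()
    for i, a in enumerate(actual_boxes):
        best, best_j = thr, None
        for j, p in enumerate(predicted_boxes):
            score = iou(a, p)
            if j not in used and score > best:
                best, best_j = score, j
        if best_j is not None:
            pairs.append((i, best_j))
            used.add(best_j)
    return pairs
```

A detection that overlaps one prediction exactly and another only slightly is paired with the exact one.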
  • Step S240 Add the detected target obstacle to the set of tracking obstacles.
  • the determined target obstacle is added to the set of tracking obstacles, so as to realize real-time tracking of the obstacle in the subsequent process.
  • when the matching succeeds, the obstacle is determined to be a target obstacle. It can be seen that multiple adjacent image frames can verify one another, thereby preventing misidentification. The target obstacles included in the tracking obstacle set need to be tracked continuously during driving.
  • Step S250 Detect the real-time position information of the target obstacle included in the tracked obstacle set according to the driving state information of the vehicle acquired in real time.
  • in order to ensure reliable identification of obstacles, the driving state information of the car is further obtained, and the current position of an obstacle is estimated inversely from the driving state information obtained in real time, thereby realizing continuous tracking of the obstacle; according to the tracking result, it is judged whether an alarm prompt message needs to be sent to remind the driver to avoid the obstacle.
  • identification features are set in advance for such continuous, low obstacles; specifically, preset height thresholds and length thresholds are defined so that such obstacles can be determined quickly and accurately.
  • the undetected part of a continuous obstacle can be predicted from its detected part. A continuous obstacle typically has a fixed height and a length that extends along the road, so the height, length, shape and other characteristics of the undetected part can be predicted from those of the detected part. This further makes it possible to predict the obstacle area that may appear continuously around the vehicle before the undetected part enters the detection range of the camera.
  • a continuous obstacle prediction model can be preset.
  • the height, length, shape and other feature information of the detected part included in the continuous obstacle obtained in real time are input into the continuous obstacle prediction model mentioned above. , so as to predict the height, length, shape, etc. of the undetected part contained in the continuous obstacle according to the model output result.
  • the continuous obstacle prediction model can be trained through machine learning and other methods. It can be seen that this embodiment can predict in advance the continuous obstacles that do not enter the vehicle's field of vision according to the feature that continuous low and low obstacles are usually arranged continuously along the route direction, thereby facilitating risk avoidance.
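As a purely illustrative stand-in for the learned continuous-obstacle prediction model (the patent trains a model by machine learning), a straight-line extrapolation of the detected ground-contact points conveys the idea of extending a curb-like obstacle beyond its detected part; the function name and parameters are hypothetical:

```python
import numpy as np

def extrapolate_curb(points, ahead=2.0, step=0.5):
    """Extrapolate a continuous low obstacle (e.g. a curb) beyond its
    detected part: fit a straight line to the detected ground-contact
    points and extend it `ahead` metres at `step` metre intervals."""
    pts = np.asarray(points, float)
    m, b = np.polyfit(pts[:, 0], pts[:, 1], 1)     # fit y = m*x + b
    x0 = pts[:, 0].max()                           # end of the detected part
    xs = np.arange(x0 + step, x0 + ahead + 1e-9, step)
    return np.stack([xs, m * xs + b], axis=1)      # predicted (x, y) points
```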
  • the current position of the undetected part contained in the target obstacle is predicted according to the driving state information of the car obtained in real time.
  • the current position of the undetected portion included in the target obstacle can be predicted from information such as the running speed of the car, the rotational speed of the steering wheel, and the like.
  • the fisheye camera located on the car is usually a forward-looking fisheye camera located in front of the car, so as to detect the obstacle in advance before the vehicle collides with the obstacle.
  • the vehicle is reversing, it can also be detected by a rear-view fisheye camera located behind the car.
  • Step S260 Determine whether to trigger the alarm prompt information according to the real-time position information of the target obstacle.
  • an alarm prompt message is triggered to remind the driver to avoid the obstacle.
  • the alarm prompt information may be of various types such as voice prompt information, steering wheel vibration information, etc., which is not limited in the present invention.
  • the fisheye cameras installed in the front, left, rear and right of the car can cover a 360-degree field of view around the car, and can observe obstacles that appear near the car at low places.
  • on the one hand, the distortion of the fisheye camera increases the difficulty of obstacle detection, and missed and false detections are common; on the other hand, when four fisheye cameras are used, the images they produce every second occupy most of the bandwidth and computing resources of the vehicle's intelligent system, making real-time detection difficult to guarantee.
  • this example proposes a method for detecting and tracking low obstacles according to the vehicle motion state and the vehicle's forward-looking fisheye camera.
  • the purpose is to overcome the following problems in existing vehicle visual obstacle detection solutions: ordinary front-view cameras cannot detect low obstacles near the vehicle; the contour and position of continuous low obstacles are difficult to detect with visual methods; using four-way fisheye camera signals occupies too much bandwidth and computing resources; and a single camera has a limited field of view.
  • This example can solve the above problems, detect independent/continuous low obstacles outside the car through the front-view fisheye camera, and issue an alarm for obstacles entering the warning range.
  • This example mainly includes: distortion correction, obstacle detection based on image segmentation, obstacle world coordinate calculation, previous frame information fusion, obstacle tracking, and obstacle alarm and display processes.
  • the position of the obstacle on the ground is calculated from the pixels of the contour line along which the obstacle contacts the ground.
  • Combine the detection results of the previous frame to optimize the detection results of this frame.
  • the vehicle motion information is used to continuously track the obstacle. If the shortest distance between the obstacle and the vehicle or the driving trajectory is less than the preset warning distance, it will alarm and display to the user.
  • this example proposes a method for detecting and tracking low obstacles based on a forward-looking fisheye camera during low-speed driving.
  • deep-learning image segmentation technology is used to detect low obstacles, including the continuous low obstacles that are difficult to detect with traditional obstacle detection, and fusing information across frames provides more reliable detection results.
  • this example only requires the use of a forward-looking fisheye camera and the collection of vehicle motion information. The hardware cost is low, and it is more real-time than the four-way fisheye-based visual detection system.
  • this example can be implemented by an intelligent system in the vehicle, which is further divided into distortion correction, obstacle detection based on image segmentation, obstacle world coordinate calculation, previous frame information fusion, obstacle tracking, and obstacle warning and display modules.
  • The specific implementation principles of each module are described in detail below:
  • Distortion correction module: the fisheye image I is collected by the forward-looking fisheye camera. To make the segmentation result adequate for feature extraction, this scheme first performs distortion correction on the collected fisheye images.
  • This solution can adopt a correction model based on plane projection or cylindrical projection, the former can better eliminate the radial distortion of the fisheye camera, and the latter can maintain the original horizontal and vertical shape of the image at the center of the fisheye.
  • the corrected image I_R can be obtained.
  • Obstacle detection module based on image segmentation: this scheme trains a deep-learning image segmentation model M_S. Using image segmentation for obstacle detection can separate independent low obstacles from the ground and other background to obtain the drivable area, and can solve the problem that traditional solutions cannot identify continuous low obstacles.
  • for the i-th frame image I_R^i, the edge of the segmented obstacle is extracted to obtain the obstacle contour C_i and the contour line C_i^G in contact with the ground; the latter is used for subsequent distance calculation and previous-frame information fusion.
  • Obstacle world coordinate calculation module: combining the ground-contact contour C_i^G obtained by the obstacle detection module, the image distortion correction model M_R, and the camera calibration parameter H, the contour C_i^G is projected onto the ground and the world coordinates L_i of the obstacle's ground contour are calculated. All detected obstacles form the set {L_i}.
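The ground projection can be sketched as applying a 3x3 image-to-ground homography to the contact-contour pixels. This is an illustrative sketch: H here is assumed to be a plain homography matrix from calibration, which may differ from the form of the patent's calibration parameter H:

```python
import numpy as np

def image_to_ground(contour_px, H):
    """Project ground-contact contour pixels (u, v) to ground coordinates
    (x, y) using the image-to-ground homography H (3x3)."""
    pts = np.asarray(contour_px, float)
    homog = np.column_stack([pts, np.ones(len(pts))])  # homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # perspective divide
```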
  • Previous frame information fusion module: to ensure real-time detection, a smaller-resolution image may be used, so the contour detected in any single frame may be incomplete or inaccurate, and missed or false detections may occur. This scheme optimizes the detection results through previous-frame information fusion so that the detected obstacle contour and position are more reliable.
  • the previous frame information fusion steps are shown in Figure 6, and the details are as follows:
  • the set {L_i} of obstacle contour coordinates in the current frame is obtained by calculating the detection result of the current frame.
  • Obstacle tracking module: for all confirmed tracked obstacles in {L_i}, when an obstacle leaves the field of view of the forward-looking fisheye camera, its ground displacement is computed inversely from the vehicle motion information so that the obstacle can continue to be tracked.
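Out-of-view tracking amounts to dead reckoning: integrate the ego pose from wheel speed and yaw rate, then express the obstacle's last observed position in the current vehicle frame. A minimal sketch with hypothetical names and a simple planar motion model:

```python
import math

class DeadReckoningTracker:
    """Track an obstacle after it leaves the camera field of view by
    integrating the ego pose and re-expressing the obstacle's last known
    position in the current vehicle frame (x forward, y left)."""

    def __init__(self, obs_xy):
        self.obs = obs_xy                   # last observed position
        self.x = self.y = self.psi = 0.0    # ego pose in that same frame

    def update(self, v, yaw_rate, dt):
        # Simple Euler integration of planar ego motion.
        self.x += v * math.cos(self.psi) * dt
        self.y += v * math.sin(self.psi) * dt
        self.psi += yaw_rate * dt

    def position(self):
        # Obstacle position relative to the current vehicle pose.
        dx, dy = self.obs[0] - self.x, self.obs[1] - self.y
        c, s = math.cos(self.psi), math.sin(self.psi)
        return (c * dx + s * dy, -s * dx + c * dy)
```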
  • Obstacle alarm and display module: the vehicle's trajectory line is calculated from the vehicle motion information, and then the closest distance between each obstacle and the vehicle or the trajectory line is computed. If the distance is less than the preset warning distance, the obstacle is projected onto a specific display plane through perspective transformation and displayed to the user in the vehicle.
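The closest-distance check reduces to point-to-polyline distance against the predicted trajectory. A minimal sketch (function names and the warning distance are illustrative, not from the patent):

```python
def point_segment_dist(p, a, b):
    """Distance from 2-D point p to the segment from a to b."""
    ax, ay = b[0] - a[0], b[1] - a[1]
    px, py = p[0] - a[0], p[1] - a[1]
    denom = ax * ax + ay * ay
    t = max(0.0, min(1.0, (px * ax + py * ay) / denom)) if denom else 0.0
    cx, cy = a[0] + t * ax, a[1] + t * ay      # closest point on the segment
    return ((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5

def should_alarm(obstacle_pts, trajectory, warn_dist=0.5):
    """Alarm if any obstacle point comes within warn_dist of the
    trajectory polyline."""
    return any(point_segment_dist(p, a, b) < warn_dist
               for p in obstacle_pts
               for a, b in zip(trajectory, trajectory[1:]))
```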
  • FIG. 5 shows a schematic flowchart of the obstacle tracking method in this example.
  • the coordinates of the obstacle are obtained through the fisheye image;
  • the vehicle motion state is obtained through information such as the rotational speed or steering of the wheels, so as to realize obstacle tracking in combination with the vehicle motion state, vehicle trajectory and obstacle coordinate information.
  • FIG. 6 shows a schematic flowchart of information fusion of the previous frame. As shown in FIG. 6 , feature matching is performed according to the obstacles detected in this frame and multiple predicted obstacles in the previous frame, so as to determine the target obstacle according to the feature matching result.
  • when there are multiple previous frames, the predicted obstacles of each previous frame can be matched against the detected obstacles of the current frame to obtain one matching result per previous frame, and the final result is determined according to the number of matches. For example, if the number of unmatched occurrences reaches a preset threshold, the obstacle is determined to be a false detection.
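The match-count bookkeeping can be sketched as a small per-candidate counter. The confirm/drop thresholds and state names below are illustrative assumptions; the patent only says the final result is determined from the number of matches:

```python
class FusionCounter:
    """Confirm or discard a candidate obstacle by counting how often the
    current-frame detection matches predictions from previous frames."""

    def __init__(self, confirm_after=3, drop_after=3):
        self.hits = self.misses = 0
        self.confirm_after, self.drop_after = confirm_after, drop_after

    def observe(self, matched):
        if matched:
            self.hits += 1
            self.misses = 0          # a match resets the miss streak
        else:
            self.misses += 1

    def state(self):
        if self.misses >= self.drop_after:
            return "false_detection"
        if self.hits >= self.confirm_after:
            return "tracked"
        return "candidate"
```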
  • the method in the embodiment of the present invention can perform detection through the fisheye camera. Since the detection range of the fisheye camera is large, obstacles around the vehicle can be detected comprehensively, and the image distortion of the fisheye camera can be removed through distortion correction processing.
  • the target obstacle is detected by the obstacle area included in at least two adjacent image frames, which can avoid the problem of misjudgment caused by the low resolution of a single image frame, and improve the accuracy of obstacle detection.
  • the real-time position information of the target obstacles included in the tracking obstacle set can be detected in combination with the real-time driving state information of the car: the real-time position of an obstacle can be inferred inversely from the vehicle's motion state, so that obstacles can be tracked continuously even while they temporarily leave the camera's field of view, ensuring driving safety. In addition, whether the obstacle detected in the current frame is correct is determined comprehensively in combination with the prediction results of multiple previous frames, which can significantly improve detection accuracy.
  • FIG. 3 shows a schematic structural diagram of a vehicle-based obstacle detection device provided in Embodiment 3 of the present invention, which specifically includes:
  • the correction module 31 is adapted to perform distortion correction processing on the original fisheye image obtained by the fisheye camera located on the vehicle, so as to obtain an image to be detected corresponding to the original fisheye image;
  • the segmentation module 32 is adapted to perform image segmentation for each image frame contained in the to-be-detected image, so as to identify the obstacle area contained in each image frame;
  • the detection module 33 is adapted to detect target obstacles corresponding to the to-be-detected images according to the obstacle areas contained in at least two adjacent image frames, and add the detected target obstacles to the tracking obstacle set;
  • the tracking module 34 is adapted to detect the real-time position information of the target obstacle included in the tracking obstacle set according to the real-time obtained driving state information of the vehicle, and judge whether to trigger the alarm prompt information according to the real-time position information of the target obstacle.
  • the detection module is specifically adapted to:
  • predict, according to the obstacle area contained in the Mth image frame and the driving state information of the vehicle acquired in real time, the obstacle prediction area contained in the (M+N)th image frame; and determine the obstacle actual area contained in the (M+N)th image frame, judge whether the obstacle actual area matches the obstacle prediction area, and detect the target obstacle corresponding to the to-be-detected image according to the judgment result; wherein M and N are natural numbers.
  • the detection module is specifically adapted to:
  • extract actual feature information corresponding to the obstacle actual area and predicted feature information corresponding to the obstacle prediction area; perform feature matching between the two; and, if the feature matching succeeds, determine the target obstacle corresponding to the to-be-detected image according to the obstacle actual area.
  • when N is a natural number greater than 1, the detection module is specifically adapted to:
  • predict the obstacle prediction area contained in the (M+N)th image frame according to the obstacle areas contained in the Mth to (M+N-1)th image frames and the driving state information of the vehicle acquired in real time.
  • when the height of the target obstacle is lower than a preset height threshold and its length is greater than a preset length threshold, the target obstacle is determined to be a continuous obstacle;
  • the undetected part of the continuous obstacle can be predicted from the detected part contained in it.
  • the tracking module is specifically adapted to:
  • if the current image frame does not contain the target obstacle, predict the current position of the undetected part contained in the target obstacle according to the driving state information of the vehicle acquired in real time.
  • the fisheye camera located on the car is a forward-looking fisheye camera.
  • the fourth embodiment of the present application provides a non-volatile computer storage medium storing at least one executable instruction, and the computer-executable instruction can execute the vehicle-based obstacle detection method in any of the foregoing method embodiments.
  • the executable instructions may specifically be used to cause the processor to perform the corresponding operations in the foregoing method embodiments.
  • FIG. 4 shows a schematic structural diagram of an electronic device according to Embodiment 5 of the present invention.
  • the specific embodiment of the present invention does not limit the specific implementation of the electronic device.
  • the electronic device may include: a processor (processor) 402 , a communication interface (Communications Interface) 406 , a memory (memory) 404 , and a communication bus 408 .
  • the processor 402 , the communication interface 406 , and the memory 404 communicate with each other through the communication bus 408 .
  • the communication interface 406 is used to communicate with network elements of other devices such as clients or other servers.
  • the processor 402 is configured to execute the program 410, and specifically may execute the relevant steps in the above embodiments of the vehicle-based obstacle detection method.
  • the program 410 may include program code including computer operation instructions.
  • the processor 402 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention.
  • the one or more processors included in the electronic device may be the same type of processors, such as one or more CPUs; or may be different types of processors, such as one or more CPUs and one or more ASICs.
  • the memory 404 is used to store the program 410 .
  • Memory 404 may include high-speed RAM memory, and may also include non-volatile memory, such as at least one disk memory.
  • the program 410 may specifically be used to cause the processor 402 to perform the corresponding operations in the foregoing method embodiments.
  • the modules in the device of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment.
  • the modules or units or components of the embodiments may be combined into one module or unit or component, and they may furthermore be divided into multiple sub-modules or sub-units or sub-assemblies. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings), and all processes or units of any method or device so disclosed, may be combined in any combination.
  • Each feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
  • Various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof.
  • a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components according to the embodiments of the present invention.
  • the present invention may also be implemented as a device or apparatus program (e.g., a computer program and a computer program product) for performing part or all of the methods described herein.
  • Such a program implementing the present invention may be stored on a computer-readable medium, or may be in the form of one or more signals. Such signals may be downloaded from Internet sites, or provided on carrier signals, or in any other form.


Abstract

A vehicle-based obstacle detection method and device, comprising: performing distortion correction processing on an original fisheye image acquired by a fisheye camera located on a vehicle, to obtain an image to be detected corresponding to the original fisheye image (S110); performing image segmentation on each image frame contained in the image to be detected, so as to identify the obstacle area contained in each image frame (S120); detecting, according to the obstacle areas contained in at least two adjacent image frames, the target obstacle corresponding to the image to be detected, and adding the detected target obstacle to a tracking obstacle set (S130); and detecting, according to driving state information of the vehicle acquired in real time, the real-time position information of the target obstacles contained in the tracking obstacle set, and judging whether to trigger alarm prompt information (S140). The method enables continuous tracking of obstacles.

Description

Vehicle-based obstacle detection method and device
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 202011111375.7, entitled "Vehicle-based obstacle detection method and device" and filed with the China Patent Office on October 16, 2020, the entire contents of which are incorporated herein by reference.
Technical field
The present invention relates to the field of electronic information, and in particular to a vehicle-based obstacle detection method and device.
Background
While a vehicle is being driven, detecting the obstacles around it efficiently and accurately is a key problem affecting driving safety. In the prior art, detection is mostly performed with radar detectors, visual sensors or vehicle cameras. However, radar-based detection has a limited field of view, any single sensor has blind spots, and the software and hardware development required for multi-sensor fusion is difficult.
It can thus be seen that, in the prior art, detection with a single device cannot avoid problems such as visual blind spots, while combining multiple devices for detection significantly increases development cost. A solution that can accurately detect the obstacles around a vehicle at low cost is therefore urgently needed.
Summary
In view of the above problems, the present invention is proposed in order to provide a vehicle-based obstacle detection method and device that overcome, or at least partially solve, the above problems.
According to one aspect of the present invention, a vehicle-based obstacle detection method is provided, comprising:
performing distortion correction processing on an original fisheye image acquired by a fisheye camera located on a vehicle, to obtain an image to be detected corresponding to the original fisheye image;
performing image segmentation on each image frame contained in the image to be detected, so as to identify the obstacle area contained in each image frame;
detecting, according to the obstacle areas contained in at least two adjacent image frames, the target obstacle corresponding to the image to be detected, and adding the detected target obstacle to a tracking obstacle set;
detecting, according to driving state information of the vehicle acquired in real time, the real-time position information of the target obstacles contained in the tracking obstacle set, and judging, according to the real-time position information of the target obstacles, whether to trigger alarm prompt information.
According to another aspect of the present invention, a vehicle-based obstacle detection device is provided, comprising:
a correction module, adapted to perform distortion correction processing on an original fisheye image acquired by a fisheye camera located on a vehicle, to obtain an image to be detected corresponding to the original fisheye image;
a segmentation module, adapted to perform image segmentation on each image frame contained in the image to be detected, so as to identify the obstacle area contained in each image frame;
a detection module, adapted to detect, according to the obstacle areas contained in at least two adjacent image frames, the target obstacle corresponding to the image to be detected, and add the detected target obstacle to a tracking obstacle set;
a tracking module, adapted to detect, according to driving state information of the vehicle acquired in real time, the real-time position information of the target obstacles contained in the tracking obstacle set, and judge, according to the real-time position information of the target obstacles, whether to trigger alarm prompt information.
According to another aspect of the present invention, an electronic device is provided, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with each other through the communication bus;
the memory is used to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the vehicle-based obstacle detection method described above.
According to another aspect of the present invention, a computer storage medium is provided, wherein at least one executable instruction is stored in the storage medium, and the executable instruction causes a processor to perform the operations corresponding to the vehicle-based obstacle detection method described above.
In the vehicle-based obstacle detection method and device provided by the present invention, detection is performed with a fisheye camera. Because the fisheye camera has a wide detection range, obstacles around the vehicle can be detected comprehensively, and the image distortion inherent to the fisheye camera can be corrected through distortion correction processing and similar means. In addition, detecting the target obstacle from the obstacle areas contained in at least two adjacent image frames avoids misjudgments caused by the low resolution of a single image frame and improves the accuracy of obstacle detection. Moreover, detecting the real-time position information of the target obstacles contained in the tracking obstacle set in combination with the driving state information of the vehicle acquired in real time makes it possible to infer the real-time position of an obstacle from the vehicle's motion state, so that the obstacle can be tracked continuously even when it temporarily leaves the camera's field of view, ensuring driving safety.
The above description is merely an overview of the technical solutions of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the specification, and in order to make the above and other objects, features and advantages of the present invention more apparent and comprehensible, specific embodiments of the present invention are set forth below.
Brief description of the drawings
By reading the following detailed description of the preferred embodiments, various other advantages and benefits will become clear to those of ordinary skill in the art. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be construed as limiting the present invention. Throughout the drawings, the same reference numerals denote the same components. In the drawings:
FIG. 1 shows a flowchart of a vehicle-based obstacle detection method provided by Embodiment 1 of the present invention;
FIG. 2 shows a flowchart of a vehicle-based obstacle detection method provided by Embodiment 2 of the present invention;
FIG. 3 shows a structural diagram of a vehicle-based obstacle detection device provided by Embodiment 3 of the present invention;
FIG. 4 shows a schematic structural diagram of an electronic device provided by Embodiment 5 of the present invention;
FIG. 5 shows a schematic flowchart of the obstacle tracking method in this example;
FIG. 6 shows a schematic flowchart of previous-frame information fusion.
Preferred embodiments of the present disclosure
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and its scope fully conveyed to those skilled in the art.
Embodiment 1
FIG. 1 shows a flowchart of a vehicle-based obstacle detection method provided by Embodiment 1 of the present invention. As shown in FIG. 1, the method includes:
Step S110: performing distortion correction processing on the original fisheye image acquired by a fisheye camera located on the vehicle, to obtain an image to be detected corresponding to the original fisheye image.
Specifically, the fisheye camera may be a forward-looking camera installed at the front of the vehicle or a rear-view camera installed at the rear; the present invention does not limit the specific installation position of the fisheye camera. Since the original fisheye image acquired by the fisheye camera is distorted, distortion correction processing needs to be performed to obtain the corrected image to be detected.
Step S120: performing image segmentation on each image frame contained in the image to be detected, so as to identify the obstacle area contained in each image frame.
Specifically, image segmentation is performed on the corrected image to be detected, dividing it into multiple regions. Typically, the image to be detected contains a ground region, regions of objects located on the ground, and a background region. Accordingly, after the background region is removed, the object regions adjacent to the ground region are extracted, and obstacle areas are identified from the extracted object regions.
Step S130: detecting, according to the obstacle areas contained in at least two adjacent image frames, the target obstacle corresponding to the image to be detected, and adding the detected target obstacle to the tracking obstacle set.
Specifically, to prevent misjudgments caused by problems such as unclear pixels in a single image frame, in this embodiment the target obstacle corresponding to the image to be detected is detected according to the obstacle areas contained in at least two adjacent image frames. Since adjacent image frames usually contain the same obstacle, and the displacement of an obstacle between adjacent frames is usually small, combining two or more adjacent image frames makes it possible to check whether an obstacle's displacement falls within a reasonable range, thereby helping verify whether the obstacle recognition result is accurate and preventing misrecognition. Accordingly, if a target obstacle is determined by combining multiple image frames, the determined target obstacle is added to the tracking obstacle set so that the obstacle can be tracked in real time in subsequent processing.
Step S140: detecting, according to driving state information of the vehicle acquired in real time, the real-time position information of the target obstacles contained in the tracking obstacle set, and judging, according to the real-time position information of the target obstacles, whether to trigger alarm prompt information.
Specifically, during real-time tracking, an obstacle may temporarily disappear from the camera's field of view. Therefore, to ensure reliable recognition, the driving state information of the vehicle is further acquired, and the current position of the obstacle is inferred in reverse from the driving state information acquired in real time, thereby achieving continuous tracking of the obstacle. Whether an alarm prompt message needs to be triggered is then judged according to the tracking result, so as to remind the driver to avoid the obstacle.
It can thus be seen that the vehicle-based obstacle detection method provided by the present invention performs detection with a fisheye camera. Because the fisheye camera has a wide detection range, obstacles around the vehicle can be detected comprehensively, and the image distortion inherent to the fisheye camera can be corrected through distortion correction processing. In addition, detecting the target obstacle from the obstacle areas contained in at least two adjacent image frames avoids misjudgments caused by the low resolution of a single image frame and improves the accuracy of obstacle detection. Moreover, detecting the real-time position information of the target obstacles contained in the tracking obstacle set in combination with the driving state information of the vehicle acquired in real time makes it possible to infer the real-time position of an obstacle from the vehicle's motion state, so that the obstacle can be tracked continuously even when it temporarily leaves the camera's field of view, ensuring driving safety.
Embodiment 2
FIG. 2 shows a flowchart of a vehicle-based obstacle detection method provided by Embodiment 2 of the present invention. As shown in FIG. 2, the method includes:
Step S210: performing distortion correction processing on the original fisheye image acquired by a fisheye camera located on the vehicle, to obtain an image to be detected corresponding to the original fisheye image.
Specifically, the fisheye camera may be a forward-looking camera installed at the front of the vehicle or a rear-view camera installed at the rear; the present invention does not limit its installation position. Since the original fisheye image acquired by the fisheye camera is distorted, distortion correction processing needs to be performed to obtain the corrected image to be detected. In specific implementations, a correction model based on planar projection or cylindrical projection may be used; the present invention does not limit the implementation details.
Step S220: performing image segmentation on each image frame contained in the image to be detected, so as to identify the obstacle area contained in each image frame.
Specifically, image segmentation is performed on the corrected image to be detected, dividing it into multiple regions. Typically, the image to be detected contains a ground region, regions of objects located on the ground, and a background region. Accordingly, after the background region is removed, the object regions adjacent to the ground region are extracted, and obstacle areas are identified from them. In specific implementations, obstacle detection based on image segmentation can separate independent low obstacles from the ground and other background, thereby obtaining the drivable area of the vehicle.
Step S230: detecting, according to the obstacle areas contained in at least two adjacent image frames, the target obstacle corresponding to the image to be detected.
Specifically, to prevent misjudgments caused by factors such as unclear pixels in a single image frame, in this embodiment the target obstacle corresponding to the image to be detected is detected according to the obstacle areas contained in at least two adjacent image frames. Since adjacent image frames usually contain the same obstacle, and the displacement of an obstacle between adjacent frames is usually small, combining two or more adjacent image frames makes it possible to check whether an obstacle's displacement falls within a reasonable range, thereby helping verify whether the obstacle recognition result is accurate and preventing misrecognition.
In specific implementations, the obstacle prediction area contained in the (M+N)th image frame is predicted according to the obstacle area contained in the Mth image frame and the driving state information of the vehicle acquired in real time; the obstacle actual area contained in the (M+N)th image frame is determined, whether the obstacle actual area matches the obstacle prediction area is judged, and the target obstacle corresponding to the image to be detected is detected according to the judgment result; the image frames are ordered by acquisition time, and M and N are natural numbers. For example, when N equals 1, the obstacle prediction area contained in the current image frame is predicted from the obstacle area contained in the previous image frame. When N is greater than 1, the obstacle prediction area contained in a later frame can be predicted by combining the obstacle areas contained in multiple earlier image frames (called previous frames or preceding frames for short). Thus, when N is a natural number greater than 1, predicting the obstacle prediction area contained in the (M+N)th image frame from the obstacle area contained in the Mth image frame and the real-time driving state information specifically means predicting it from the obstacle areas contained in the Mth to (M+N-1)th image frames together with the driving state information of the vehicle acquired in real time. Combining multiple earlier image frames improves the accuracy of the obstacle prediction.
Here, the obstacle actual area refers to the obstacle area obtained by detecting the image frame to be detected itself. The obstacle prediction area refers to the position at which the obstacle is predicted to appear in a later image frame, derived from the position of the obstacle actual area in earlier image frames combined with the vehicle's real-time driving state information. Unlike the obstacle actual area, the obstacle prediction area is produced by an algorithm rather than obtained by actual image detection, and may therefore contain errors.
Specifically, judging whether the obstacle actual area matches the obstacle prediction area and detecting the target obstacle corresponding to the image to be detected according to the judgment result is implemented as follows: first, actual feature information corresponding to the obstacle actual area and predicted feature information corresponding to the obstacle prediction area are extracted; then, feature matching is performed between the actual feature information and the predicted feature information; if the feature matching succeeds, the target obstacle corresponding to the image to be detected is determined according to the obstacle actual area. Conversely, if the feature matching fails, this indicates a possible misjudgment in either the obstacle actual area or the obstacle prediction area, and verification with subsequent image frames is required. When there are multiple previous frames, the obstacle prediction area of each previous frame is matched against the obstacle actual area of the current frame, and the final detection result is determined comprehensively from the multiple matching results.
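One simple way to realize such matching, using box overlap (IOU) as the feature criterion as the example later in this description suggests, can be sketched as follows; the 0.3 threshold and the axis-aligned box representation are illustrative assumptions, not values from the patent:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_area(actual_box, predicted_box, iou_threshold=0.3):
    """Feature matching succeeds when the actual and predicted areas overlap
    sufficiently; a failed match flags a possible misjudgment."""
    return iou(actual_box, predicted_box) >= iou_threshold
```

A matched pair confirms the detection; an unmatched actual area is held for verification against subsequent frames.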
Step S240: adding the detected target obstacle to the tracking obstacle set.
Accordingly, if a target obstacle is determined by combining multiple image frames, the determined target obstacle is added to the tracking obstacle set so that it can be tracked in real time in subsequent processing. An obstacle is determined to be a target obstacle only when it appears both in the obstacle prediction area determined from earlier image frames and in the obstacle actual area determined from later image frames. In this way, multiple adjacent image frames can verify one another, preventing misrecognition. The target obstacles contained in the tracking obstacle set are those that need to be tracked continuously while driving.
Step S250: detecting, according to the driving state information of the vehicle acquired in real time, the real-time position information of the target obstacles contained in the tracking obstacle set.
Specifically, during real-time tracking, an obstacle may temporarily disappear from the camera's field of view. Therefore, to ensure reliable recognition, the driving state information of the vehicle is further acquired, and the current position of the obstacle is inferred in reverse from the driving state information acquired in real time, thereby achieving continuous tracking of the obstacle. Whether an alarm prompt message needs to be triggered is judged from the tracking result, so as to remind the driver to avoid the obstacle.
In implementing the present invention, the inventor found that continuous, low obstacles such as curbs or roadside fences may appear while driving. Because such obstacles are low and continuous in length, ordinary detection schemes have difficulty detecting them effectively. To solve this problem, this embodiment sets recognition features for such continuous low obstacles in advance, specifically defining a preset height threshold and a preset length threshold so that they can be identified quickly and accurately. In specific implementations, when the height of a detected target obstacle is below the preset height threshold and its length is greater than the preset length threshold, the target obstacle is determined to be a continuous obstacle. Accordingly, based on the shape characteristics of a continuous obstacle, the undetected part contained in it can be predicted from the detected part contained in it. For example, for obstacles such as curbs or fences, the height is fixed and the length extends along the road; therefore, from features such as the height, length and shape of the detected part of the continuous obstacle, the height, length and shape of the undetected part can be predicted, making it possible to predict in advance the obstacle areas that may appear continuously around the vehicle before the undetected part enters the camera's detection range. For the prediction itself, a continuous-obstacle prediction model may be set in advance; feature information such as the height, length and shape of the detected part, acquired in real time, is input into the model, and the height, length and shape of the undetected part are predicted from the model output. The continuous-obstacle prediction model can be trained by machine learning or similar means. This embodiment can therefore exploit the fact that continuous low obstacles are usually arranged continuously along the road to predict in advance continuous obstacles that have not yet entered the vehicle's field of view, which helps avoid risk. Accordingly, when detecting the real-time position information of the target obstacles in the tracking obstacle set according to the driving state information acquired in real time, if the current image frame does not contain the target obstacle, the current position of the undetected part contained in the target obstacle is predicted from the driving state information acquired in real time, for example from information such as the vehicle's speed and steering-wheel rotation. In this embodiment, the fisheye camera on the vehicle is usually a forward-looking fisheye camera at the front of the vehicle, so that obstacles can be detected before the vehicle collides with them; of course, during reversing, detection can also be performed with a rear-view fisheye camera at the rear of the vehicle.
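The threshold test and a linear stand-in for the continuous-obstacle prediction can be sketched as follows; the threshold values (0.3 m, 2.0 m) are illustrative assumptions, and the straight-line extrapolation is a simplification of the model-based prediction described above:

```python
def is_continuous_obstacle(height_m, length_m,
                           height_threshold_m=0.3, length_threshold_m=2.0):
    """A low, long obstacle (e.g. a curb or roadside fence) counts as
    continuous: its undetected part can be extrapolated from the detected
    part. Threshold values here are illustrative, not from the patent."""
    return height_m < height_threshold_m and length_m > length_threshold_m

def extrapolate(detected_points, extra_length_m):
    """Extend the detected ground polyline along its last segment direction,
    a linear stand-in for the learned prediction model's output."""
    (x0, y0), (x1, y1) = detected_points[-2], detected_points[-1]
    seg = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    ux, uy = (x1 - x0) / seg, (y1 - y0) / seg
    return (x1 + ux * extra_length_m, y1 + uy * extra_length_m)
```

In practice the extrapolated points would only be trusted out to a bounded distance ahead of the detected part.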
Step S260: judging, according to the real-time position information of the target obstacle, whether to trigger alarm prompt information.
Specifically, when it is judged from the real-time position information that a target obstacle is close to the vehicle, the alarm prompt information is triggered to remind the driver to avoid the obstacle. The alarm prompt information may take many forms, such as a voice prompt or steering-wheel vibration, which the present invention does not limit.
For ease of understanding, the specific implementation details of Embodiment 2 of the present invention are described in detail below through a concrete example:
First, the technical background of this example is briefly introduced. To realize intelligent driving, obstacles outside the vehicle must be detected and tracked. At present there are many solutions for detecting obstacles outside the vehicle (such as bollards, people and vehicles), while detecting low, continuous obstacles (such as curbs) still presents many difficulties. Existing algorithms for detecting obstacles outside the vehicle include radar-based schemes, forward-camera-based schemes, and combinations of the two; detecting continuous obstacles often requires combining radar and visual sensors. Radar-based detection has a limited field of view, any single sensor has blind spots, and the software and hardware development for multi-sensor fusion is difficult. A fisheye camera, by contrast, has a wide field of view: fisheye cameras installed at the front, left, rear and right of the vehicle can cover a 360-degree view around it, and can observe low obstacles appearing close to the vehicle. However, fisheye distortion makes obstacle detection considerably harder and often leads to missed and false detections. Moreover, when four fisheye cameras are used, the images acquired by the four cameras every second occupy most of the bandwidth and computing resources of the vehicle's intelligent system, making real-time detection hard to guarantee.
To solve the above problems, this example proposes a method for detecting and tracking low obstacles based on the vehicle's motion state and a forward-looking fisheye camera, aiming to overcome the following technical problems of existing vehicle-mounted visual obstacle detection schemes: an ordinary forward camera cannot detect low obstacles close to the vehicle; the contour and position of continuous low obstacles are difficult to detect by visual methods; and four-way fisheye camera signals occupy excessive bandwidth and computing resources while a single camera's field of view is limited. This example detects independent and continuous low obstacles outside the vehicle with a forward-looking fisheye camera and raises an alarm for obstacles entering the warning range. It mainly comprises the following stages: distortion correction, obstacle detection based on image segmentation, obstacle world-coordinate computation, previous-frame information fusion, obstacle tracking, and obstacle alarm and display. First, the fisheye image is distortion-corrected, and pixel segmentation is performed on the corrected image to obtain the image contour of each obstacle. The obstacle's position on the ground is computed from the contour line where the obstacle's pixels touch the ground. The current frame's detection result is optimized with the previous frame's detection result. When an obstacle leaves the field of view of the forward-looking fisheye camera, it is tracked continuously using the vehicle's motion information. If the nearest distance from an obstacle to the vehicle or to its trajectory line is smaller than a preset warning distance, an alarm is raised and displayed to the user. Specifically, this example proposes a method for detecting and tracking low obstacles during low-speed driving based on a forward-looking fisheye camera. It completes the detection of low obstacles with deep-learning image segmentation, can in particular detect the continuous low obstacles that are difficult for traditional obstacle detection, and provides more reliable detection results through fusion of information across frames. Moreover, the example requires only a forward-looking fisheye camera together with vehicle motion information collection, so the hardware cost is low, and compared with a visual detection system based on four fisheye cameras it also offers better real-time performance.
In specific implementations, this example can be carried out by the intelligent system in the vehicle, which is further divided into modules for distortion correction, obstacle detection based on image segmentation, obstacle world-coordinate computation, previous-frame information fusion, obstacle tracking, and obstacle alarm and display. The specific implementation principles of each module are described in detail below:
(1) Distortion correction module: a fisheye image I is acquired by the forward-looking fisheye camera. So that the image segmentation result is good enough for feature extraction, this scheme first performs distortion correction on the acquired fisheye image. A correction model based on planar projection or cylindrical projection may be used: the former removes the radial distortion of the fisheye camera well, while the latter keeps the image near the fisheye centre in its original horizontal and vertical form. Using the correction model M_R, the corrected image I_R is obtained.
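As a rough illustration of planar-projection correction, the radial remapping for an ideal equidistant fisheye model can be written as below; the equidistant model (r = f·θ) is an assumption for the sketch, since the real camera model comes from calibration:

```python
import math

def equidistant_to_planar(r_d, f):
    """Map a distorted radius under an ideal equidistant fisheye model
    (r_d = f * theta) to the radius it would have under planar, i.e.
    rectilinear, projection (r_u = f * tan(theta)). A full corrector
    applies this remapping per pixel around the distortion centre."""
    theta = r_d / f  # incidence angle recovered from the fisheye radius
    return f * math.tan(theta)
```

Near the image centre the two radii agree; toward the edges the rectilinear radius grows faster, which is why planar correction stretches the periphery.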
(2) Obstacle detection module based on image segmentation: this scheme trains a deep-learning image segmentation model M_S. Using image segmentation for obstacle detection can separate independent low obstacles from the ground and other background so as to obtain the drivable area, and it also solves the traditional schemes' inability to recognize continuous low obstacles. For the i-th frame image I_R^i, edge extraction is performed on the segmented obstacles to obtain the obstacle contour C^i and the contour line C_G^i in contact with the ground; the latter is used for the subsequent distance computation and previous-frame information fusion.
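A minimal sketch of extracting the ground-contact line C_G from a binary segmentation mask: per image column, keep the lowest obstacle pixel (the row nearest the ground in image coordinates). This column-scan rule is a simplifying assumption standing in for proper contour extraction:

```python
def ground_contact_line(mask):
    """For each column of a binary obstacle mask (list of rows, 1 = obstacle),
    return the lowest obstacle row, i.e. the pixel where the obstacle meets
    the ground in the image."""
    h, w = len(mask), len(mask[0])
    line = {}
    for x in range(w):
        for y in range(h - 1, -1, -1):  # scan upward from the bottom row
            if mask[y][x]:
                line[x] = y
                break
    return line  # {column: ground-contact row}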
(3) Obstacle world-coordinate computation module: combining the ground-contact contour line C_G^i obtained by the obstacle detection module, the image distortion correction model M_R and the camera calibration parameters H, the obstacle ground contour line C_G^i is projected onto the ground, and the world coordinates L^i of the obstacle's ground contour are computed. All detected obstacles form the set {L^i}.
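Since the contour points lie on the ground plane, the projection reduces to a planar homography. The sketch below assumes H is a 3x3 matrix mapping homogeneous pixel coordinates to ground coordinates (the exact convention depends on the calibration):

```python
import numpy as np

def pixels_to_ground(points_px, H):
    """Project ground-contact pixels to ground-plane world coordinates with a
    3x3 homography H obtained from camera calibration."""
    pts = np.hstack([np.asarray(points_px, dtype=float),
                     np.ones((len(points_px), 1))])  # homogeneous pixel coords
    ground = (H @ pts.T).T
    return ground[:, :2] / ground[:, 2:3]            # dehomogenize
```

Because of the homogeneous division, H is only defined up to scale, so any nonzero scalar multiple of H gives the same ground coordinates.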
(4) Previous-frame information fusion module. To guarantee real-time detection, images of smaller resolution may be used, so each detected obstacle contour may be incomplete or inaccurate, and missed or false detections may also occur. This scheme optimizes the detection results by fusing previous-frame information, making the detected obstacle contours and positions more reliable. The previous-frame information fusion steps, shown in FIG. 6, are as follows:
First, using the previous frame's obstacle ground-coordinate set {L^{i-1}} computed by the world-coordinate module, combined with the vehicle motion information (wheel speed and steering information collected through the vehicle's OBD interface), the position L_P^i of each obstacle contour line in the next frame is predicted, forming the prediction set {L_P^i}.
The current frame's obstacle contour coordinate set {L^i} is computed from the current frame's detection results.
Then, for each detection result L^i in {L^i}, every predicted obstacle position L_P^i in the previous frame's prediction set {L_P^i} is traversed. If one or more predicted obstacles L_P^i satisfy feature matching (which can simply use distance or IOU), the currently detected obstacle L^i is confirmed not to be a false detection: the coordinates of L^i and L_P^i are fused by weighting, the predicted obstacle L_P is deleted from {L_P^i}, and L^i is added to the set of obstacles to be tracked {T}. If no predicted obstacle satisfies feature matching, the detected obstacle L^i is provisionally marked as a false detection; if the number of unmatched occurrences exceeds a preset threshold, L^i is confirmed to be a false detection and deleted from {L^i}. Weight-fusing the coordinates of the previous frame's obstacle predictions with the current frame's obstacle detections resolves the recognition inaccuracy caused by low resolution, and combining multiple image frames improves the accuracy of obstacle detection.
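The fusion loop above can be sketched in Python. The distance threshold, fusion weight and miss limit are illustrative assumptions, and per-obstacle identity bookkeeping is simplified to the detection's index:

```python
def fuse_frame(detections, predictions, miss_counts,
               match_dist=1.0, weight=0.5, miss_limit=3):
    """One fusion step: match current-frame detections (centre points) against
    the predicted set by distance, weight-fuse matched pairs, and count misses
    so a repeatedly unmatched detection can be dropped as a false detection."""
    tracked, remaining = [], list(predictions)
    for i, (dx, dy) in enumerate(detections):
        hit = next((p for p in remaining
                    if ((p[0] - dx) ** 2 + (p[1] - dy) ** 2) ** 0.5 <= match_dist),
                   None)
        if hit is not None:
            remaining.remove(hit)  # each prediction is consumed at most once
            # weighted fusion of detected and predicted coordinates
            tracked.append((weight * dx + (1 - weight) * hit[0],
                            weight * dy + (1 - weight) * hit[1]))
            miss_counts[i] = 0
        else:
            miss_counts[i] = miss_counts.get(i, 0) + 1
            if miss_counts[i] < miss_limit:  # keep until confirmed false
                tracked.append((dx, dy))
    return tracked
```

With several previous frames, this step runs once per prediction set and the miss counts accumulate across frames before a detection is discarded.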
(5) Obstacle tracking module: for all confirmed tracked obstacles in {L^i}, when an obstacle leaves the field of view of the forward-looking fisheye camera, the obstacle's ground displacement is computed in reverse from the vehicle motion information, and the obstacle is tracked on that basis.
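The reverse calculation amounts to dead reckoning in the vehicle frame: over a time step the vehicle translates and rotates, so an out-of-view obstacle's coordinates transform by the inverse motion. Assuming constant speed and yaw rate over the step (a simplification; the real inputs would be wheel speed and steering from the OBD interface):

```python
import math

def update_obstacle_in_vehicle_frame(x, y, v, yaw_rate, dt):
    """Dead-reckon an out-of-view obstacle's position in the vehicle frame:
    the vehicle advances v*dt along its +x axis and its heading changes by
    yaw_rate*dt, so the obstacle translates back and rotates by the inverse."""
    dpsi = yaw_rate * dt
    tx, ty = x - v * dt, y                      # undo the translation
    cos_p, sin_p = math.cos(dpsi), math.sin(dpsi)
    return (cos_p * tx + sin_p * ty,            # undo the rotation
            -sin_p * tx + cos_p * ty)
```

Iterating this per sensor tick keeps a position estimate alive until the obstacle re-enters the camera's field of view.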
(6) Obstacle alarm and display module: the vehicle's trajectory line is computed from the vehicle motion information, and then the nearest distance from each obstacle to the vehicle or to the trajectory line is computed. If this distance is smaller than the preset warning distance, the obstacle is projected onto a specific display plane by perspective transformation and shown to the user on the in-vehicle display.
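Treating the trajectory line as a polyline, the alarm test is a nearest point-to-segment distance check; the 0.5 m warning distance below is an illustrative assumption:

```python
def point_segment_distance(p, a, b):
    """Shortest distance from point p to the segment from a to b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    abx, aby = bx - ax, by - ay
    denom = abx * abx + aby * aby
    t = 0.0 if denom == 0 else max(0.0, min(1.0,
        ((px - ax) * abx + (py - ay) * aby) / denom))
    cx, cy = ax + t * abx, ay + t * aby  # closest point on the segment
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def should_alarm(obstacle, trajectory, warning_dist=0.5):
    """Alarm when the obstacle's nearest distance to the predicted trajectory
    polyline falls below the preset warning distance."""
    return min(point_segment_distance(obstacle, a, b)
               for a, b in zip(trajectory, trajectory[1:])) < warning_dist
```

The same nearest-distance value could also drive how prominently the obstacle is rendered on the display plane.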
To facilitate understanding of the above process, FIG. 5 shows a schematic flowchart of the obstacle tracking method in this example. As shown in FIG. 5, the tracking method obtains the obstacle's coordinates from the fisheye image on one hand, and obtains the vehicle's motion state from information such as wheel speed and steering on the other, so that obstacle tracking is achieved by combining the vehicle motion state, the vehicle trajectory and the obstacle coordinate information. FIG. 6 shows a schematic flowchart of previous-frame information fusion. As shown in FIG. 6, feature matching is performed between the obstacles detected in the current frame and the obstacles predicted from multiple previous frames, and the target obstacle is determined from the matching results. Since there are multiple previous frames, the predicted obstacles of each previous frame can be matched against the obstacles detected in the current frame, yielding one matching result per previous frame (also called an earlier or preceding image frame), and whether an obstacle is a target obstacle is finally determined from the number of matches. For example, if the number of unmatched occurrences reaches a preset threshold, the obstacle is confirmed to be a false detection.
In summary, the approach in the embodiments of the present invention performs detection with a fisheye camera. Because the fisheye camera has a wide detection range, obstacles around the vehicle can be detected comprehensively, and the image distortion inherent to the fisheye camera can be corrected through distortion correction processing and similar means. In addition, detecting the target obstacle from the obstacle areas contained in at least two adjacent image frames avoids misjudgments caused by the low resolution of a single image frame and improves the accuracy of obstacle detection. Moreover, detecting the real-time position information of the target obstacles contained in the tracking obstacle set in combination with the driving state information of the vehicle acquired in real time makes it possible to infer the real-time position of an obstacle from the vehicle's motion state, so that the obstacle can be tracked continuously even when it temporarily leaves the camera's field of view, ensuring driving safety. Furthermore, combining the prediction results of multiple previous frames to judge whether an obstacle detected in the current frame is genuine significantly improves detection accuracy.
Embodiment 3
FIG. 3 shows a schematic structural diagram of a vehicle-based obstacle detection device provided by Embodiment 3 of the present invention, which specifically includes:
a correction module 31, adapted to perform distortion correction processing on the original fisheye image acquired by a fisheye camera located on the vehicle, to obtain an image to be detected corresponding to the original fisheye image;
a segmentation module 32, adapted to perform image segmentation on each image frame contained in the image to be detected, so as to identify the obstacle area contained in each image frame;
a detection module 33, adapted to detect, according to the obstacle areas contained in at least two adjacent image frames, the target obstacle corresponding to the image to be detected, and add the detected target obstacle to the tracking obstacle set;
a tracking module 34, adapted to detect, according to driving state information of the vehicle acquired in real time, the real-time position information of the target obstacles contained in the tracking obstacle set, and judge, according to the real-time position information of the target obstacles, whether to trigger alarm prompt information.
Optionally, the detection module is specifically adapted to:
predict, according to the obstacle area contained in the Mth image frame and the driving state information of the vehicle acquired in real time, the obstacle prediction area contained in the (M+N)th image frame;
determine the obstacle actual area contained in the (M+N)th image frame, judge whether the obstacle actual area matches the obstacle prediction area, and detect the target obstacle corresponding to the image to be detected according to the judgment result; wherein M and N are natural numbers.
Optionally, the detection module is specifically adapted to:
extract actual feature information corresponding to the obstacle actual area and predicted feature information corresponding to the obstacle prediction area;
perform feature matching processing between the actual feature information and the predicted feature information;
if the feature matching succeeds, determine the target obstacle corresponding to the image to be detected according to the obstacle actual area.
Optionally, when N is a natural number greater than 1, the detection module is specifically adapted to:
predict the obstacle prediction area contained in the (M+N)th image frame according to the obstacle areas contained in the Mth to (M+N-1)th image frames and the driving state information of the vehicle acquired in real time.
Optionally, when the height of the target obstacle is lower than a preset height threshold and its length is greater than a preset length threshold, the target obstacle is determined to be a continuous obstacle; wherein the undetected part contained in the continuous obstacle can be predicted from the detected part contained in the continuous obstacle.
Optionally, the tracking module is specifically adapted to:
if the current image frame does not contain the target obstacle, predict the current position of the undetected part contained in the target obstacle according to the driving state information of the vehicle acquired in real time.
Optionally, the fisheye camera located on the vehicle is a forward-looking fisheye camera.
For the specific implementation principles of the above modules, reference may be made to the description of the corresponding parts of the method embodiments, which will not be repeated here.
Embodiment 4
Embodiment 4 of the present application provides a non-volatile computer storage medium storing at least one executable instruction, and the computer-executable instruction can execute the vehicle-based obstacle detection method in any of the foregoing method embodiments. The executable instruction may specifically be used to cause the processor to perform the operations corresponding to the foregoing method embodiments.
Embodiment 5
FIG. 4 shows a schematic structural diagram of an electronic device according to Embodiment 5 of the present invention; the specific embodiments of the present invention do not limit the specific implementation of the electronic device.
As shown in FIG. 4, the electronic device may include: a processor 402, a communication interface 406, a memory 404, and a communication bus 408.
Wherein:
the processor 402, the communication interface 406 and the memory 404 communicate with each other through the communication bus 408.
The communication interface 406 is used to communicate with network elements of other devices such as clients or other servers.
The processor 402 is configured to execute the program 410, and may specifically execute the relevant steps in the above embodiments of the vehicle-based obstacle detection method.
Specifically, the program 410 may include program code, and the program code includes computer operation instructions.
The processor 402 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention. The one or more processors included in the electronic device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 404 is used to store the program 410. The memory 404 may include high-speed RAM memory, and may also include non-volatile memory, such as at least one disk memory.
The program 410 may specifically be used to cause the processor 402 to perform the operations corresponding to the foregoing method embodiments.
The algorithms and displays provided herein are not inherently related to any particular computer, virtual system or other apparatus. Various general-purpose systems may also be used with the teachings herein. The structure required to construct such systems is apparent from the above description. Moreover, the present invention is not directed to any particular programming language. It should be understood that the contents of the present invention described herein may be implemented in various programming languages, and the above description of specific languages is made in order to disclose the best mode of the present invention.
Numerous specific details are set forth in the specification provided herein. It will be understood, however, that embodiments of the present invention may be practised without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this specification.
Similarly, it should be understood that, in order to streamline the present disclosure and aid the understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the present invention, various features of the invention are sometimes grouped together in a single embodiment, figure or description thereof. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus the claims following the detailed description are hereby expressly incorporated into that detailed description, with each claim standing on its own as a separate embodiment of the present invention.
Those skilled in the art will appreciate that the modules in the device of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and they may furthermore be divided into a plurality of sub-modules or sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings), and all processes or units of any method or device so disclosed, may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include certain features included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the present invention and to form different embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components according to the embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, or provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the present invention, and that those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third and so on does not indicate any ordering; these words may be interpreted as names.

Claims (10)

  1. A vehicle-based obstacle detection method, characterized by comprising:
    performing distortion correction processing on an original fisheye image acquired by a fisheye camera located on a vehicle, to obtain an image to be detected corresponding to the original fisheye image;
    performing image segmentation on each image frame contained in the image to be detected, so as to identify the obstacle area contained in each image frame;
    detecting, according to the obstacle areas contained in at least two adjacent image frames, the target obstacle corresponding to the image to be detected, and adding the detected target obstacle to a tracking obstacle set;
    detecting, according to driving state information of the vehicle acquired in real time, the real-time position information of the target obstacles contained in the tracking obstacle set, and judging, according to the real-time position information of the target obstacles, whether to trigger alarm prompt information.
  2. The method according to claim 1, characterized in that detecting, according to the obstacle areas contained in at least two adjacent image frames, the target obstacle corresponding to the image to be detected comprises:
    predicting, according to the obstacle area contained in the Mth image frame and the driving state information of the vehicle acquired in real time, the obstacle prediction area contained in the (M+N)th image frame;
    determining the obstacle actual area contained in the (M+N)th image frame, judging whether the obstacle actual area matches the obstacle prediction area, and detecting, according to the judgment result, the target obstacle corresponding to the image to be detected; wherein M and N are natural numbers.
  3. The method according to claim 2, characterized in that judging whether the obstacle actual area matches the obstacle prediction area and detecting, according to the judgment result, the target obstacle corresponding to the image to be detected comprises:
    extracting actual feature information corresponding to the obstacle actual area and predicted feature information corresponding to the obstacle prediction area;
    performing feature matching processing between the actual feature information and the predicted feature information;
    if the feature matching succeeds, determining, according to the obstacle actual area, the target obstacle corresponding to the image to be detected.
  4. The method according to claim 3, characterized in that, when N is a natural number greater than 1, predicting, according to the obstacle area contained in the Mth image frame and the driving state information of the vehicle acquired in real time, the obstacle prediction area contained in the (M+N)th image frame comprises:
    predicting the obstacle prediction area contained in the (M+N)th image frame according to the obstacle areas contained in the Mth to (M+N-1)th image frames and the driving state information of the vehicle acquired in real time.
  5. The method according to claim 1, characterized in that, when the height of the target obstacle is lower than a preset height threshold and its length is greater than a preset length threshold, the target obstacle is determined to be a continuous obstacle; wherein the undetected part contained in the continuous obstacle can be predicted from the detected part contained in the continuous obstacle.
  6. The method according to claim 5, characterized in that detecting, according to the driving state information of the vehicle acquired in real time, the real-time position information of the target obstacles contained in the tracking obstacle set comprises:
    if the current image frame does not contain the target obstacle, predicting, according to the driving state information of the vehicle acquired in real time, the current position of the undetected part contained in the target obstacle.
  7. The method according to any one of claims 1-6, characterized in that the fisheye camera located on the vehicle is a forward-looking fisheye camera.
  8. A vehicle-based obstacle detection device, comprising:
    a correction module, adapted to perform distortion correction processing on an original fisheye image acquired by a fisheye camera located on a vehicle, to obtain an image to be detected corresponding to the original fisheye image;
    a segmentation module, adapted to perform image segmentation on each image frame contained in the image to be detected, so as to identify the obstacle area contained in each image frame;
    a detection module, adapted to detect, according to the obstacle areas contained in at least two adjacent image frames, the target obstacle corresponding to the image to be detected, and add the detected target obstacle to a tracking obstacle set;
    a tracking module, adapted to detect, according to driving state information of the vehicle acquired in real time, the real-time position information of the target obstacles contained in the tracking obstacle set, and judge, according to the real-time position information of the target obstacles, whether to trigger alarm prompt information.
  9. An electronic device, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with each other through the communication bus;
    the memory is used to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the vehicle-based obstacle detection method according to any one of claims 1-7.
  10. A computer storage medium, wherein at least one executable instruction is stored in the storage medium, and the executable instruction causes a processor to perform the operations corresponding to the vehicle-based obstacle detection method according to any one of claims 1-7.
PCT/CN2021/123880 2020-10-16 2021-10-14 基于汽车的障碍物检测方法及装置 WO2022078463A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011111375.7 2020-10-16
CN202011111375.7A CN112329552B (zh) 2020-10-16 2020-10-16 基于汽车的障碍物检测方法及装置

Publications (1)

Publication Number Publication Date
WO2022078463A1 true WO2022078463A1 (zh) 2022-04-21

Family

ID=74313955

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/123880 WO2022078463A1 (zh) 2020-10-16 2021-10-14 基于汽车的障碍物检测方法及装置

Country Status (2)

Country Link
CN (1) CN112329552B (zh)
WO (1) WO2022078463A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114764911A (zh) * 2022-06-15 2022-07-19 小米汽车科技有限公司 障碍物信息检测方法、装置、电子设备及存储介质
CN115631478A (zh) * 2022-12-02 2023-01-20 广汽埃安新能源汽车股份有限公司 道路图像检测方法、装置、设备、计算机可读介质
CN115848358A (zh) * 2023-01-19 2023-03-28 禾多科技(北京)有限公司 车辆泊车方法、装置、电子设备和计算机可读介质
CN116437120A (zh) * 2023-04-20 2023-07-14 深圳森云智能科技有限公司 一种视频分帧处理方法及装置

Families Citing this family (27)

Publication number Priority date Publication date Assignee Title
WO2018176000A1 (en) 2017-03-23 2018-09-27 DeepScale, Inc. Data synthesis for autonomous control systems
US11409692B2 (en) 2017-07-24 2022-08-09 Tesla, Inc. Vector computational unit
US11893393B2 (en) 2017-07-24 2024-02-06 Tesla, Inc. Computational array microprocessor system with hardware arbiter managing memory requests
US11157441B2 (en) 2017-07-24 2021-10-26 Tesla, Inc. Computational array microprocessor system using non-consecutive data formatting
US10671349B2 (en) 2017-07-24 2020-06-02 Tesla, Inc. Accelerated mathematical engine
US11561791B2 (en) 2018-02-01 2023-01-24 Tesla, Inc. Vector computational unit receiving data elements in parallel from a last row of a computational array
US11215999B2 (en) 2018-06-20 2022-01-04 Tesla, Inc. Data pipeline and deep learning system for autonomous driving
US11361457B2 (en) 2018-07-20 2022-06-14 Tesla, Inc. Annotation cross-labeling for autonomous control systems
US11636333B2 (en) 2018-07-26 2023-04-25 Tesla, Inc. Optimizing neural network structures for embedded systems
US11562231B2 (en) 2018-09-03 2023-01-24 Tesla, Inc. Neural networks for embedded devices
US11205093B2 (en) 2018-10-11 2021-12-21 Tesla, Inc. Systems and methods for training machine models with augmented data
US11196678B2 (en) 2018-10-25 2021-12-07 Tesla, Inc. QOS manager for system on a chip communications
US11816585B2 (en) 2018-12-03 2023-11-14 Tesla, Inc. Machine learning models operating at different frequencies for autonomous vehicles
US11537811B2 (en) 2018-12-04 2022-12-27 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11610117B2 (en) 2018-12-27 2023-03-21 Tesla, Inc. System and method for adapting a neural network model on a hardware platform
US10997461B2 (en) 2019-02-01 2021-05-04 Tesla, Inc. Generating ground truth for machine learning from time series elements
US11150664B2 (en) 2019-02-01 2021-10-19 Tesla, Inc. Predicting three-dimensional features for autonomous driving
US11567514B2 (en) 2019-02-11 2023-01-31 Tesla, Inc. Autonomous and user controlled vehicle summon to a target
US10956755B2 (en) 2019-02-19 2021-03-23 Tesla, Inc. Estimating object properties using visual image data
CN112329552B (zh) * 2020-10-16 2023-07-14 爱驰汽车(上海)有限公司 基于汽车的障碍物检测方法及装置
CN112883909B (zh) * 2021-03-16 2024-06-14 东软睿驰汽车技术(沈阳)有限公司 基于包围盒的障碍物位置检测方法、装置和电子设备
CN113297939B (zh) * 2021-05-17 2024-04-16 深圳市优必选科技股份有限公司 障碍物检测方法、系统、终端设备及存储介质
CN113298044B (zh) * 2021-06-23 2023-04-18 上海西井信息科技有限公司 基于定位补偿的障碍物检测方法、系统、设备及存储介质
CN113619600B (zh) * 2021-08-17 2022-11-15 广州文远知行科技有限公司 障碍物数据诊断方法、装置、可移动载体及存储介质
CN113610056B (zh) * 2021-08-31 2024-06-07 的卢技术有限公司 障碍物检测方法、装置、电子设备及存储介质
CN114399919A (zh) * 2021-12-31 2022-04-26 展讯通信(上海)有限公司 泊车影像生成方法、终端设备、介质及泊车系统
CN115586772B (zh) * 2022-09-29 2024-09-20 九识(苏州)智能科技有限公司 一种自动驾驶车辆的分层控制系统和方法

Citations (6)

Publication number Priority date Publication date Assignee Title
US20120081542A1 (en) * 2010-10-01 2012-04-05 Andong University Industry-Academic Cooperation Foundation Obstacle detecting system and method
CN108596009A (zh) * 2017-12-29 2018-09-28 西安智加科技有限公司 一种用于农机自动驾驶的障碍物检测方法和系统
US20190050652A1 (en) * 2018-09-28 2019-02-14 Intel Corporation Obstacle analyzer, vehicle control system, and methods thereof
CN109829386A (zh) * 2019-01-04 2019-05-31 清华大学 基于多源信息融合的智能车辆可通行区域检测方法
CN110378837A (zh) * 2019-05-16 2019-10-25 四川省客车制造有限责任公司 基于鱼眼摄像头的目标检测方法、装置和存储介质
CN112329552A (zh) * 2020-10-16 2021-02-05 爱驰汽车(上海)有限公司 基于汽车的障碍物检测方法及装置

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN105678787A (zh) * 2016-02-03 2016-06-15 西南交通大学 一种基于双目鱼眼摄像头的载重货车行驶障碍物检测及跟踪方法
CN110018496A (zh) * 2018-01-10 2019-07-16 北京京东尚科信息技术有限公司 障碍物识别方法及装置、电子设备、存储介质
CN109254289B (zh) * 2018-11-01 2021-07-06 百度在线网络技术(北京)有限公司 道路护栏的检测方法和检测设备
CN111199177A (zh) * 2018-11-20 2020-05-26 中山大学深圳研究院 一种基于鱼眼图像校正的汽车后视行人检测报警方法
CN111723597B (zh) * 2019-03-18 2023-07-14 深圳市速腾聚创科技有限公司 跟踪算法的精度检测方法、装置、计算机设备和存储介质
CN111563474A (zh) * 2020-05-18 2020-08-21 北京茵沃汽车科技有限公司 运动背景下的基于车载鱼眼镜头的障碍物检测方法、系统

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
US20120081542A1 (en) * 2010-10-01 2012-04-05 Andong University Industry-Academic Cooperation Foundation Obstacle detecting system and method
CN108596009A (zh) * 2017-12-29 2018-09-28 西安智加科技有限公司 一种用于农机自动驾驶的障碍物检测方法和系统
US20190050652A1 (en) * 2018-09-28 2019-02-14 Intel Corporation Obstacle analyzer, vehicle control system, and methods thereof
CN109829386A (zh) * 2019-01-04 2019-05-31 清华大学 基于多源信息融合的智能车辆可通行区域检测方法
CN110378837A (zh) * 2019-05-16 2019-10-25 四川省客车制造有限责任公司 基于鱼眼摄像头的目标检测方法、装置和存储介质
CN112329552A (zh) * 2020-10-16 2021-02-05 爱驰汽车(上海)有限公司 基于汽车的障碍物检测方法及装置

Cited By (6)

Publication number Priority date Publication date Assignee Title
CN114764911A (zh) * 2022-06-15 2022-07-19 小米汽车科技有限公司 障碍物信息检测方法、装置、电子设备及存储介质
CN114764911B (zh) * 2022-06-15 2022-09-23 小米汽车科技有限公司 障碍物信息检测方法、装置、电子设备及存储介质
CN115631478A (zh) * 2022-12-02 2023-01-20 广汽埃安新能源汽车股份有限公司 道路图像检测方法、装置、设备、计算机可读介质
CN115848358A (zh) * 2023-01-19 2023-03-28 禾多科技(北京)有限公司 车辆泊车方法、装置、电子设备和计算机可读介质
CN116437120A (zh) * 2023-04-20 2023-07-14 深圳森云智能科技有限公司 一种视频分帧处理方法及装置
CN116437120B (zh) * 2023-04-20 2024-04-09 深圳森云智能科技有限公司 一种视频分帧处理方法及装置

Also Published As

Publication number Publication date
CN112329552B (zh) 2023-07-14
CN112329552A (zh) 2021-02-05

Similar Documents

Publication Publication Date Title
WO2022078463A1 (zh) 基于汽车的障碍物检测方法及装置
CN112349144B (zh) 一种基于单目视觉的车辆碰撞预警方法及系统
JP4622001B2 (ja) 道路区画線検出装置および道路区画線検出方法
CN109712427B (zh) 一种车位检测方法及装置
JP6626410B2 (ja) 自車位置特定装置、自車位置特定方法
JP6520740B2 (ja) 物体検出方法、物体検出装置、およびプログラム
JP2000285245A (ja) 移動体の衝突防止装置、衝突防止方法、および記録媒体
JP2021149863A (ja) 物体状態識別装置、物体状態識別方法及び物体状態識別用コンピュータプログラムならびに制御装置
JP2002314989A (ja) 車両用周辺監視装置
JP2021128705A (ja) 物体状態識別装置
JP2014106739A (ja) 車載画像処理装置
CN111105619A (zh) 一种路侧逆向停车的判断方法及装置
JP3999088B2 (ja) 障害物検出装置
CN116434156A (zh) 目标检测方法、存储介质、路侧设备及自动驾驶系统
CN114037977B (zh) 道路灭点的检测方法、装置、设备及存储介质
CN116152753A (zh) 车辆信息识别方法和系统、存储介质和电子装置
TWI621073B (zh) Road lane detection system and method thereof
JP2003151096A (ja) 進入警報装置
JP2004258981A (ja) 車両監視方法およびその装置
JPH1186199A (ja) 走行車線検出方法及びその装置
JP4092974B2 (ja) 車両用走行制御装置
JP4854619B2 (ja) 障害物認識装置
CN118552582B (zh) 目标跟踪方法、装置、电子设备及存储介质
JP2011090490A (ja) 障害物認識装置
CN116612194B (zh) 一种位置关系确定方法、装置、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21879505

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27/09/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21879505

Country of ref document: EP

Kind code of ref document: A1