CN112784671A - Obstacle detection device and obstacle detection method

Obstacle detection device and obstacle detection method

Info

Publication number: CN112784671A
Application number: CN202011212140.7A
Authority: CN (China)
Prior art keywords: obstacle, road surface, vehicle, images, detection
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 清水浩一, 藤好宏树
Original and current assignee: Mitsubishi Electric Corp
Application filed by Mitsubishi Electric Corp
Publication of CN112784671A

Classifications

    • G06T7/246 — Image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06F18/22 — Pattern recognition; matching criteria, e.g. proximity measures
    • G06V10/25 — Image preprocessing; determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V10/50 — Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; projection analysis
    • G06V10/62 — Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; pattern tracking
    • G06V20/58 — Scenes; context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle; recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G08G1/166 — Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • G08G1/167 — Driving aids for lane monitoring, lane changing, e.g. blind spot detection
    • G06T2207/10016 — Image acquisition modality: video; image sequence
    • G06T2207/10024 — Image acquisition modality: color image
    • G06T2207/20021 — Special algorithmic details: dividing image into blocks, subimages or windows
    • G06T2207/30261 — Subject of image: vehicle exterior; vicinity of vehicle; obstacle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an obstacle detection device with improved obstacle detection accuracy. The device includes: an imaging unit that is mounted on a vehicle, images the surroundings of the vehicle, and acquires a plurality of images in time series; an optical flow calculation unit that calculates optical flows of feature points extracted from the plurality of images, extracts the feature points of an approaching vehicle as approaching feature points, and outputs information on the approaching feature points; a road surface detection unit that outputs, for each of the plurality of images, a result of dividing the surroundings of the vehicle into a road surface region that can be determined to be a road surface and an obstacle region in which an object other than the road surface appears; and an obstacle detection unit that detects an obstacle around the vehicle based on the information on the approaching feature points and the information on the road surface region.

Description

Obstacle detection device and obstacle detection method
Technical Field
The present application relates to an obstacle detection device and an obstacle detection method.
Background
In recent years, technology that uses an in-vehicle imaging device as a shape-recognizing sensor, as one type of in-vehicle monitoring device, has been under development. Such a device eliminates blind spots by mounting a plurality of cameras around the vehicle. It recognizes objects such as obstacles in the captured images with a dedicated recognition processing device, allowing the vehicle to travel while avoiding contact with the obstacles.
A camera for monitoring the periphery of the vehicle is mounted on the vehicle as a small camera module, typically around the grille cover, the door mirrors, and similar positions at the front and rear of the vehicle. To compensate for blind spots, the lens angle of the camera module, its mounting angle on the vehicle, and so on are often optimized for each vehicle.
As a method of recognizing an obstacle with such a camera, an optical flow method has been disclosed: feature points of an object captured as an obstacle are tracked, their movement between video frames is examined, and the relative positional relationship to the obstacle is calculated from the optical flow of the captured images (see, for example, patent document 1).
Documents of the prior art
Patent document
Patent document 1: japanese patent laid-open publication No. 2011-203766
Disclosure of Invention
Technical problem to be solved by the invention
With patent document 1, the relative positional relationship to an obstacle can be calculated from the optical flow. However, obstacle detection using optical flow has the following problem: optical flow also arises from cracks, dirt, or regularly spaced road markings and figures on the road surface, which causes false detections, so the detection accuracy needs to be improved.
The present application has been made to solve the above-described problem, and an object thereof is to provide an obstacle detection device that improves obstacle detection accuracy without adding sensors other than a camera.
Means for solving the problems
The obstacle detection device disclosed in the present application includes: an imaging unit that is mounted on a vehicle, images the surroundings of the vehicle, and acquires a plurality of images in time series; an optical flow calculation unit that calculates optical flows of feature points extracted from the plurality of images, thereby extracts the feature points of an approaching vehicle as approaching feature points, and outputs information on the approaching feature points; a road surface detection unit that outputs, for each of the plurality of images, a result of dividing the surroundings of the vehicle into a road surface region that can be determined to be a road surface and an obstacle region in which an object other than the road surface appears; and an obstacle detection unit that detects an obstacle around the vehicle based on the information on the approaching feature points and the information on the road surface region.
Technical effects
According to the obstacle detection device disclosed in the present application, obstacle detection accuracy is improved without adding sensors other than a camera.
Drawings
Fig. 1 is a diagram schematically illustrating the configuration of an obstacle detection device according to embodiment 1.
Fig. 2 is a diagram showing an outline of processing of the obstacle detection device according to embodiment 1.
Fig. 3 is a diagram showing an example of the imaging range of the obstacle detection device according to embodiment 1.
Fig. 4 is a diagram illustrating an example of an image captured by the obstacle detection device according to embodiment 1.
Fig. 5 is a diagram illustrating an outline of optical flow processing of the obstacle detection device according to embodiment 1.
Fig. 6 is a diagram showing an outline of road surface detection processing of the obstacle detecting device according to embodiment 1.
Fig. 7 is a diagram showing an example of a pattern of a histogram of road surface detection processing of the obstacle detecting device of embodiment 1.
Fig. 8 is a diagram showing another example of a pattern of a histogram of the road surface detection processing of the obstacle detecting device according to embodiment 1.
Fig. 9 is a diagram showing another example of a pattern of a histogram of the road surface detection processing of the obstacle detecting device according to embodiment 1.
Fig. 10 is a diagram showing an outline of obstacle detection processing of the obstacle detection device according to embodiment 1.
Fig. 11 is a diagram showing another example of an image captured by the obstacle detecting device according to embodiment 1.
Fig. 12 is a diagram showing an outline of processing added to the obstacle detection device according to embodiment 2.
Fig. 13 is a diagram illustrating an example of an image captured by the obstacle detection device according to embodiment 2.
Fig. 14 is a diagram showing an outline of the configuration of an obstacle detection device according to embodiment 3.
Fig. 15 is a diagram showing an outline of the configuration of an obstacle detection device according to embodiment 4.
Fig. 16 is a diagram showing an example of an image captured by the obstacle detection device according to embodiment 4.
Fig. 17 is a diagram showing an example of an image captured by the obstacle detection device according to embodiment 5.
Fig. 18 is a diagram showing another example of an image captured by the obstacle detecting device according to embodiment 5.
Fig. 19 is a block diagram showing one example of hardware of the obstacle detection ECU.
Detailed Description
Hereinafter, an obstacle detection device and an obstacle detection method according to embodiments of the present application will be described with reference to the drawings. In each of the drawings, the same components or corresponding components and portions are denoted by the same reference numerals for explanation.
Embodiment 1
Fig. 1 is a diagram showing an outline of the configuration of an obstacle detection device 100 according to embodiment 1, fig. 2 is a diagram showing an outline of the processing of the obstacle detection device 100, fig. 3 is a diagram showing an example of the imaging range 12 of a camera 11 serving as the imaging unit 1 of the obstacle detection device 100, and fig. 4 is a diagram showing an example of an image captured by the obstacle detection device 100. The obstacle detection device 100 is mounted on a vehicle driven by a driver (hereinafter referred to as the host vehicle 10) and detects obstacles approaching the host vehicle 10. As shown in fig. 4, an obstacle is, for example, a moving object such as another vehicle (hereinafter referred to as the approaching vehicle 22), including two-wheeled vehicles. As shown in fig. 1, the obstacle detection device 100 includes: an imaging unit 1 that images the surroundings of the host vehicle 10; a frame buffer 2 that stores the image data captured in time series; an optical flow calculation unit 3 that extracts feature points approaching the host vehicle 10 as approaching feature points and outputs information on them; a road surface detection unit 4 that extracts the road surface region around the host vehicle 10; an obstacle detection unit 5 that detects obstacles around the host vehicle 10; a video display unit 6 that presents the obstacle detection result; an alarm unit 7; and a control unit 8. Among these components, the processing of the frame buffer 2, the optical flow calculation unit 3, the road surface detection unit 4, and the obstacle detection unit 5 is executed by a program in an obstacle detection ECU (Electronic Control Unit) 9 having an image processing function. Since the control unit 8 is not included in the obstacle detection ECU 9, it is connected to the obstacle detection ECU 9 via the in-vehicle network.
As shown in fig. 19, one example of the hardware of the obstacle detection ECU 9 includes a processor 110 and a storage device 111. The storage device 111, which holds the program executed by the obstacle detection ECU 9, the image data stored in the frame buffer 2, and the like, includes, for example, a volatile storage device such as a random access memory and a nonvolatile auxiliary storage device such as a flash memory. A hard disk auxiliary storage device may be used instead of the flash memory. The processor 110 executes the program input from the storage device 111, whereby the obstacle detection unit 5 provided in the obstacle detection ECU 9 detects obstacles around the host vehicle 10. The program is input from the auxiliary storage device to the processor 110 via the volatile storage device. The processor 110 may output data such as computation results to the volatile storage device of the storage device 111, or may save the data in the auxiliary storage device via the volatile storage device.
An outline of the obstacle detection processing performed by the parts of the obstacle detection device 100 shown in fig. 1 will be described with reference to fig. 2. The detection processing consists of four steps, S1 to S4. Step S1, executed by the imaging unit 1, images the periphery of the host vehicle 10 and acquires and outputs a plurality of images in time series. Step S2, executed by the optical flow calculation unit 3, calculates the optical flows of feature points extracted from the plurality of images, extracts the feature points approaching the host vehicle 10 as approaching feature points, and outputs information on them. Step S3, executed by the road surface detection unit 4, divides the alarm detection area 14, the road surface to be monitored around the host vehicle 10 in the acquired image shown in fig. 4, into a plurality of road surface detection areas 15 to 17, generates a histogram of luminance values for each road surface detection area, compares each histogram with the reference-data histogram that can be determined to be road surface, extracts the road surface region from the road surface detection areas 15 to 17, and outputs the result of dividing the alarm detection area 14 into a road surface region and an obstacle region in which objects other than the road surface appear. Step S4, executed by the obstacle detection unit 5, detects obstacles around the host vehicle 10 by creating a bounding box 19, an obstacle information area enclosed by obstacle coordinates, based on the information on the approaching feature points and the information on the road surface region. When an obstacle is detected around the host vehicle 10, three further steps can follow: issuing a warning to the driver by sound or vibration (step S5); notifying the driver by video display (step S6); and controlling the host vehicle 10 so that it avoids contact with the obstacle indicated by the bounding box 19 (step S7). Any one of steps S5, S6, and S7 may be performed instead of all of them. The moment at which an obstacle is considered detected around the host vehicle 10 may be defined as the moment when at least a part of the alarm detection area 14 is included in the created bounding box 19. Hereinafter, the configuration and processing of the obstacle detection device 100 are described in detail.
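As a rough sketch of how steps S1 to S4 could chain together per pair of frames, the following Python loop is offered; the helper names (compute_approaching_flows, detect_road_mask, detect_obstacles, warn_driver) and all region geometry are hypothetical stand-ins sketched further below, not the patent's implementation.

```python
# Hypothetical per-frame pipeline for steps S1-S4; helper names and
# region geometry are illustrative assumptions, not the patent's code.
from collections import deque

import cv2

def run_detection_loop(video_source=0):
    cap = cv2.VideoCapture(video_source)  # S1: imaging unit 1 (camera 11)
    frames = deque(maxlen=2)              # frame buffer 2: past + latest frame
    flow_roi = (400, 0, 240, 480)         # optical flow calculation area 20
    regions = [(400, 320, 240, 60),       # road surface detection areas 15-17,
               (400, 260, 240, 60),       # nearest to farthest (assumed pixel
               (400, 200, 240, 60)]       # coordinates)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        if len(frames) < 2:
            continue                      # need two images for optical flow
        prev_gray, curr_gray = frames
        flows = compute_approaching_flows(prev_gray, curr_gray, flow_roi)  # S2
        road_mask = detect_road_mask(curr_gray, regions, regions[0])       # S3
        boxes = detect_obstacles(flows, road_mask)                         # S4
        if boxes:
            warn_driver(boxes)            # S5-S7: alarm / display / control
    cap.release()
```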
The imaging unit 1 is a camera 11 provided on the host vehicle 10 that images the surroundings of the host vehicle 10 and acquires a plurality of images in time series. As shown in fig. 3, the imaging range 12 of the camera 11, installed at the door mirror position, covers the left rear of the host vehicle 10. When the host vehicle 10 is a right-hand-drive vehicle, the imaging range 12 is set to compensate for the driver's blind spot. The mounting position of the camera 11 is not limited to the door mirror; the left rear can also be imaged from other positions on the left side of the host vehicle 10. Nor is the imaging range 12 limited to the left rear: the area in which obstacles are detected can be changed by imaging the right rear, the rear, or other directions around the host vehicle 10.
The image acquired by the imaging unit 1 will be described with reference to fig. 4. In the example shown in fig. 4, the areas required for obstacle detection processing are shown by dotted lines. The angle of view is set so that the side surface 10a of the host vehicle 10 appears in the frame. The alarm detection area 14 and the optical flow calculation area 20 used for calculating optical flow are specified with the side surface 10a of the host vehicle 10 as a reference. The alarm detection area 14 is a predetermined area in which obstacles around the host vehicle 10 must be detected. In the present embodiment it corresponds to 1 meter laterally from the side surface 10a of the host vehicle 10 shown in the image and 30 meters rearward from the camera 11. The far end of the alarm detection area 14 in the image is the vanishing point 13. The alarm detection area 14 contains a crack 21a and a white line 21b as non-obstacles 21. The alarm detection area 14 is not limited to this and may be changed or left undefined. For example, when the alarm detection area 14 is not specifically defined and the obstacle detection target is the entire image, the area may be set to the right-hand part of fig. 4 excluding the host vehicle 10 and below the vanishing point 13, because the road surface appears in the image below the vanishing point 13. The alarm detection area 14 is divided into a plurality of blocks according to the rearward distance from the host vehicle 10: the road surface detection area 15, the road surface detection area 16, and the road surface detection area 17, in order of proximity to the host vehicle 10.
The image data captured by the camera 11 is output to the frame buffer 2, which stores the image data of previous frames. Via the frame buffer 2, the image data is output to the optical flow calculation unit 3 and the road surface detection unit 4 at the subsequent stage.
The optical flow calculation unit 3 calculates the optical flows of feature points extracted from the plurality of images, thereby extracting feature points approaching the host vehicle 10 as approaching feature points and outputting information on them. Fig. 5 is a diagram illustrating an outline of the optical flow processing of the obstacle detection device 100 according to embodiment 1. The processing executed by the optical flow calculation unit 3 will be described in detail with reference to fig. 5. First, the optical flow calculation area 20 is extracted from the image as a partial image (step S101). In the present embodiment, the optical flow calculation area 20 is the area on the right side of the image excluding the host vehicle 10, but it may be any area of the image in which obstacles are to be detected. Next, corner points, points where edges in the image intersect, are detected within the optical flow calculation area 20 and extracted as feature points (step S102). There are many corner detection methods, such as the Harris corner detection algorithm, and any of them may be used.
Next, the feature points extracted from the latest image are matched with the feature points extracted from the past image, and the optical flow is calculated (step S103). Finally, the optical flows approaching the host vehicle 10 are extracted from the calculation result (step S104). In fig. 4, the vertical direction is defined as the Y direction with downward positive, and the horizontal direction as the X direction with rightward positive. Under this definition, the optical flow of an obstacle approaching the host vehicle 10 from behind has a positive Y component, whereas the optical flow of something moving away from the host vehicle 10, such as a building or a vehicle parked on the road, has a negative Y component. Therefore, optical flows whose Y component is positive are extracted. The optical flow calculation unit 3 outputs the information on the extracted approaching feature points to the obstacle detection unit 5 (step S105). This information consists of the coordinates of each approaching feature point and the Y component length, X component length, direction, and movement-amount length of its optical flow.
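A minimal Python/OpenCV sketch of steps S102 to S104 follows; the corner detector (Harris via cv2.goodFeaturesToTrack), the pyramidal Lucas-Kanade tracker, and the parameter values are assumptions, since the text leaves the method open.

```python
import cv2
import numpy as np

def compute_approaching_flows(prev_gray, curr_gray, roi):
    """Extract corner feature points in the optical flow calculation
    area and keep only flows with a positive Y component, i.e. moving
    down-image toward the host vehicle (steps S102-S104). A sketch;
    detector, tracker, and thresholds are assumptions."""
    x, y, w, h = roi
    mask = np.zeros_like(prev_gray)
    mask[y:y + h, x:x + w] = 255  # restrict to optical flow calculation area
    # S102: corner points as feature points (Harris is one possible choice)
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01,
                                 minDistance=7, mask=mask,
                                 useHarrisDetector=True)
    if p0 is None:
        return []
    # S103: match feature points between the past and the latest image
    p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    flows = []
    for a, b, ok in zip(p0.reshape(-1, 2), p1.reshape(-1, 2),
                        status.reshape(-1)):
        if not ok:
            continue
        dx, dy = float(b[0] - a[0]), float(b[1] - a[1])
        if dy > 0:  # S104: positive Y component = approaching from behind
            flows.append(((float(b[0]), float(b[1])), (dx, dy)))
    return flows
```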
The road surface detection unit 4 divides the alarm detection area 14, set as the road surface to be monitored around the host vehicle 10, into the road surface detection areas 15 to 17 for each of the plurality of images, generates a histogram of luminance values for each road surface detection area, compares each histogram with the reference-data histogram that can be determined to be road surface, extracts the road surface region from the road surface detection areas 15 to 17, and outputs the result of dividing the alarm detection area 14 into a road surface region and an obstacle region in which objects other than the road surface appear. Fig. 6 is a diagram showing an outline of the road surface detection processing of the obstacle detection device 100 according to embodiment 1. The processing performed by the road surface detection unit 4 will be described in detail with reference to fig. 6. First, the alarm detection area 14 is extracted from the image as a partial image (step S201). Then, a road surface histogram extraction area is set (step S202): a road surface detection area near the host vehicle 10 is chosen, and the histogram obtained from it is used as reference data. The road surface detection method of the present embodiment assumes that the region closest to the host vehicle 10 is road surface, so a histogram created from the image of that closest region serves as the reference data. In the alarm detection area 14 shown in fig. 4, the road surface detection area 15, the area closest to the host vehicle 10, is set as the histogram extraction area. The histogram extraction area may be changed according to the camera's angle of view, installation position, or orientation, as long as it is the area of the acquired image closest to the host vehicle 10.
Next, a histogram is created for each of the road surface detection areas 15 to 17 of the alarm detection area 14 (step S203). In each of the road surface detection areas 15 to 17, every pixel carries a luminance value, and a histogram is prepared with the luminance value on the abscissa and the frequency on the ordinate. The luminance value ranges, for example, from 0 (black) to 255 (white). Fig. 7 shows the histogram of the road surface detection area 15 as an example of the histogram patterns created in the road surface detection processing of the obstacle detection device 100 of embodiment 1. Since road surface occupies most of the road surface detection area 15, gray luminance dominates and the frequency peak G1 appears. A histogram captures the features of an image by plotting luminance levels on the horizontal axis against their frequencies on the vertical axis; since road surfaces carry yellow lines in addition to white lines, and may be painted with red or blue paint, the levels can also be built by combining R, G, and B so that the color features of the road surface are captured by extracting the frequency for each level.
Fig. 8 shows the histogram of the road surface detection area 16, and fig. 9 the histogram of the road surface detection area 17, as further examples of the histogram patterns generated in the road surface detection processing of the obstacle detection device 100 of embodiment 1. The histogram of the road surface detection area 16 shows a frequency peak G2 representing the characteristic road-surface luminance, like G1 of the road surface detection area 15. Since the white line 21b lies within the road surface detection area 16, a frequency peak G3 corresponding to the luminance of the white line 21b also appears. As shown in fig. 9, the histogram of the road surface detection area 17 likewise shows a frequency peak G5 for the characteristic road-surface luminance, like G1 and G2, and a frequency peak G6 corresponding to the luminance of the white line 21b. Furthermore, since the approaching vehicle 22 approaching the host vehicle 10 lies within the road surface detection area 17, two frequency peaks G4 and G7 corresponding to the approaching vehicle 22 appear. G4 is the lowest-luminance peak, produced by the tires and shadow of the approaching vehicle 22 at the boundary between the approaching vehicle 22 and the road surface. G7 is a higher-luminance peak from the side of the approaching vehicle facing the host vehicle 10, whose surfaces generally tend to reflect light.
Finally, the road surface area is extracted from the alarm detection area 14 using the histogram of the road surface detection area 15 as reference data (step S204). There are many ways to extract a road surface region using histograms; here, the reference-data histogram is compared with the histograms of the other areas, and an area whose histogram is similar to the reference data is determined to be road surface and extracted as the road surface region. The similarity of histograms may be judged from the histogram shape and the range of the distributed luminance values, but the method of judging similarity is not limited to this. A binarized image is created by setting the luminance of regions determined to be road surface to 255 and the luminance of regions determined not to be road surface (hereinafter, obstacle regions) to 0. The road surface detection unit 4 outputs this binarized image, the result of dividing the alarm detection area 14 into road surface region and obstacle region, to the obstacle detection unit 5 (step S205). The road surface detection unit 4 also outputs the luminance values of the pixels of the alarm detection area 14 used to create the histograms to the obstacle detection unit 5, for correcting the obstacle coordinates of the four corners of the bounding box described later.
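Steps S203 to S205 might be sketched as below; the 64-bin luminance histogram, the correlation metric of cv2.compareHist, and the similarity threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_road_mask(gray, regions, ref_region, similarity_thresh=0.5):
    """Build a luminance histogram per road surface detection area,
    compare each against the reference area nearest the host vehicle,
    and binarize: 255 = road surface region, 0 = obstacle region
    (steps S203-S205). Metric and threshold are assumptions."""
    def hist_of(region):
        x, y, w, h = region
        patch = np.ascontiguousarray(gray[y:y + h, x:x + w])
        hist = cv2.calcHist([patch], [0], None, [64], [0, 256])
        return cv2.normalize(hist, hist)

    ref = hist_of(ref_region)  # e.g. road surface detection area 15
    mask = np.zeros(gray.shape, np.uint8)
    for region in regions:
        # correlation near 1 means the histogram resembles the road reference
        score = cv2.compareHist(hist_of(region), ref, cv2.HISTCMP_CORREL)
        if score > similarity_thresh:
            x, y, w, h = region
            mask[y:y + h, x:x + w] = 255  # determined to be road surface
    return mask
```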
Although the road surface detection areas 15 to 17 are defined by dividing the alarm detection area 14 into three, the road surface region may also be extracted over smaller areas obtained by dividing the road surface detection areas further. Creating a histogram from the luminance values of each small region and judging road surface by histogram similarity improves the distance resolution between road surface and vehicle. It is not even necessary to create a histogram per small region: the generally known histogram back projection method can determine, pixel by pixel, whether each pixel of the alarm detection area 14 is road surface from its luminance value, as sketched below. The optimal road surface extraction method can be selected according to the distance resolution required of the road surface detection.
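The per-pixel back projection variant mentioned above could look like this; the bin count and normalization range are assumptions.

```python
import cv2
import numpy as np

def road_likelihood(gray, ref_region):
    """Per-pixel road determination by back-projecting the reference
    road histogram over the image (the generally known back projection
    method referred to in the text); a sketch only."""
    x, y, w, h = ref_region
    patch = np.ascontiguousarray(gray[y:y + h, x:x + w])
    hist = cv2.calcHist([patch], [0], None, [64], [0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    # High output values mark pixels whose luminance is common on the road
    return cv2.calcBackProject([gray], [0], hist, [0, 256], 1)
```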
In the present embodiment, the road surface detection method that divides the alarm detection area 14 into road surface region and obstacle region has been described using histograms as an example, but road surface detection is not limited to this method. Any method that satisfies the requirements may be used; other road surface detection methods include a method using motion stereo, a method in which the color features of the road surface are learned, and a method using image segmentation by deep learning.
The obstacle detection unit 5 creates a bounding box, an obstacle information area enclosed by obstacle coordinates, based on the approaching-feature-point information output by the optical flow calculation unit 3 and the road-surface-region information output by the road surface detection unit 4, and thereby detects the approaching vehicle 22 as an obstacle around the host vehicle 10. Fig. 10 is a diagram showing an outline of the obstacle detection processing of the obstacle detection device 100 according to embodiment 1. The processing performed by the obstacle detection unit 5 is described in detail with reference to fig. 10. First, approaching feature points lying in the road surface region are deleted from the input information (step S301): when the coordinates of an approaching feature point fall within the road surface region, it is judged to be an erroneous flow caused by a non-obstacle such as a pattern or crack on the road surface, and its information is deleted. Then, from the coordinates and flow directions of the remaining approaching feature points, points that are close to each other and share the same flow direction are collected into groups (step S302). For each group of approaching feature points, a rectangular bounding box enclosing its points is created (step S303). The bounding boxes are thus created using only the approaching feature points that lie outside the road surface region. The information on each created bounding box is output to the video display unit 6, the alarm unit 7, and the control unit 8 as the result of detecting obstacles around the host vehicle 10 (step S304); in fig. 4, the information of the created bounding box 19 is output. The bounding box information consists of coordinates, size, the number of feature points, and the average length and average direction of the flows.
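Steps S301 to S303 could be sketched as follows; the greedy grouping by pixel distance and flow angle is an assumption made for illustration, since the text does not fix the grouping rule.

```python
import cv2
import numpy as np

def detect_obstacles(flows, road_mask, dist_px=40, angle_tol=0.3):
    """Delete approaching feature points inside the road surface region
    (S301), group nearby points with consistent flow direction (S302),
    and enclose each group in a rectangular bounding box (S303).
    Grouping thresholds are assumptions."""
    pts = []
    for (x, y), (dx, dy) in flows:
        if road_mask[int(y), int(x)] == 255:
            continue  # S301: erroneous flow on the road surface, delete
        pts.append((x, y, np.arctan2(dy, dx)))
    groups = []
    for x, y, ang in pts:  # S302: greedy grouping
        for g in groups:
            gx, gy, gang = g[-1]
            if abs(x - gx) < dist_px and abs(y - gy) < dist_px \
                    and abs(ang - gang) < angle_tol:
                g.append((x, y, ang))
                break
        else:
            groups.append([(x, y, ang)])
    boxes = []
    for g in groups:  # S303: one bounding box per group
        xy = np.float32([(px, py) for px, py, _ in g])
        boxes.append(cv2.boundingRect(xy))  # (x, y, w, h)
    return boxes
```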
When an obstacle is detected around the host vehicle 10, the alarm unit 7 notifies the driver of the host vehicle 10 by sound or vibration. For notification by vibration, for example, a signal is sent to the EPS (electric power steering), which generates a high-frequency vibration that shakes the steering wheel. Alternatively, a dedicated vibration motor may be attached to the steering wheel and driven to vibrate it. Sound and vibration may also be used selectively: for example, when at least part of the alarm detection area 14 falls within a bounding box, a sound notification is issued first, and when the approaching vehicle 22 comes close enough to possibly contact the host vehicle 10, a vibration notification follows.
When an obstacle is detected around the host vehicle 10, the video display unit 6 displays the position of the bounding box on video and thereby notifies the driver of the host vehicle 10. The control unit 8 controls the host vehicle 10 so that it avoids contact with the approaching vehicle 22, the obstacle indicated by the bounding box. The host vehicle 10 need not include all of the video display unit 6, the alarm unit 7, and the control unit 8; it may include any one or two of them.
Deleting the information of approaching feature points whose coordinates lie in the road surface region removes the approaching feature points arising from the crack 21a and the white line 21b shown in fig. 4. The deleted approaching feature points are not limited to these examples; those arising from puddles in the road surface region or from zebra patterns painted on the road surface are deleted as well. Fig. 11 is a diagram showing another example of an image captured by the obstacle detection device 100 according to embodiment 1. At the edge of the alarm detection area 14, curbstones 21c appear regularly and repeatedly. Feature points arising from the curbstones 21c may be included in the approaching-feature-point information, but even for such regularly arranged non-obstacles, the information is deleted whenever the coordinates of the approaching feature point lie within the road surface detection area.
In the above, the optical flow calculation and the road surface extraction are performed in parallel on the input image, but erroneous flows from road-surface patterns and the like can also be reduced by first extracting the road surface region and then performing the optical flow calculation only on the obstacle region based on that output. Likewise, although the bounding boxes are created after deleting the approaching feature points in the road surface region, this is not limiting: the erroneous flows may instead be removed by creating the bounding boxes without deletion and then discarding those contained entirely within the road surface region. Further, obstacle detection is not limited to creating bounding boxes; obstacles may also be detected using point groups, binarized images, or the like.
As described above, the obstacle detection device 100 detects the approaching vehicle 22, an obstacle around the host vehicle 10, based on the approaching-feature-point information output by the optical flow calculation unit 3 and the road-surface-region information output by the road surface detection unit 4, so the detection accuracy for the approaching vehicle 22 is improved without adding sensors other than a camera. Because the bounding boxes are created from approaching feature points outside the road surface region, the feature points inside the road surface region can be deleted, improving detection accuracy. The alarm unit 7 notifies the driver by sound or vibration when an obstacle is detected, so the presence of the approaching vehicle 22 can be conveyed at an appropriate time. The video display unit 6 shows the driver the position of the bounding box on video, so the position of the approaching vehicle 22 can be conveyed at an appropriate time; and since the detection accuracy for the approaching vehicle 22 is improved, false alarms are reduced. Finally, since the control unit 8 controls the host vehicle 10 when an obstacle is detected around it, contact with the obstacle indicated by the bounding box can be avoided at an appropriate time without any action by the driver.
Embodiment 2
The obstacle detection device 100 according to embodiment 2 will be described. Fig. 12 is a diagram illustrating an outline of the processing added in embodiment 2. In addition to the processing of embodiment 1, the obstacle detection device 100 according to embodiment 2 includes processing for correcting the coordinates of a detected obstacle.
The obstacle detection unit 5 corrects the coordinates of a detected obstacle based on the road surface region and obstacle region output by the road surface detection unit 4. Specifically, the obstacle coordinates of the four corners of the bounding box created in step S303 are corrected using, for example, the luminance-value information of part of the road surface detection areas output by the road surface detection unit 4 (step S401). In the example image of fig. 4, the bounding box 19 is created in step S303 of fig. 10, and in step S203 of fig. 6 a luminance value is assigned to each pixel of each road surface detection area and a histogram is created per area. Comparing the histogram of the road surface detection area 17, which contains the frequency peaks characteristic of the approaching vehicle 22, with that of the road surface detection area 16, which does not, shows that the lowest-luminance peak G4, from the tires and shadow of the approaching vehicle 22, exists only in the road surface detection area 17. From the luminance values assigned to the pixels when the histograms were generated, it can be identified that some obstacle other than road surface exists near the boundary between the road surface detection areas 17 and 16 of the alarm detection area 14, i.e., in the region where G4 was obtained. Based on this information, the obstacle coordinates of the bounding box 19 can be re-estimated as the obstacle coordinates of the bounding box 18, consistent with the histograms obtained in the road surface detection processing. The obstacle coordinates of the four corners of the bounding box 19 around the approaching vehicle 22 are corrected to those of the bounding box 18, and the corrected bounding box 18 is output as the processing result of the obstacle detection unit 5 (step S402). This improves the accuracy of the distance between the host vehicle 10 and the approaching vehicle 22. The correction is not limited to using luminance-value information; when a road surface detection method that does not use histograms is employed, the correction may follow that method.
Fig. 13 is a diagram illustrating an example of an image captured by the obstacle detection device 100 according to embodiment 2. The approaching vehicle in fig. 13 is a two-wheeled vehicle 22a. In the processing of the optical flow calculation unit 3, extracting corner points on the tires is difficult, particularly for the two-wheeled vehicle 22a. In the example of fig. 13, the bounding box 24 is therefore created in step S303 of fig. 10, and in step S203 of fig. 6 the histograms of the road surface detection areas 15 to 17 are generated. Comparing the histogram of the road surface detection area 16, which contains the frequency peak characteristic of the two-wheeled vehicle 22a, with that of the road surface detection area 15, which does not, shows that the lowest-luminance peak, from the tires and shadow of the two-wheeled vehicle 22a, exists only in the road surface detection area 16. From the luminance values assigned to the pixels when the histograms were created, it can be recognized that some obstacle other than road surface exists near the boundary between the road surface detection areas 16 and 15 of the alarm detection area 14. Based on this information, the obstacle coordinates of the four corners of the bounding box 24 can be re-estimated as those of the bounding box 23, consistent with the histograms obtained in the road surface detection processing. The corners of the bounding box 24 around the two-wheeled vehicle 22a are corrected to those of the bounding box 23, and the corrected bounding box 23 is output as the processing result of the obstacle detection unit 5. Since the coordinates of the lower edge of the bounding box 23 are used to estimate the distance to the host vehicle 10, this correction, which moves the estimated position of the two-wheeled vehicle 22a more than 1 meter closer to the alarm detection area 14, allows the driver to be notified of the obstacle's true position.
As described above, in the obstacle detection device 100, the obstacle detection unit 5 corrects the obstacle coordinates of the four corners of the bounding box created from the optical flow calculation unit 3's output using the luminance-value information of part of the road surface detection areas output by the road surface detection unit 4, so the accuracy of the distance between the host vehicle 10 and the approaching vehicle 22 can be improved.
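As a hedged illustration of the correction in step S401, the sketch below extends a bounding box's lower edge down to the lowest obstacle row of the binarized road/obstacle mask within the box's columns (roughly where the low-luminance tire-and-shadow peak G4 indicates the vehicle meets the road); using the binarized mask rather than the raw luminance values, and the helper itself, are assumptions, not the patent's exact procedure.

```python
import numpy as np

def correct_box_bottom(box, road_mask):
    """Move a bounding box's lower edge onto the road/obstacle boundary
    found by the road surface detection unit, since the lower edge is
    what the distance estimate uses. A sketch under the assumption that
    0 marks obstacle pixels in the binarized mask."""
    x, y, w, h = box
    cols = road_mask[:, x:x + w]
    obstacle_rows = np.where((cols == 0).any(axis=1))[0]
    if obstacle_rows.size:
        bottom = int(obstacle_rows.max())   # lowest obstacle pixel row
        if bottom > y + h:                  # boundary lies below the box
            h = bottom - y                  # extend box down to the boundary
    return (x, y, w, h)
```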
Embodiment 3
The obstacle detection device 100 according to embodiment 3 will be described. Fig. 14 is a diagram schematically illustrating the configuration of the obstacle detection device 100. The obstacle detection device 100 according to embodiment 3 includes a vehicle information acquisition unit 25 in addition to the configuration according to embodiment 1.
The vehicle information acquisition unit 25 acquires travel information of the host vehicle 10, such as its speed, yaw rate, and GPS position, for example via the in-vehicle network of the host vehicle 10. The acquired travel information is output to the optical flow calculation unit 3, the road surface detection unit 4, and the obstacle detection unit 5. The optical flow calculation unit 3 uses the travel information when extracting the approaching feature points from the optical flow calculation of the feature points extracted from the plurality of images. Using travel information in the obstacle detection processing prevents false detection and missed detection of obstacles and can improve detection accuracy. A specific example using travel information follows.
The optical flow during a turn differs from the optical flow during straight travel. While the vehicle is turning, structures such as buildings can produce optical flows directed downward in the image, as if an obstacle were approaching. Therefore, by detecting the turn of the host vehicle 10 from the yaw rate in the travel information and removing, during the optical flow calculation, the flow component that the turn itself produces according to the yaw rate value, false detections can be reduced and the obstacle detection accuracy improved.
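One simple way to realize this, under a pure-rotation, small-angle approximation, is to subtract the image shift that the ego-turn alone would produce before testing the flow direction; the focal length in pixels, the frame interval, and the assumption that yaw maps mainly onto the image X axis (which depends on camera orientation) are all illustrative, not the patent's formulation.

```python
def remove_turn_flow(flows, yaw_rate, dt, focal_px):
    """Compensate optical flows for the host vehicle's turn: for small
    angles and pure rotation, the turn shifts the image by roughly
    focal_px * yaw_rate * dt along the axis affected by yaw (assumed
    here to be X). A sketch under stated assumptions."""
    shift = focal_px * yaw_rate * dt  # expected per-frame shift from turning
    return [(pt, (dx - shift, dy)) for pt, (dx, dy) in flows]
```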
As described above, the obstacle detection device 100 includes the vehicle information acquisition unit 25 that acquires the travel information of the host vehicle 10, and the optical flow calculation unit 3 uses the travel information when extracting approaching-feature-point information from the optical flow calculation of the feature points extracted from the plurality of images, so obstacle detection accuracy can be improved.
Embodiment 4
The obstacle detection device 100 according to embodiment 4 will be described. Fig. 15 is a diagram schematically illustrating the configuration of the obstacle detection device 100. The obstacle detection device 100 according to embodiment 4 includes a road surface boundary line tracking unit 26 in addition to the configuration of embodiment 3.
The road surface boundary line tracking unit 26 is provided between the road surface detection unit 4 and the obstacle detection unit 5 and records, over a plurality of consecutive images, the road surface boundary line between the road surface region and the obstacle region within the alarm detection area 14, based on the binarized image output by the road surface detection unit 4. It outputs the road surface boundary lines of the plurality of images to the obstacle detection unit 5 together with the binarized image output by the road surface detection unit 4.
The obstacle detection unit 5 associates the bounding boxes created in each of two consecutive images among the plurality of images, thereby associating obstacles across frames. When a bounding box that had been associated between two images can no longer be associated in the latest image, the positions of the road surface boundary lines of the two consecutive images recorded by the road surface boundary line tracking unit 26 are compared. If the position of the road surface boundary line has not changed between the two images, a bounding box is created in the latest image at the same position as the bounding box of the immediately preceding image, and an obstacle is determined to be present. A specific example using the road surface boundary line follows.
Fig. 16 is a diagram illustrating an example of an image captured by the obstacle detection device 100 according to embodiment 4. The approaching vehicle in fig. 16 is a two-wheeled vehicle 22b traveling parallel to the host vehicle 10. While the two-wheeled vehicle 22b travels in parallel, no relative speed arises between it and the host vehicle 10, so no optical flow can be calculated and the obstacle detection unit 5 cannot detect the obstacle from optical flow. Because obstacles are captured in time series, however, a relative speed did arise while the vehicle was approaching the host vehicle 10, so the obstacle could be detected from optical flow in past images. By recording in advance the road surface boundary lines detected in past images, the bounding box and its flow can be followed from their appearance until they disappear.
Since the road surface detection areas 15 and 17 in the alarm detection area 14 are unaffected by the two-wheeled vehicle 22b, histograms with road-surface characteristics like fig. 7 are created for them, while a histogram like fig. 9 with the characteristics of the two-wheeled vehicle 22b is generated from the road surface detection area 16. While the host vehicle 10 and the two-wheeled vehicle 22b travel in parallel with their relative positions unchanged, the features of the histogram patterns are considered not to change greatly. Even in a state where the obstacle cannot be detected from optical flow, the presence and position of the two-wheeled vehicle 22b can therefore be estimated by comparing the road surface boundary lines of a plurality of consecutive images with that of the latest image.
The obstacle can be presumed to be present once a bounding box has been created and the obstacle recognized. Even when the bounding box suddenly disappears, if the past road surface boundary line recorded by the road surface boundary line tracking unit 26 and the current one are unchanged, the obstacle is determined to be present with zero relative speed with respect to the previous image; a bounding box is created at the same position as in the previous image, and its information is output. This allows an obstacle in the parallel-traveling state to be detected continuously, reducing missed detections.
As described above, the obstacle detection device 100 includes the road surface boundary line tracking unit 26; when the position of the road surface boundary line is unchanged between the recorded images and the latest image, a bounding box is created in the latest image at the same position as in the preceding image and an obstacle is determined to be present. Thus, even when an approaching vehicle travels in parallel and cannot be detected from optical flow, its detection can be continued and missed detections reduced.
The above covers the case where the position of the road surface boundary line does not change between the two images; when it does change, the following processing is performed. When the road surface boundary line has receded relative to the previous image, the obstacle is determined to have fallen back, and a bounding box is created at the position of the road surface boundary line in the latest image. When the road surface boundary line disappears, the obstacle is determined to have left the alarm detection area, and the bounding box information is discarded. When the road surface boundary line has advanced relative to the previous image, the obstacle's position in the latest image is estimated from its moving speed in the previous images. If the road surface boundary line lies above the estimated position in the image, i.e., farther toward the rear of the host vehicle in actual coordinates, the other vehicle is determined to have advanced only slightly even though no optical flow was detected, and a bounding box is created on the road surface boundary line. If the road surface boundary line lies ahead of the estimated position, the other vehicle is determined to be falling back relative to the host vehicle, and a bounding box is likewise created on the road surface boundary line.
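The boundary-line logic of embodiment 4 might be sketched as follows, representing each road surface boundary line by its image row; the tolerance and the re-anchoring of boxes on the new boundary row are assumptions.

```python
def track_with_boundary(prev_boxes, prev_boundary_y, curr_boundary_y, tol=2):
    """If the road surface boundary line has not moved between two
    consecutive images, re-emit the previous bounding boxes (relative
    speed ~0, e.g. a two-wheeled vehicle running in parallel); if it
    has moved, re-anchor each box's lower edge on the new boundary row;
    if it has disappeared, discard the boxes. A sketch."""
    if curr_boundary_y is None:
        return []  # boundary gone: obstacle left the alarm detection area
    if abs(curr_boundary_y - prev_boundary_y) <= tol:
        return prev_boxes  # unchanged: obstacle present at zero relative speed
    # boundary advanced or receded: place boxes on the new boundary line
    return [(x, curr_boundary_y - h, w, h) for (x, _y, w, h) in prev_boxes]
```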
Embodiment 5
The obstacle detection device 100 according to embodiment 5 will be described. In addition to the processing of embodiment 4, embodiment 5 adds processing for the case where no bounding box is created and an obstacle therefore cannot be detected.
When the optical flow calculation unit 3 cannot extract approaching feature points and the obstacle detection unit 5 creates no bounding box, so that no obstacle can be detected, the obstacle detection unit 5 tracks, over a plurality of consecutive images, the distance between the position of the road surface boundary line and the host vehicle 10, and when that distance decreases it detects the coordinates of the road surface boundary line as the coordinates of the obstacle. A specific example of this processing follows.
Fig. 17 is a diagram showing an example of an image captured by the obstacle detection device 100 of embodiment 5, and fig. 18 shows another example. The approaching vehicle in figs. 17 and 18 is a two-wheeled vehicle 22c approaching the host vehicle 10 from near the vanishing point of the image. For an obstacle approaching the camera head-on from near the vanishing point, the optical flow calculation unit 3 cannot extract approaching feature points, and the obstacle detection unit 5 cannot create a bounding box to detect it: because the obstacle comes straight toward the camera, its optical flow cannot be calculated. When no bounding box can be created and the obstacle therefore cannot be detected, the road surface boundary line is used instead.
Near the vanishing point, the luminance-histogram resolution of the alarm detection area 14 is raised in advance, and the alarm detection area 14 in the image is divided more finely. When the two-wheeled vehicle 22c enters the road surface detection area 17, the luminance around the tire contact patch becomes lower than that of the road surface, and a histogram like fig. 9 is generated. Between fig. 17 and fig. 18, this histogram feature moves from the road surface detection area 17 to the road surface detection area 16, and the position of the road surface boundary line moves with it. When the distance between the position of the road surface boundary line and the host vehicle 10 shortens in this way, an obstacle is presumed to exist at the coordinates of the road surface boundary line, and those coordinates are taken as the obstacle coordinates. By recognizing this approach of the road surface boundary line, a vehicle approaching the host vehicle 10 can be estimated and detected even when the optical flow calculation unit 3's output makes it difficult to create a bounding box.
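Embodiment 5's fallback could be sketched like this, again treating the boundary line as an image row that moves down-image (larger row index) as it nears the host vehicle; the frame count and minimum shift are assumptions.

```python
def detect_by_boundary_approach(boundary_rows, min_shift_px=1):
    """When no bounding box could be created, treat a road surface
    boundary line that keeps moving toward the host vehicle over
    consecutive images as an obstacle and report the boundary row as
    the obstacle coordinate. A sketch; thresholds are assumptions."""
    if len(boundary_rows) < 2:
        return None
    steps = [b - a for a, b in zip(boundary_rows, boundary_rows[1:])]
    if all(s >= min_shift_px for s in steps):  # distance to host shrinking
        return boundary_rows[-1]  # obstacle assumed at the boundary line
    return None
```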
As described above, even when proximity feature points cannot be extracted and no bounding box can be created, so that flow-based detection fails, the obstacle detection device 100 can still detect an obstacle: when the distance between the position of the road surface boundary line and the host vehicle 10 decreases across a plurality of images, the coordinates of the road surface boundary line are detected as the coordinates of the obstacle.
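As a usage-level sketch of this fallback, assuming a calibration function row_to_distance that maps an image row to a ground distance (such a mapping exists once the camera is calibrated, but its form here is invented), a detection fires only when the boundary-line distance decreases over the whole tracked window:

```python
from typing import Callable, Optional, Sequence

def obstacle_from_boundary(rows: Sequence[float],
                           row_to_distance: Callable[[float], float]
                           ) -> Optional[float]:
    """rows: road surface boundary-line rows over consecutive frames,
    oldest first. row_to_distance: assumed calibration mapping an image
    row to the distance (in metres) from the host vehicle."""
    dists = [row_to_distance(r) for r in rows]
    # Fire only on a strict decrease over the whole window, so a single
    # noisy frame does not produce a false detection.
    if len(dists) >= 2 and all(b < a for a, b in zip(dists, dists[1:])):
        return rows[-1]  # latest boundary row serves as the obstacle coordinate
    return None
```

For example, boundary rows mapping to 9 m, 7 m, and 5 m over three frames would return the latest row as the obstacle coordinate, while a sequence that stalls or retreats returns nothing.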
While various exemplary embodiments have been described herein, the various features, aspects, and functions described in one or more of them are not limited in their application to a particular embodiment, and can be applied to the embodiments alone or in various combinations.
Therefore, countless modifications not illustrated here are conceivable within the technical scope disclosed in this specification. These include, for example, cases where at least one component is modified, added, or omitted, and cases where at least one component is extracted and combined with components of another embodiment.
Description of the reference symbols
1 imaging unit
2 frame buffer
3 optical flow calculation unit
4 road surface detection unit
5 obstacle detection unit
6 video display unit
7 alarm unit
8 control unit
9 obstacle detection ECU
10 own vehicle
10a side surface
11 camera
12 shooting range
13 vanishing point
14 alarm detection area
15 road surface detection area
16 road surface detection area
17 road surface detection area
18 bounding box
19 bounding box
20 optical flow calculation area
21 non-obstacle
21a crack
21b white line
21c curb
22 approaching vehicle
22a two-wheeled vehicle
22b two-wheeled vehicle
22c two-wheeled vehicle
23 bounding box
24 bounding box
25 vehicle information acquisition unit
26 road surface boundary line tracking unit
100 obstacle detection device
110 processor
111 storage device

Claims (11)

1. An obstacle detection device, comprising:
an imaging unit that is mounted on a vehicle, images the surroundings of the vehicle, and acquires a plurality of images in time series;
an optical flow calculation unit that calculates optical flows of feature points extracted from the plurality of images, extracts feature points approaching the vehicle as proximity feature points, and outputs information of the proximity feature points;
a road surface detection unit that outputs, for each of the plurality of images, a result of dividing the periphery of the vehicle into a road surface region in which the road surface can be identified and an obstacle region in which an object other than the road surface appears; and
an obstacle detection unit that detects an obstacle around the vehicle based on the information of the proximity feature points and information of the road surface region.
2. The obstacle detection device according to claim 1, wherein
the obstacle detection unit detects the obstacle by using information of the proximity feature points included in regions other than the road surface region.
3. The obstacle detection device according to claim 1 or 2, wherein
the obstacle detection unit corrects the coordinates of the detected obstacle based on the road surface region and the obstacle region output from the road surface detection unit.
4. The obstacle detection device according to any one of claims 1 to 3, further comprising
a vehicle information acquisition unit that acquires travel information of the vehicle, wherein
the optical flow calculation unit extracts the proximity feature points by calculating the optical flows of the feature points extracted from the plurality of images using the travel information.
5. The obstacle detection device according to any one of claims 1 to 4, further comprising
a road surface boundary line tracking unit that records a road surface boundary line between the road surface region and the obstacle region over a plurality of consecutive images, wherein
the obstacle detection unit executes a process of associating, between two consecutive images among the plurality of images, the obstacles detected in the respective images, and, when an obstacle associated between the two images in this process cannot be associated in the latest image,
compares the positions of the road surface boundary lines of the two consecutive images recorded by the road surface boundary line tracking unit, and,
when the position of the road surface boundary line does not change between the two images, determines that the obstacle exists in the latest image at the same position as in the image immediately preceding the latest image.
6. The obstacle detection device according to claim 5, wherein,
when the optical flow calculation unit cannot extract the proximity feature points and the obstacle detection unit cannot detect the obstacle,
the obstacle detection unit compares, across the plurality of consecutive images, the distance between the position of the road surface boundary line and the vehicle, and detects the coordinates of the road surface boundary line as the coordinates of the obstacle when the distance decreases.
7. The obstacle detection device according to any one of claims 1 to 6, further comprising
an alarm unit that notifies a driver of the vehicle by sound when the obstacle is detected around the vehicle.
8. The obstacle detection device according to any one of claims 1 to 6, further comprising
an alarm unit that notifies a driver of the vehicle by vibrating a steering wheel when the obstacle is detected around the vehicle.
9. The obstacle detection device according to any one of claims 1 to 8, further comprising
a video display unit that displays the position of the obstacle region as video to notify a driver of the vehicle when the obstacle is detected around the vehicle.
10. The obstacle detection device according to any one of claims 1 to 9, further comprising
a control unit that, when the obstacle is detected around the vehicle, controls the vehicle so as to avoid contact with the obstacle displayed in the obstacle region.
11. An obstacle detection method for detecting an obstacle approaching a vehicle, the method comprising:
a step of imaging the surroundings of the vehicle, and acquiring and outputting a plurality of images in time series;
a step of extracting feature points approaching the vehicle as proximity feature points by calculating optical flows of feature points extracted from the plurality of images, and outputting information of the proximity feature points;
a step of outputting, for each of the plurality of images, a result of dividing the periphery of the vehicle into a road surface region in which the road surface can be identified and an obstacle region in which an object other than the road surface appears; and
a step of detecting the obstacle around the vehicle based on the information of the proximity feature points and information of the road surface region.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-202911 2019-11-08
JP2019202911A JP6949090B2 (en) 2019-11-08 2019-11-08 Obstacle detection device and obstacle detection method

Publications (1)

Publication Number Publication Date
CN112784671A 2021-05-11

Family

ID=75584028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011212140.7A Pending CN112784671A (en) 2019-11-08 2020-11-03 Obstacle detection device and obstacle detection method

Country Status (3)

Country Link
JP (1) JP6949090B2 (en)
CN (1) CN112784671A (en)
DE (1) DE102020213799A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003067752A (en) * 2001-08-28 2003-03-07 Yazaki Corp Vehicle periphery monitoring device
JP2003187228A (en) * 2001-12-18 2003-07-04 Daihatsu Motor Co Ltd Device and method for recognizing vehicle
JP2004246791A (en) * 2003-02-17 2004-09-02 Advics:Kk Alarming method and device for braking automobile
WO2011018999A1 (en) * 2009-08-12 2011-02-17 日本電気株式会社 Obstacle detection device and method and obstacle detection system
JP6450294B2 (en) * 2015-09-30 2019-01-09 株式会社デンソーアイティーラボラトリ Object detection apparatus, object detection method, and program
JP6564682B2 (en) * 2015-10-26 2019-08-21 トヨタ自動車東日本株式会社 Object detection device, object detection method, and object detection program

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102555907A (en) * 2010-12-06 2012-07-11 富士通天株式会社 Object detection apparatus and method thereof
CN107316006A * 2017-06-07 2017-11-03 北京京东尚科信息技术有限公司 Method and system for road obstacle detection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KOICHIRO YAMAGUCHI ET AL.: "Obstacle Detection of Vehicle Front with In-vehicle Monocular Camera", Information Processing Society of Japan, Report of Research, vol. 2005, pp. 69-76 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782447A (en) * 2022-06-22 2022-07-22 小米汽车科技有限公司 Road surface detection method, device, vehicle, storage medium and chip
CN114782447B (en) * 2022-06-22 2022-09-09 小米汽车科技有限公司 Road surface detection method, device, vehicle, storage medium and chip

Also Published As

Publication number Publication date
DE102020213799A1 (en) 2021-05-12
JP2021077061A (en) 2021-05-20
JP6949090B2 (en) 2021-10-13

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination