CN114379562A - Vehicle position detection device and vehicle position detection method

Info

Publication number
CN114379562A
Authority
CN
China
Prior art keywords
vehicle
unit
area
road surface
luminance
Prior art date
Legal status
Granted
Application number
CN202111201980.8A
Other languages
Chinese (zh)
Other versions
CN114379562B (en)
Inventor
冈本卓也
藤好宏树
矢田将大
森考平
竹原成晃
清水浩一
Current Assignee
Mitsubishi Electric Corp
Original Assignee
Mitsubishi Electric Corp
Priority date
Filing date
Publication date
Application filed by Mitsubishi Electric Corp filed Critical Mitsubishi Electric Corp
Publication of CN114379562A
Application granted
Publication of CN114379562B
Legal status: Active

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • B60W40/04 Traffic conditions
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2420/00 Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40 Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2554/00 Input parameters relating to objects
    • B60W2554/40 Dynamic objects, e.g. animals, windblown objects
    • B60W2554/404 Characteristics
    • B60W2554/4041 Position
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2554/00 Input parameters relating to objects
    • B60W2554/80 Spatial relation or speed relative to objects
    • B60W2554/801 Lateral distance

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Mathematical Physics (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The vehicle position detection device includes: a vehicle candidate region extraction unit that extracts a vehicle candidate region including lit headlights from a captured image of the rear side of the host vehicle taken by an imaging unit; a road surface area detection unit that detects, from the captured image, a road surface area from which obstacles have been removed, based on the detection result of a distance measurement unit; a road surface luminance comparison unit that sets a luminance comparison area within the road surface area, ahead of the vehicle candidate area, and compares the luminance obtained from the luminance comparison area; a travel area estimation unit that estimates the area in which another vehicle behind the host vehicle is traveling, based on the comparison result of the road surface luminance comparison unit; and a vehicle position determination unit that determines the position of the other vehicle based on the travel area and the vehicle candidate area.

Description

Vehicle position detection device and vehicle position detection method
Technical Field
The present application relates to a vehicle position detection device and a vehicle position detection method.
Background
The following method has been known: the presence of headlights is determined from an image of an area including the road, and the position of another vehicle approaching the host vehicle from the rear side at night is detected. However, depending on the direction of the optical axis of the headlights, whether they are on high beam or low beam, their beam angle, and so on, the imaged headlights may appear enlarged, and it may become impossible to distinguish another vehicle approaching from behind in the adjacent lane from another vehicle approaching from behind in a lane beyond the adjacent one. If such false detection occurs, an originally unnecessary warning may be issued for another vehicle approaching from behind in a lane other than the host lane and the adjacent lane.
In contrast, Patent Document 1 discloses the following: a vehicle candidate region including a lit headlight is extracted from an image obtained by an imaging means that images the rear side of the host vehicle; a 1st road surface position on the lane adjacent to the host vehicle and a 2nd road surface position on the lane one further out from the adjacent lane are set ahead of the vehicle candidate region; the lane in which another vehicle on the rear side of the host vehicle is traveling is determined from the result of comparing the luminance at the 1st and 2nd road surface positions; and the position of the other vehicle on the determined lane is determined from the extracted vehicle candidate region. In this way, by combining the vehicle candidate region corresponding to the lit headlight with the luminance (road surface reflection) of the road surface positions ahead of it, the lane in which the other vehicle behind the host vehicle is traveling can be grasped, and the position of the other vehicle can be detected accurately.
Documents of the prior art
Patent document
Patent document 1: japanese patent laid-open No. 2015-55968
Disclosure of Invention
Technical problem to be solved by the invention
However, in Patent Document 1 the luminance comparison targets are regions on adjacent lanes. When there is only one lane adjacent to the lane in which the host vehicle is traveling, the region selected as the second comparison target does not lie on the road surface of a lane, so erroneous detection may occur.
In addition, when an object with high luminance, such as a white vehicle, is present in a lane subject to the luminance comparison, the comparison is no longer made between road surfaces under the same conditions, so erroneous detection may likewise occur.
The present application discloses a technique for solving the above problem, and its object is to provide a vehicle position detection device and a vehicle position detection method that can accurately detect the position of another vehicle from the luminance of the road surface reflection of the other vehicle's headlights.
Means for solving the problems
The vehicle position detection device disclosed in the present application includes: an imaging unit that is provided in a vehicle and that images and outputs the surroundings of the vehicle; a distance measuring unit that is provided in the vehicle and detects obstacles around the vehicle; a vehicle candidate region extraction unit that extracts a vehicle candidate region including lit headlights from the captured image of the rear side of the vehicle taken by the imaging unit; a road surface area detection unit that detects, from the captured image, a road surface area from which the obstacles have been removed, based on the detection result of the distance measuring unit; a road surface luminance comparison unit that, for a vehicle candidate area lying within the road surface area, sets a luminance comparison area within the road surface area ahead of the vehicle candidate area and compares the luminance obtained from the luminance comparison area; a travel area estimation unit that estimates the travel area in which another vehicle behind the vehicle is traveling, based on the comparison result of the road surface luminance comparison unit; and a vehicle position determination unit that determines the position of the other vehicle based on the travel area and the vehicle candidate area used in estimating the travel area.
Effects of the invention
According to the vehicle position detection device and the vehicle position detection method disclosed in the present application, the road surface area around the host vehicle is detected and the target areas for the luminance comparison are set within the detected road surface area, so the position of another vehicle approaching from the rear side can be detected accurately regardless of the lane.
Drawings
Fig. 1 is a schematic configuration diagram of a vehicle position detection device according to embodiment 1.
Fig. 2 is a hardware configuration diagram of the arithmetic device according to embodiment 1.
Fig. 3 is a functional block diagram of the vehicle position detection device according to embodiment 1.
Fig. 4 is a functional block diagram of a notification unit of the vehicle position detection device according to embodiment 1.
Fig. 5 is a flowchart illustrating an operation of the vehicle position detection device according to embodiment 1.
Fig. 6 is a flowchart for generating a notification signal to the driver according to embodiment 1.
Fig. 7 is a diagram showing a positional relationship between another vehicle and a host vehicle detected by the vehicle position detection device according to embodiment 1.
Fig. 8 is a diagram showing the positional relationship of light source candidates extracted from an image captured by a camera of the vehicle position detection device according to embodiment 1.
Fig. 9 is a diagram showing the positional relationship of light source candidates extracted from an image captured by the camera of the vehicle position detection device according to embodiment 1 and candidate regions of other vehicles estimated from the types of light sources.
Fig. 10 is a diagram showing a vehicle candidate region and a luminance comparison region corresponding thereto in an image captured by a camera of the vehicle position detection device according to embodiment 1.
Fig. 11 is a diagram showing the positional relationship between the alarm target region and the luminance sample region set by the vehicle position detection device according to embodiment 1.
Fig. 12 is a diagram showing an example of an image generated by the image generating unit of the vehicle position detecting device according to embodiment 1.
Fig. 13 is a diagram showing an example of an image generated by the image generating unit of the vehicle position detecting device according to embodiment 1 and displayed on the video output unit.
Fig. 14 is a functional block diagram of a vehicle position detection device according to embodiment 2.
Fig. 15 is a flowchart illustrating an operation of the vehicle position detection device according to embodiment 2.
Fig. 16 is a diagram showing a positional relationship between another vehicle and the host vehicle detected by the vehicle position detection device according to embodiment 2.
Fig. 17 is a diagram showing an example of an image generated by the image generating unit of the vehicle position detecting device according to embodiment 2.
Fig. 18 is a diagram showing an example of an image generated by the image generating unit of the vehicle position detecting device according to embodiment 2 and displayed on the video output unit.
Fig. 19 is a diagram showing another example of an image generated by the image generating unit of the vehicle position detecting device according to embodiment 2 and displayed on the video output unit.
Detailed Description
Embodiments of a vehicle position detection device and a vehicle position detection method disclosed in the present application will be described below with reference to the drawings. In the drawings, the same reference numerals denote the same or corresponding parts.
Embodiment 1.
Hereinafter, a vehicle position detection device and a vehicle position detection method according to embodiment 1 will be described with reference to the drawings.
Fig. 1 is a schematic configuration diagram of a vehicle position detection device according to embodiment 1.
In fig. 1, a vehicle 1 includes cameras 2, sonar sensors 3, and a vehicle position detection device 10. A plurality of sonar sensors 3 are provided at the front, rear, and left and right sides of the vehicle and are connected to a sonar controller 12 via sonar sensor wiring 8. In fig. 1, four sonar sensors 3 are arranged at each of the front and rear and one at each side, but in the present application it suffices that at least one sonar sensor 3 covers the direction of the road surface image acquired by the camera 2. The cameras 2 are arranged at the front, rear, left, and right of the vehicle; they are provided at least at positions from which the rear side can be imaged, and are connected to a periphery monitoring camera controller 11 via camera wiring 7. In fig. 1, one camera 2 is disposed on each of the front, rear, left, and right sides, but it is sufficient to arrange at least one camera in a direction that allows imaging of the rear side; in the present application, a camera covering the rear or the left rear is therefore preferable. In fig. 1 the left and right cameras 2 are shown mounted at the lower part of the usual side mirrors and the front and rear cameras 2 at the center of the bumpers, but the mounting positions may differ from fig. 1 as long as the object of the present invention can be satisfied. In addition to the sonar controller 12 and the periphery monitoring camera controller 11, the vehicle position detection device 10 includes a sensor 13 and an arithmetic device 14, and these are connected by a communication line 5, for example a CAN (Controller Area Network, registered trademark).
As shown in fig. 2, the arithmetic device 14 includes a processor 1000 and a storage device 2000. Although not shown, the storage device includes a volatile storage device such as a random access memory and a non-volatile auxiliary storage device such as a flash memory; an auxiliary storage device such as a hard disk may be provided instead of the flash memory. The processor 1000 executes a program input from the storage device 2000 and thereby implements some or all of the operations of the flowchart shown in fig. 5 and the functional blocks shown in figs. 3 and 4. In this case, the program is input to the processor 1000 from the auxiliary storage device via the volatile storage device. The processor 1000 may output data such as input/output signals, intermediate values of operations, and operation results to the volatile storage device of the storage device 2000, or may store such data in the auxiliary storage device via the volatile storage device. A logic circuit and an analog circuit may also be used in combination with the processor 1000 and the storage device 2000. The arithmetic device 14 likewise implements some or all of the functional blocks and flowchart operations described in embodiment 2 below.
Next, the functional configuration of the vehicle position detection device according to embodiment 1 will be described with reference to the functional block diagrams of fig. 3 and 4, and the operation of the functional configuration will be described with reference to the flowcharts of fig. 5 and 6.
In embodiment 1, as shown in fig. 7, which illustrates the positional relationship between the other vehicle V2 and the host vehicle V1, and in the image captured by the camera 2 shown in fig. 8, it is assumed that at night there is another vehicle V2 with its headlights h1 and h2 lit, located at the rear side of the host vehicle V1 driven by the driver.
[ functional Structure of vehicle position detection apparatus ]
Fig. 3 is a functional block diagram showing a functional configuration of the vehicle position detection device 10 according to embodiment 1. In the figure, the vehicle position detection device 10 includes an imaging unit 101, a distance measurement unit 102, a light source extraction unit 103, a light source type determination unit 104, a vehicle candidate region extraction unit 105, a road surface region detection unit 106, an alarm target region setting unit 107, a luminance comparison region setting unit 108, a luminance sample region setting unit 109, a road surface luminance comparison unit 110, a travel region estimation unit 111, a vehicle position determination unit 112, a vehicle approach determination unit 113, and a notification unit 114.
The imaging unit 101 includes a camera 2 that can image the rear side of the plurality of cameras 2 shown in fig. 1, a monitoring camera controller 11 for outputting an image captured by the camera 2, and a camera wiring 7 connecting these components.
The distance measuring unit 102 detects obstacles (three-dimensional objects) existing around the host vehicle V1 and measures the distance to them. The distance measuring unit 102 includes the plurality of sonar sensors 3 and the sonar controller 12 shown in fig. 1, and the sonar sensor wiring 8 connecting them. Here, a sonar sensor is exemplified as the distance measurement sensor, but an infrared depth sensor, a millimeter wave radar, or LiDAR (Light Detection and Ranging) may also be used.
The light source extraction unit 103 extracts the light source from the luminance value of the image of the rear side of the own vehicle V1 captured by the camera 2 of the imaging unit 101.
When the light source is extracted by the light source extraction unit 103, the light source type determination unit 104 estimates the type of the light source from the color information for each extracted light source, and estimates whether or not the light source is a headlight of another vehicle V2.
The vehicle candidate region extraction unit 105 extracts a vehicle candidate region indicating the position of the vehicle including the lit headlamp, with respect to the light source of the headlamp estimated as the other vehicle V2, based on the result of the light source type determination unit 104.
The road surface area detection unit 106 detects the road surface area around the host vehicle V1 using the captured image of the rear side of the host vehicle V1 taken by the camera 2 of the imaging unit 101 and the obstacles detected by the sonar sensors 3 of the distance measuring unit 102.
The alarm target area setting unit 107 sets an area for determining whether or not to issue an alarm when another vehicle enters. This region is a region set from the side to the rear in the vicinity of the own vehicle V1.
The brightness comparison area setting unit 108 sets a brightness comparison area in front of the vehicle candidate area extracted by the vehicle candidate area extraction unit 105. The luminance comparison area is an area for determining whether or not the road surface is illuminated by the headlamps by comparing the luminance.
The luminance sample region setting unit 109 sets a luminance sample region, which is a region from which the luminance of a sample to be compared with the luminance value extracted from each of the luminance comparison regions set by the luminance comparison region setting unit 108 is extracted. The luminance sample region is set within the road surface region detected by the road surface region detecting unit 106.
The road surface luminance comparing unit 110 compares the luminance between the region set by the luminance comparison region setting unit 108 and the region set by the luminance sample region setting unit 109.
The travel region estimation unit 111 estimates a region in which the other vehicle V2 travels, based on the comparison result of the road surface brightness comparison unit 110.
The vehicle position determination unit 112 determines the lower end portion of the other vehicle V2 that is in contact with the road surface, based on the vehicle candidate region extracted by the vehicle candidate region extraction unit 105 and the region in which the other vehicle V2 is estimated to travel by the travel region estimation unit 111, and sets this lower end portion as the vehicle position.
The vehicle approach determination unit 113 determines whether the position of the other vehicle V2 has entered the warning region, based on the warning region set by the warning target region setting unit 107 and the vehicle position determined by the vehicle position determination unit 112, and determines from the images acquired in time series whether the other vehicle V2 is approaching the host vehicle V1.
The notification unit 114 determines whether or not to notify the driver based on the determination result of the vehicle approach determination unit 113, and notifies the driver.
Fig. 4 is a functional block diagram showing a functional configuration of the notification unit 114. In the figure, the notification unit 114 includes an output determination unit 1141, an image generation unit 1142, a sound generation unit 1143, and a vibration control unit 1144.
The output determination unit 1141 determines whether or not to output a notification to the driver.
Based on the determination result of the output determination unit 1141, the image generation unit 1142 generates a warning image. The warning image generated by the image generation unit 1142 is superimposed on the image captured by the camera 2, which serves as the background, and output by the image output unit 115. The image output unit 115 corresponds to a display unit provided in the vehicle.
Further, based on the determination result of the output determination unit 1141, the sound generation unit 1143 generates a warning sound, outputs the warning sound from the sound output unit 116, and notifies the driver of the warning content.
Further, based on the determination result of the output determination unit 1141, the vibration control unit 1144 vibrates the steering wheel or the seat of the driver's seat to notify the driver of an alarm.
[ MEANS FOR DETECTING VEHICLE POSITION ]
Next, the operation of the functional configuration of the vehicle position detection device according to embodiment 1 will be described with reference to the flowcharts of fig. 5 and 6 and fig. 7 to 13.
Fig. 5 is a flowchart showing the operation of the vehicle position detection device according to embodiment 1.
First, in step S101, the camera 2 captures a peripheral image of the host vehicle V1. The peripheral image is an image of the rear side of the host vehicle V1. Fig. 7 shows the positional relationship between the other vehicle V2 and the host vehicle V1 detected by the vehicle position detection device according to embodiment 1, in which the camera 2 mounted below the right side mirror of the host vehicle V1 captures an area on the rear side that includes the other vehicle V2. The angle of view S of the camera 2 is set to 60 degrees here, but is not limited to 60 degrees. In the figure, L1 to L4 are white lines delimiting the lanes in which the vehicles travel.
In step S102, obstacles (three-dimensional objects) around the host vehicle V1 are detected using the sonar sensors 3, and the distance to each obstacle is measured. In embodiment 1, the range over which the sonar sensor 3 can measure distance is set to 3 m from the sensor position. However, the measurable distance is not limited to 3 m, and other distance measuring sensors as described above may be used.
In step S103, the light source extraction unit 103 extracts light source candidates from the luminance values of the captured image acquired in step S101. The luminance of the image is represented by 256 levels from 0 (black) to 255 (white); in embodiment 1, for example, portions having a luminance value of 230 or more are extracted as candidates for the headlight light sources of another vehicle V2. The luminance value used to select light source candidates is not limited to 230.
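As a rough illustration of this step, the following sketch (Python, assuming the frame is available as a grayscale NumPy array) extracts connected bright regions as light source candidates; the threshold of 230 follows the example above, while the minimum blob size and the function name are assumptions made for illustration.

```python
import numpy as np
from scipy import ndimage

def extract_light_source_candidates(gray, threshold=230, min_pixels=4):
    """Return centroids of connected bright blobs that may be headlights.

    gray: 2-D uint8 array (0 = black, 255 = white), e.g. the rear-side frame.
    threshold / min_pixels are illustrative values, not fixed by the patent.
    """
    bright = gray >= threshold                 # binary mask of bright pixels
    labels, n = ndimage.label(bright)          # connected-component labelling
    candidates = []
    for idx in range(1, n + 1):
        ys, xs = np.nonzero(labels == idx)
        if xs.size < min_pixels:               # ignore isolated noise pixels
            continue
        candidates.append((float(xs.mean()), float(ys.mean())))  # (u, v) centroid
    return candidates
```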
Here, when headlight light source candidates are extracted, two light sources that lie close to each other can be handled as one pair. Taking motorcycles into account, a single light source can also be extracted as a headlight light source candidate. Fig. 8 shows the light source candidates extracted from the image captured by the camera 2 in fig. 7: h1, h2, O1, and O2 are extracted as light source candidates, and h1 and h2, and O1 and O2, are each treated as a pair. In the figure, the road surfaces illuminated by the headlights of the other vehicle V2 are indicated by R1 and R2.
In step S104, the light source type determination unit 104 extracts the colors of the captured image at the light source candidates h1, h2, O1, and O2 extracted by the light source extraction unit 103, and determines the light source type. For example, when red or green color information is extracted, the light source is judged to be a traffic signal and is removed from the light source candidates. Further, by using the vanishing point in the captured image, the highest point of the road surface area on the image is known. Since the headlights of a vehicle are positioned slightly above the road surface (the lower end of the vehicle), the region in which headlights can appear on the image is limited to the area below a line slightly above the vanishing point. Therefore, the following processing may be included: the vanishing point is calculated from the captured image, a light source located more than a certain number of pixels above the height of the vanishing point is judged not to be a headlight, and that light source is removed from the candidates. The method used to calculate the vanishing point, such as estimation from camera calibration information (the internal parameters of the lens of the camera 2 and the mounting position and orientation of the camera 2) or calculation from the intersection of straight lines detected by optical flow or white line detection, does not affect the effect.
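A minimal sketch of this type filtering is shown below; the red/green colour test, the 20-pixel margin above the vanishing point, and all function and parameter names are illustrative assumptions rather than values given in the text.

```python
def filter_headlight_candidates(candidates, frame_bgr, vanishing_v, margin_px=20):
    """Drop light sources judged to be traffic signals or located too high.

    candidates:   list of (u, v) centroids from the extraction step.
    frame_bgr:    H x W x 3 uint8 colour image (OpenCV-style BGR order assumed).
    vanishing_v:  vertical pixel coordinate of the vanishing point.
    margin_px:    illustrative margin; sources more than this far above the
                  vanishing point are assumed not to be headlights.
    """
    kept = []
    for u, v in candidates:
        # Image v grows downward, so "above the vanishing point" means smaller v.
        if v < vanishing_v - margin_px:
            continue                                   # too high to be a headlight
        b, g, r = frame_bgr[int(v), int(u)].astype(int)  # colour at the centroid
        if r > 180 and g < 120 and b < 120:            # strongly red: signal lamp
            continue
        if g > 180 and r < 120 and b < 160:            # strongly green: signal lamp
            continue
        kept.append((u, v))
    return kept
```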
In step S105, the vehicle candidate region extraction unit 105 extracts a vehicle candidate region including light source candidates of headlamps of the vehicle based on the light source candidates determined by the light source type determination unit 104 among the light source candidates extracted by the light source extraction unit 103.
Fig. 9 is a diagram for explaining the method of extracting vehicle candidate regions: a vehicle candidate region A1 containing h1 and h2 as headlight light source candidates and a vehicle candidate region A2 containing O1 and O2 are set. In practice, because the height of the headlights and their horizontal spacing differ for each vehicle type, the light source candidates h1, h2, O1, and O2 shown in fig. 8 alone also allow the possibility that other vehicles are present in the regions indicated by A3 and A4 in fig. 9. The vehicle candidate region extraction unit 105 therefore extracts four vehicle candidate regions A1, A2, A3, and A4.
In step S106, the road surface area detection unit 106 performs a calculation using the captured image obtained by the camera 2 and the obstacle (three-dimensional object) distance measurement results obtained by the sonar sensors 3, and detects the road surface region RS. For the detection of the road surface region RS, for example, the road surface detection method shown in the present applicant's International Publication No. 2019/202627 can be used. In that method, a histogram of color information is calculated from the surrounding road surface image, and the region in which the vehicle can travel and the obstacle regions are extracted based on the histogram. According to this method, the road surface region RS is roughly the region obtained by removing, from the captured image of fig. 8, the area further outside the white line L3 (on the white line L1 side) in which an obstacle (three-dimensional object) is detected.
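The cited publication describes the histogram method in detail; the sketch below is only a much-simplified, hypothetical illustration of the idea (classify a pixel as road surface when its colour falls in the dominant histogram bins of a reference patch assumed to be road), not the method of International Publication No. 2019/202627.

```python
import numpy as np

def road_surface_mask(frame_bgr, ref_patch, bins=32, coverage=0.95):
    """Very simplified histogram-style road classification (illustrative only).

    frame_bgr: H x W x 3 uint8 image.
    ref_patch: N x 3 uint8 array of pixels sampled from a region assumed to be road.
    A pixel is labelled 'road' when each of its channels falls in one of the
    histogram bins that together cover the bulk of the reference patch.
    """
    step = 256 // bins
    mask = np.ones(frame_bgr.shape[:2], dtype=bool)
    for c in range(3):
        hist = np.bincount(ref_patch[:, c] // step, minlength=bins).astype(float)
        order = np.argsort(hist)[::-1]
        keep, total = set(), 0.0
        for b in order:                       # keep the dominant bins until they
            keep.add(int(b))                  # cover `coverage` of the reference
            total += hist[b]
            if total >= coverage * hist.sum():
                break
        channel_bins = frame_bgr[:, :, c] // step
        mask &= np.isin(channel_bins, list(keep))
    return mask                               # True where the pixel looks like road
```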
From the detected road surface region RS, the edge of the road farthest from the lane in which the host vehicle V1 is traveling is calculated, and the area beyond it is removed from the road surface region RS. This road edge is the white line L1. Fig. 9 shows the road surface region RS after the area outside the white line L1 has been removed.
In step S107, the luminance comparison area setting unit 108 sets luminance comparison areas based on the vehicle candidate regions A1, A2, A3, and A4 extracted by the vehicle candidate region extraction unit 105 and the road surface region RS detected by the road surface area detection unit 106. Vehicle candidate regions outside the road surface region RS detected by the road surface area detection unit 106 are removed from the candidates; in fig. 9, the vehicle candidate regions A2 and A4 are therefore removed, leaving only the vehicle candidate regions A1 and A3. Luminance comparison regions C1 and C2 are set in front of the vehicle candidate regions A1 and A3, respectively.
Fig. 10 shows the vehicle candidate regions A1 and A3 and the corresponding luminance comparison regions C1 and C2 on the image captured by the camera 2. Here, the luminance comparison regions C1 and C2 are set as ellipses elongated toward the camera 2, rather than along the traveling direction of the host vehicle V1 and the other vehicle V2, because the road surface regions R1 and R2 illuminated by the headlights of the other vehicle V2 shown in fig. 7 generally extend toward the camera 2 in the captured image. The luminance comparison regions C1 and C2 are not limited to such ellipses; the regions may be set in any way that allows the luminance value of the road surface illuminated by the headlights of the other vehicle V2 to be extracted.
In step S108, the luminance sample region setting unit 109 sets a luminance sample region to be compared with the luminance comparison regions C1 and C2 set by the luminance comparison area setting unit 108. As shown in fig. 11, the luminance sample region CS to be compared with the luminance comparison regions C1 and C2 of fig. 10 is set within the road surface region RS, which is enclosed by a broken line. The road surface region RS is a region from which obstacles have been removed, and setting the luminance sample region CS used for the luminance comparison within it avoids the problem of the related art, in which another vehicle may not be detected correctly when an obstacle lies in the luminance comparison target area or when there is only one lane besides the lane in which the host vehicle is traveling.
In the present embodiment, the luminance sample region CS is set at the position within the warning target region W (described later) that is closest to the host vehicle V1 and closest to the traveling direction, because such a position is highly likely to be road surface. However, when the notification unit 114 (described later) determined that a warning should be issued for the previous frame, that is, when the other vehicle V2 had entered the warning target region W in the previously captured image, the luminance sample region CS may be illuminated by the headlights of the other vehicle V2. In that case, the luminance sample region CS is set in another part of the road surface region RS, outside the warning target region W. The luminance sample region CS is a region of about 1 m x 1 m, but is not limited to this size.
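The conditional placement described here might be summarised as in the following sketch; the helper name, the rectangle representation, and the fallback logic are assumptions for illustration.

```python
def choose_luminance_sample_region(road_mask, default_region, fallback_region,
                                   warned_in_previous_frame):
    """Pick the luminance sample region CS inside the road surface region RS.

    default_region / fallback_region: (u0, v0, u1, v1) image rectangles, both
    assumed to lie inside the detected road surface; the default one sits close
    to the host vehicle on the traveling-direction side.
    If the previous frame raised a warning, the default area may be lit by the
    other vehicle's headlights, so the fallback area outside W is used instead.
    """
    region = fallback_region if warned_in_previous_frame else default_region
    u0, v0, u1, v1 = region
    if not road_mask[v0:v1, u0:u1].all():      # keep CS on detected road surface
        region = fallback_region
    return region
```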
In step S109, the warning target area setting unit 107 sets the warning target area W for which the host vehicle V1 is to be notified when another vehicle V2 enters it. The warning target area W is, for example, a region extending 2 m laterally from the side of the vehicle and 5 m rearward from the position of the camera 2, and is shown hatched in fig. 11. The warning target area W is not limited to this range.
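In vehicle coordinates the example region W is a simple rectangle, as the sketch below illustrates; the 2 m and 5 m extents follow the example above, and the axis conventions are assumptions.

```python
def in_warning_target_area(x_lateral, y_longitudinal,
                           lateral_extent=2.0, rear_extent=5.0):
    """True if a point (in metres, camera origin) lies in the example region W.

    x_lateral:      lateral offset from the side of the host vehicle, positive
                    toward the adjacent lane (assumed convention).
    y_longitudinal: distance behind the camera position, positive rearward.
    """
    return 0.0 <= x_lateral <= lateral_extent and 0.0 <= y_longitudinal <= rear_extent
```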
In step S110, the road surface luminance comparison unit 110 compares the luminance values of the luminance comparison regions C1 and C2 with that of the luminance sample region CS. Here, the luminance comparison region C1, whose luminance value exceeds that of the luminance sample region CS by 100 or more, is determined to be an area illuminated by headlights. The threshold (determination criterion) for the luminance difference used to judge headlight illumination is not limited to 100.
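A minimal sketch of this comparison is given below, assuming the mean luminance of each region is compared with the example margin of 100; the text itself does not specify how the luminance of a region is aggregated.

```python
import numpy as np

def is_headlight_illuminated(gray, comparison_region, sample_region, margin=100):
    """Compare a luminance comparison region against the luminance sample region CS.

    gray: 2-D uint8 image; regions are (u0, v0, u1, v1) rectangles.
    Using the mean is an assumption: the text only states that the luminance
    values of the two regions are compared with a margin of about 100.
    """
    u0, v0, u1, v1 = comparison_region
    c_mean = float(np.mean(gray[v0:v1, u0:u1]))
    u0, v0, u1, v1 = sample_region
    s_mean = float(np.mean(gray[v0:v1, u0:u1]))
    return (c_mean - s_mean) >= margin
```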
In step S111, the travel region estimation unit 111 estimates the vehicle candidate region A1 corresponding to the luminance comparison region C1, which the road surface luminance comparison unit 110 determined to be illuminated by headlights, as the region in which the other vehicle V2 is present.
In step S112, the vehicle position determination unit 112 determines the position of the other vehicle V2, specifically its lower end. At night, the area near where a vehicle whose headlights are lit contacts the road surface generally has a low luminance value. Therefore, a region with a low luminance value is searched for between the luminance comparison region C1 determined by the road surface luminance comparison unit 110 to be illuminated by headlights and the vehicle candidate region A1 in which the travel region estimation unit 111 estimated the other vehicle V2 to be traveling. For example, a luminance value below 30 is searched for, the lower end LC of the other vehicle V2 is decided, and the vehicle position is thereby determined. The reference (threshold) luminance value used to determine the lower end of the vehicle is not limited to 30 and may be set appropriately according to the road surface condition, weather, and the like. The position of the lower end LC of the other vehicle V2 is illustrated in fig. 10.
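The search for the lower end might look like the following sketch, which scans image rows between the vehicle candidate region A1 and the luminance comparison region C1 for a dark band below the example threshold of 30; the row-wise mean and the function interface are assumptions.

```python
import numpy as np

def find_vehicle_lower_end(gray, u0, u1, v_candidate_bottom, v_comparison_top,
                           dark_threshold=30):
    """Search for the dark band where the other vehicle meets the road surface.

    gray: 2-D uint8 image.
    u0, u1: horizontal extent shared by candidate region A1 and comparison region C1.
    The scan runs from the bottom of A1 down to the top of C1 (image v grows
    downward); the first row whose mean luminance drops below `dark_threshold`
    is returned as the lower end LC, or None if no such row is found.
    """
    for v in range(v_candidate_bottom, v_comparison_top):
        row = gray[v, u0:u1]
        if float(row.mean()) < dark_threshold:
            return v
    return None
```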
In step S113, the vehicle position determination unit 112 determines whether the lower end LC of the other vehicle V2 is within the warning target area W set by the warning target area setting unit 107. If the lower end LC of the other vehicle V2 is not in the warning target area W (no in step S113), the processing from step S114 onward is not performed, the flow returns to step S101, and the next captured image input from the rear-side camera 2 is processed in the same manner.
When the lower end LC of the other vehicle V2 is within the warning target area W in step S113 (yes in step S113), the process proceeds to step S114, and the vehicle approach determination unit 113 determines whether the other vehicle V2 is approaching the host vehicle V1. The lower end LC of the other vehicle V2 appears in the image captured by the camera 2, and when the lower end LC moves closer to the host vehicle V1 in time series, it is determined that the other vehicle is approaching the host vehicle. When it is determined that the other vehicle V2 on the rear side of the host vehicle V1 is approaching the host vehicle V1 (yes in step S114), the process proceeds to step S115, and the output determination unit 1141 of the notification unit 114 decides to notify the driver. When it is determined that the other vehicle V2 on the rear side of the host vehicle V1 is not approaching the host vehicle V1 (no in step S114), the process returns to step S101, and the next captured image input from the rear-side camera 2 is processed in the same manner.
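The time-series check can be illustrated as follows; the history length and the criterion of a monotonically descending lower end are assumptions made for this sketch.

```python
from collections import deque

class ApproachJudge:
    """Judge whether the other vehicle is approaching from its lower-end track.

    In the rear-side image the lower end LC moves downward (larger v) as the
    other vehicle closes in on the host vehicle, so a consistently increasing
    v over recent frames is treated as 'approaching' (illustrative criterion).
    """
    def __init__(self, history=5):
        self.track = deque(maxlen=history)

    def update(self, lower_end_v):
        if lower_end_v is None:            # vehicle lost in this frame
            self.track.clear()
            return False
        self.track.append(lower_end_v)
        if len(self.track) < 2:
            return False
        values = list(self.track)
        diffs = [b - a for a, b in zip(values, values[1:])]
        return all(d >= 0 for d in diffs) and sum(diffs) > 0
```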
Next, a case where a signal to be notified to the driver is generated in step S115 will be described with reference to the flowchart of fig. 6. The following description will be given taking, as an example, a video signal, an audio signal, and a vibration as a notification signal to the driver.
In step S115, when the output determination unit 1141 of the notification unit 114 determines to notify the driver, the image generation unit 1142 included in the notification unit 114 generates an alarm image in step S1151, and outputs the alarm image to the video output unit 115 in step S1152.
Fig. 12 and fig. 13 show examples of the warning image: the warning target area W set by the warning target area setting unit 107 and the lower end LC of the other vehicle V2 are drawn on the image captured by the camera 2, and the display areas of the warning images WI101 and WI102 are superimposed on the image. Fig. 12 corresponds to the state in which the lower end LC of the other vehicle V2 has been determined in step S112 but the lower end LC is not in the warning target area W, so the warning image WI101 is not displayed. Fig. 13 shows the state in which it is determined in step S113 that the lower end LC is within the warning target area W, it is determined in step S114 that the lower end LC descends in time series and approaches the host vehicle V1, and the notification decision is made in step S115. The warning image generated in step S1151 is displayed as the warning image WI102 in fig. 13. By visually confirming that the warning image WI102 is displayed, the driver can grasp that the other vehicle V2 is approaching the rear side of the host vehicle V1.
In step S1151, the images of figs. 12 and 13 are generated as the video signals for notifying the driver, and the warning image WI102 is displayed only when a warning needs to be notified. Therefore, no warning is presented to the driver in the state of fig. 12.
The warning images WI101 and WI102 are not limited to the form shown; any display may be used as long as the driver can visually confirm the warning.
In step S115, when the output determination unit 1141 of the notification unit 114 determines to notify the driver, the sound generation unit 1143 included in the notification unit 114 generates an alarm sound in step S1153, and outputs the alarm sound to the sound output unit 116 in step S1154.
The warning sound is a sound signal to be notified to the driver, and the driver can grasp that the other vehicle V2 is approaching the rear side of the own vehicle V1 by the output of the warning sound.
Further, the driver may be notified of both display of the warning image and output of the warning sound.
In step S115, when the output determination unit 1141 of the notification unit 114 determines to notify the driver, in step S1155, the vibration control unit 1144 included in the notification unit 114 vibrates the steering wheel.
By vibrating the steering wheel, the driver can grasp that another vehicle V2 is approaching the rear side of the own vehicle V1.
The warning notification to the driver is not limited to vibrating the steering wheel; a vibration device may be incorporated into the driver's seat or the steering wheel to give a tactile warning.
Further, the display of the warning image and the output of the warning sound may be combined and notified to the driver in a variety of notification methods.
The operations of steps S101 to S115 shown in fig. 5 and 6 are repeated while the host vehicle V1 is traveling.
As described above, according to embodiment 1, the road surface region RS from which obstacles on the rear side of the host vehicle have been removed is detected using the camera 2 and the sonar sensors 3, the vehicle candidate regions A1 to A4 in which other vehicles may exist are extracted from the rear-side image obtained by the camera 2, the luminance comparison regions C1 and C2 are set in the road surface region RS in front of the vehicle candidate regions A1 and A3 located within the road surface region RS, and the travel region of the other vehicle V2 is estimated, and a warning notified, by comparing their luminance with that of the luminance sample region CS set within the road surface region RS. Since the luminance comparison regions C1 and C2 and the luminance sample region CS are all compared within the road surface region RS, the detection accuracy of the other vehicle V2 is improved regardless of the lane, and false warnings such as those of the related art are suppressed.
Therefore, it is possible to realize an accurate warning about a nearby vehicle approaching the host vehicle at night, and the driver can grasp the situation around the host vehicle more accurately.
Embodiment 2.
The vehicle position detection device and the vehicle position detection method according to embodiment 2 will be described below with reference to the functional block diagram of fig. 14, the flowchart of fig. 15, and fig. 16 to 19.
[ functional Structure of vehicle position detection apparatus ]
Fig. 14 is a functional block diagram showing the functional configuration of the vehicle position detection device 10 according to embodiment 2. Only the points that differ from fig. 3 of embodiment 1 are described; the description of corresponding parts is omitted. In fig. 14, the vehicle position detection device 10 includes a lane area setting unit 221 and a traveling lane estimation unit 222. The notification unit 214 has the functional configuration shown in fig. 4, as in embodiment 1.
The lane area setting unit 221 sets which area on the captured image corresponds to which lane area with respect to the lane existing around the host vehicle V1.
The road surface luminance comparison unit 210 compares the luminance values of the plurality of luminance comparison areas set by the luminance comparison area setting unit 208. Whereas in embodiment 1 each of the plurality of luminance comparison regions is compared with the luminance sample region set by the luminance sample region setting unit 109, embodiment 2 has no luminance sample region setting unit and instead compares the luminance values of the plurality of luminance comparison regions with one another.
The traveling lane estimation unit 222 estimates the lane in which the other vehicle V2 is traveling, based on the results of the lane area setting unit 221 and the road surface luminance comparison unit 210. Fig. 16 shows the positional relationship between the host vehicle V1 and the other vehicle V2; the traveling lane estimation unit 222 estimates that the other vehicle V2 is traveling in the lane LN between the white lines L1 and L2. In other words, the traveling lane estimation unit 222 refines the travel region of the other vehicle, as estimated by the travel region estimation unit of embodiment 1, into a traveling lane using the result of the lane area setting unit 221.
The vehicle position specifying unit 212 specifies the position of the other vehicle V2 based on the vehicle candidate region extracted by the vehicle candidate region extraction unit 205 and the lane in which the other vehicle V2 estimated by the traveling lane estimation unit 222 travels.
[ MEANS FOR DETECTING VEHICLE POSITION ]
Next, the operation of the functional configuration of the vehicle position detection device according to embodiment 2 will be described with reference to the flowchart of fig. 15 and fig. 16 to 19.
Fig. 15 is a flowchart showing the operation of the vehicle position detection device according to embodiment 2.
Steps S201 to S206 are the same as steps S101 to S106 shown in the flowchart of fig. 5 of embodiment 1, and therefore, the description will be simplified and the detailed description will be omitted.
In step S201, the camera 2 captures a rear-side peripheral image of the vehicle V1.
In step S202, an obstacle (three-dimensional object) around the vehicle V1 is detected using sonar sensor 3, and the distance to the obstacle is measured.
In step S203, the light source extraction unit 203 extracts light source candidates from the luminance values of the captured image acquired in step S201.
In step S204, the light source type determination unit 204 extracts the colors of the captured images of the light source candidates extracted by the light source extraction unit 203, and determines the light source type.
In step S205, the vehicle candidate region extraction unit 205 extracts a vehicle candidate region including light source candidates for headlamps of the vehicle based on the light source candidates determined by the light source type determination unit 204 among the light source candidates extracted by the light source extraction unit 203.
In step S206, the road surface area detection unit 206 performs calculation using the captured image obtained by the camera 2 and the obstacle (three-dimensional object) distance measurement result obtained by the sonar sensor 3, and detects the road surface area RS.
In step S207, the lane area setting unit 221 sets which area on the captured image corresponds to which lane, for the lanes existing around the host vehicle V1. Specifically, in fig. 16 for example, the lane areas around the host vehicle V1 are estimated and set based on the white lines L1 to L3 detected around the host vehicle V1, the traveling direction of the host vehicle V1, the traveling direction of other vehicles estimated from time-series changes in the road surface region detected by the road surface area detection unit 206, and the like.
Step S208 and step S209 are the same as step S107 and step S109 described in embodiment 1, respectively.
That is, in step S208, the luminance comparison area setting unit 208 sets the luminance comparison area based on the plurality of vehicle candidate areas extracted by the vehicle candidate area extraction unit 205 and the road surface area detected by the road surface area detection unit 206.
In step S209, the alarm target area setting unit 207 sets an alarm target area W to be notified to the host vehicle V1 when another vehicle V2 enters the area.
In step S210, the road surface luminance comparison unit 210 extracts the luminance values of the luminance comparison regions C1 and C2 set by the luminance comparison area setting unit 208 and compares them. The luminance comparison regions C1 and C2 are the same as those shown in fig. 10 of embodiment 1. In embodiment 1, each luminance value was compared against the luminance sample region CS set by the luminance sample region setting unit 109; in embodiment 2, whether a luminance comparison region is illuminated by headlights is determined by comparing the luminance comparison regions with each other.
When the luminance comparison region C1 and the luminance comparison region C2 are compared, if both regions include a portion whose luminance value is at or above a predetermined threshold, for example 150, both regions are determined to be illuminated by headlights; in this case the traveling lane estimation unit 222, described later, treats both lanes as traveling lanes. When the luminance value of the luminance comparison region C1 exceeds that of the luminance comparison region C2 by, for example, 100 or more, the luminance comparison region C1 is determined to be the headlight-illuminated region. When the luminance values of the luminance comparison regions C1 and C2 are both 100 or less, the traveling lane estimation unit 222, described later, determines that there is no traveling vehicle. In the example of embodiment 2, the luminance value of the luminance comparison region C1 exceeds that of the luminance comparison region C2 by 100 or more, so the luminance comparison region C1 is determined to be the headlight-illuminated region. The threshold (determination criterion) for the luminance difference used to judge headlight illumination is not limited to 100, and the reference for the luminance comparison may be set in consideration of the weather, the road surface condition, the speed of the traveling vehicle, and the like.
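The branching just described can be summarised by the sketch below; the thresholds of 150 and 100 follow the example values above, while the use of the region maximum and mean and the two-flag return value are assumptions.

```python
import numpy as np

def judge_illuminated_regions(gray, region_c1, region_c2,
                              abs_threshold=150, diff_threshold=100):
    """Decide which of two luminance comparison regions are lit by headlights.

    Returns (c1_lit, c2_lit). Regions are (u0, v0, u1, v1) rectangles in a
    2-D uint8 image; example thresholds follow the text above.
    """
    def stats(region):
        u0, v0, u1, v1 = region
        patch = gray[v0:v1, u0:u1]
        return float(patch.max()), float(patch.mean())

    max1, mean1 = stats(region_c1)
    max2, mean2 = stats(region_c2)
    if max1 >= abs_threshold and max2 >= abs_threshold:
        return True, True                      # both lanes treated as traveling lanes
    if mean1 - mean2 >= diff_threshold:
        return True, False                     # only C1 judged as headlight irradiation
    if mean2 - mean1 >= diff_threshold:
        return False, True                     # only C2 judged as headlight irradiation
    return False, False                        # no traveling vehicle inferred
```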
In embodiment 2, when there are two lanes to the side of the lane in which the host vehicle is traveling, two luminance comparison regions are set and their luminance values are compared. Therefore, when the number of lanes to the side of the host vehicle's lane is one or fewer, no luminance comparison region is set and no luminance comparison is performed. Whether there are two or more lanes to the side of the host vehicle's lane is estimated from white line detection, the road surface region, or the like; the method of estimating the number of lanes is not limited to these and does not affect the effect.
Here, as shown in fig. 10 for example, the luminance comparison regions C1 and C2 are set within the road surface region RS detected by the road surface area detection unit 206. As in embodiment 1, setting the luminance comparison regions C1 and C2 within the road surface region RS avoids the problem of the related art, in which another vehicle may not be detected correctly when an obstacle lies in the luminance comparison target area or when there is only one lane besides the lane in which the host vehicle is traveling.
In step S211, the traveling lane estimation unit 222 estimates, based on the result of the road surface luminance comparison unit 210, which of the lanes set by the lane area setting unit 221 corresponds to the lane in which the other vehicle V2 is traveling. Here, since the road surface luminance comparison unit 210 determined that the luminance comparison region C1 is the headlight-illuminated region, it is estimated in fig. 16 that the other vehicle V2 is traveling in the lane LN.
In step S212, the vehicle position determination unit 212 determines the vehicle position, that is, the lower end LC of the vehicle, as in embodiment 1, based on the information on the lane LN of the other vehicle V2 estimated by the traveling lane estimation unit 222, the vehicle candidate region A1 extracted by the vehicle candidate region extraction unit 205, and the luminance comparison region C1 determined by the road surface luminance comparison unit 210 to be the headlight-illuminated region.
In step S213, the vehicle position specifying unit 212 determines whether or not the lower end LC of the other vehicle V2 is present in the warning target area W set by the warning target area setting unit 207. When the lower end LC of the other vehicle V2 is located in the warning target area W (yes in step S213), the process proceeds to step S215, and it is determined that the driver is notified. If the lower end LC of the other vehicle V2 is not in the alarm target area W (no in step S213), the process proceeds to step S214.
In step S214, the vehicle position determination unit 212 determines whether the lower end LC of the other vehicle V2 is in one of the two lanes adjacent to the host vehicle V1. That is, even if the other vehicle V2 is not within the warning target area W, the warning target area W is set over the lanes adjacent to the host vehicle V1, so it is determined whether the other vehicle V2 is in a lane in which the warning target area W is set. If the lower end LC of the other vehicle V2 is in one of the two lanes adjacent to the host vehicle V1 (yes in step S214), the process proceeds to step S215 and it is decided to notify the driver. If the lower end LC of the other vehicle V2 is not in either of the two lanes adjacent to the host vehicle V1 (no in step S214), the subsequent processing is not performed, the process returns to step S201, and the next captured image input from the rear-side camera 2 is processed in the same manner.
Note that if the lane LN estimated by the traveling lane estimation unit 222 in step S211 differs from the lane containing the warning target region W, steps S213 and S214 can be skipped and the process can return directly to step S201.
Next, a case where a signal to be notified to the driver is generated in step S215 will be described with reference to the flowchart of fig. 6 shown in embodiment 1. As in embodiment 1, the notification signal to the driver is a video signal, an audio signal, or a vibration.
In step S215, when the output determination unit 1141 of the notification unit 214 determines to notify the driver, the image generation unit 1142 included in the notification unit 214 generates an alarm image in step S1151, and outputs the alarm image to the video output unit 115 in step S1152.
Figs. 17 to 19 show examples of the warning image: the warning target area W set by the warning target area setting unit 207 and the lower end LC of the other vehicle V2 are drawn on the image captured by the camera 2, and the display areas of the warning images WI201 to WI203 are superimposed on the image. Fig. 17 corresponds to the state in which the lower end LC of the other vehicle V2 has been determined in step S212 but the lower end LC is neither in the warning target area W nor in the same lane as the warning target area W, so the warning image WI201 is not displayed. Fig. 18 shows the state in which it is determined in step S213 that the lower end LC is not within the warning target area W, but it is determined in step S214 that the lower end LC of the other vehicle V2 is in a lane adjacent to the host vehicle V1; in this figure, an image calling for attention is displayed as the warning image WI202 generated in step S1151. Fig. 19 shows the state in which it is determined in step S213 that the lower end LC is within the warning target area W; in this figure, the warning image generated in step S1151 is displayed as the warning image WI203.
As described above, the driver can visually confirm that the warning image WI203 is displayed, and can grasp that the other vehicle V2 is approaching the rear side of the own vehicle V1.
In embodiment 2, when it is determined that a lane adjacent to the host vehicle V1 matches the traveling lane LN of the other vehicle V2, the warning image WI202 of fig. 18 indicates that another vehicle is present in the adjacent lane behind the host vehicle, even if the lower end LC indicating the vehicle position is not in the warning target area W. This can prompt the driver's attention in advance, before the warning image WI203 is displayed.
In step S1151, the images of fig. 17 to fig. 19 are generated, and the warning images WI202 and WI203 are displayed only when a warning notification is required; in the case of fig. 17, therefore, nothing is presented to the driver.
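The image generation in steps S1151 and S1152 can be pictured with the short sketch below. It is only an illustration: the colours, the translucent-patch rendering, and the render_warning and display_area names are assumptions, since the description specifies only that WI202 is an attention display and WI203 an alarm display superimposed on the captured image.

```python
import numpy as np

# Hypothetical colours for the attention display (WI202) and the alarm display (WI203);
# the description does not specify their appearance, only their roles.
OVERLAY_COLOURS = {
    "caution": (0, 180, 255),
    "alarm":   (0, 0, 255),
}

def render_warning(frame: np.ndarray, level: str, display_area) -> np.ndarray:
    """Sketch of steps S1151/S1152: superimpose a warning display on the captured image.

    frame        -- H x W x 3 uint8 image from the rear-side camera
    display_area -- (x0, y0, x1, y1) pixel rectangle reserved for the warning image
    """
    colour = OVERLAY_COLOURS.get(level)
    if colour is None:                       # the case of fig. 17: nothing is presented
        return frame
    x0, y0, x1, y1 = display_area
    out = frame.copy()
    # Blend a translucent coloured patch over the display area of the warning image.
    out[y0:y1, x0:x1] = (0.6 * out[y0:y1, x0:x1] + 0.4 * np.array(colour)).astype(np.uint8)
    return out                               # then handed to the video output unit 115
```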
Note that the warning images WI201 to WI203 are not limited to the illustrated forms; any display that the driver can visually recognize as a warning may be used.
In step S215, when the output determination unit 1141 of the notification unit 214 determines to notify the driver, the sound generation unit 1143 included in the notification unit 214 generates an alarm sound in step S1153, and outputs the alarm sound to the sound output unit 116 in step S1154.
At this time, in the state of fig. 18, in which the lower end LC of the vehicle is determined not to be in the warning target area W but the lane LN of the other vehicle V2 matches a lane adjacent to the host vehicle V1, a sound is generated that conveys a message such as "another vehicle V2 is present in the lane adjacent to the traveling lane of the host vehicle". In the state of fig. 19, in which the lower end LC of the vehicle is determined to be inside the warning target area W, a sound is generated that conveys a message such as "another vehicle V2 is present in the vicinity of the host vehicle, in the lane adjacent to the lane in which the host vehicle is traveling".
Further, the driver may be notified by both the display of the warning image and the output of the warning sound.
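A minimal sketch of the message selection in steps S1153 and S1154 follows. The exact wording and the speech back end are not specified in the embodiment; the speak() callable and the message strings below are placeholders derived from the two messages quoted above.

```python
# Placeholder messages following the two examples quoted above; speak() stands in for
# whatever text-to-speech or recorded-audio back end the sound output unit 116 uses.
MESSAGES = {
    "caution": "Another vehicle is present in the lane adjacent to the traveling lane of the host vehicle.",
    "alarm": "Another vehicle is present near the host vehicle in the lane adjacent to the traveling lane of the host vehicle.",
}

def notify_by_sound(level: str, speak=print) -> None:
    """Sketch of steps S1153/S1154: generate the alarm sound and pass it to the sound output unit."""
    message = MESSAGES.get(level)
    if message is not None:
        speak(message)
```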
In step S215, when the output determination unit 1141 of the notification unit 214 determines to notify the driver, the vibration control unit 1144 included in the notification unit 214 vibrates the steering wheel in step S1155. The steering wheel may be vibrated both when, as in fig. 18, the lower end LC of the vehicle is not in the warning target area W but the lane LN of the other vehicle V2 matches a lane adjacent to the host vehicle V1, and when, as in fig. 19, the lower end LC of the vehicle is determined to be inside the warning target area W; if the intensity (frequency or amplitude) of the vibration is increased in the case of fig. 19, the degree of urgency is conveyed to the driver more readily. By the vibration of the steering wheel, the driver can grasp that the other vehicle V2 is approaching the rear side of the host vehicle V1.
The warning notification to the driver is not limited to vibration of the steering wheel; a vibration device may be attached to the driver's seat or to the steering wheel so that the warning is given in a tactile manner.
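The two-level vibration described above can be sketched as follows; the actuator interface, the frequency and amplitude values, and the profile names are assumptions made only to illustrate that the fig. 19 case is driven more strongly than the fig. 18 case.

```python
# Illustrative two-level vibration profiles; the actuator interface (frequency in Hz,
# amplitude 0..1) and the concrete values are assumptions for this sketch.
VIBRATION_PROFILES = {
    "caution": {"frequency_hz": 25.0, "amplitude": 0.3},   # the fig. 18 state
    "alarm":   {"frequency_hz": 50.0, "amplitude": 0.8},   # the fig. 19 state, driven more strongly
}

def vibrate_for_warning(level: str, actuator) -> None:
    """Sketch of step S1155: drive the steering-wheel (or seat) vibration device."""
    profile = VIBRATION_PROFILES.get(level)
    if profile is not None:
        actuator.vibrate(**profile)
```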
Further, the display of the warning image, the output of the warning sound, and the vibration may be combined so that the driver is notified by a plurality of notification methods.
The operations of steps S201 to S215 shown in fig. 15 and 6 are repeated while the host vehicle V1 is traveling.
As described above, embodiment 2 provides the same effects as embodiment 1. That is, the road surface area RS from which obstacles on the rear side of the vehicle have been removed is detected using the camera 2 and the sonar sensor 3, the vehicle candidate areas A1 to A4 in which other vehicles may exist are extracted from the rear-side image obtained by the camera 2, and for the vehicle candidate areas A1 and A3 located in the road surface area RS, the luminance comparison areas C1 and C2 are set in the road surface area RS in front of them. The luminances of the luminance comparison areas C1 and C2 are then compared to estimate the travel area of the other vehicle V2 and to issue a warning. Because the luminance comparison areas C1 and C2 that are compared both lie in the road surface area RS, the detection accuracy of the other vehicle V2 is improved regardless of the lane, and false alarms such as those of the related art are suppressed.
Therefore, an accurate warning about a nearby vehicle approaching the host vehicle at night can be realized, and the driver can grasp the situation around the host vehicle more accurately.
In addition, since in embodiment 2 the lane area is detected and the lane in which the other vehicle V2 is traveling is estimated in advance, the detection accuracy of the other vehicle V2 is further improved.
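The core luminance comparison summarized above can be illustrated with the sketch below, assuming a grayscale rear-side image, a boolean road-surface mask corresponding to RS, and rectangular candidate areas; the size and placement of the comparison areas and the "brightest wins" rule are simplifying assumptions, not the claimed implementation.

```python
import numpy as np

def estimate_travel_area(gray: np.ndarray, road_mask: np.ndarray, candidates, depth_px: int = 40):
    """For every vehicle candidate area lying in the road surface area RS, measure the mean
    luminance of a comparison area set in the road surface directly in front of it, and return
    the candidate whose comparison area is brightest.

    gray       -- H x W grayscale rear-side image
    road_mask  -- H x W boolean mask of the road surface area RS (obstacles removed)
    candidates -- objects with x_min, x_max, y_max pixel attributes (A1 .. A4)
    depth_px   -- assumed height of the comparison area in pixels
    """
    best_cand, best_luma = None, -1.0
    for cand in candidates:
        cx = int((cand.x_min + cand.x_max) // 2)
        cy = min(int(cand.y_max), gray.shape[0] - 1)
        if not road_mask[cy, cx]:              # candidate is not in the road surface area -> skip
            continue
        # Comparison area (corresponding to C1, C2, ...) just below the candidate in the image,
        # i.e. the road between the other vehicle and the host vehicle, clipped to the frame.
        y0, y1 = cy, min(cy + depth_px, gray.shape[0])
        x0, x1 = max(int(cand.x_min), 0), min(int(cand.x_max), gray.shape[1])
        patch = gray[y0:y1, x0:x1][road_mask[y0:y1, x0:x1]]   # keep road-surface pixels only
        if patch.size == 0:
            continue
        luma = float(patch.mean())
        if luma > best_luma:
            best_cand, best_luma = cand, luma
    return best_cand
```

Intuitively, the candidate whose front road surface is illuminated by the other vehicle's headlights yields the highest mean luminance and is returned as the travel area of the other vehicle V2.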
[ Other Modifications ]
In embodiments 1 and 2 described above, the displays of fig. 12 and fig. 17, which show the positional relationship between the host vehicle V1 and the detected other vehicle V2 when no notification is given, are not presented to the driver. However, after the positions are determined in steps S112 and S212, the image generation unit 1142 of the notification unit 114 or 214 may superimpose the positional relationship of the vehicles on the image captured by the camera 2 and present it to the driver, and any of the warning images WI101, WI102, and WI201 to WI203 may be displayed when a notification is required.
In embodiment 1 described above, no notification is given when it is determined in step S114 that the other vehicle V2 is not approaching the host vehicle V1; however, for example, an image indicating a warning state intermediate between the warning image WI101 and the warning image WI102 may be generated to notify the driver that the other vehicle V2 is present in the vicinity behind the host vehicle V1.
Although embodiment 2 is not provided with a vehicle approach determination unit, one may be provided in the same manner as in embodiment 1 so that a warning is issued in accordance with the approach state of the other vehicle.
In embodiment 2 as well, the luminance sample region CS may be set in the road surface area RS and its luminance compared with that of the luminance comparison area.
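A sketch of this sample-region variant is given below; the slice-based region representation and the ratio threshold are assumptions used only to show the comparison of a comparison area against the luminance sample region CS.

```python
import numpy as np

def lit_by_headlights(gray: np.ndarray, comparison_area, sample_area, ratio: float = 1.5) -> bool:
    """Compare the mean luminance of a comparison area with that of the luminance sample
    region CS taken from the road surface area RS.  Each area is a (row_slice, col_slice)
    pair; the ratio threshold is an assumed tuning value, not a figure from the description."""
    comparison_luma = float(gray[comparison_area].mean())
    sample_luma = float(gray[sample_area].mean())
    return comparison_luma > ratio * sample_luma
```

For example, lit_by_headlights(gray, (slice(300, 340), slice(100, 200)), (slice(350, 390), slice(420, 520))) returns True when the comparison area is markedly brighter than CS, which can be taken as evidence that it is illuminated by another vehicle's headlights.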
In embodiment 1, as in embodiment 2, a lane area setting unit may be provided to enable setting of a lane.
Although embodiments 1 and 2 show, in fig. 7 and fig. 16, a positional relationship in which the host vehicle V1 and the other vehicle V2 are separated by two lanes, this is merely an example; the other vehicle V2 can of course be detected with high accuracy even when the host vehicle V1 and the other vehicle V2 are in adjacent lanes.
While various exemplary embodiments and examples are described herein, the various features, aspects, and functions described in one or more embodiments are not limited in their application to a particular embodiment, but may be applied to embodiments alone or in various combinations.
Therefore, it is considered that numerous modifications not illustrated are also included in the technical scope disclosed in the present specification. For example, the present invention includes a case where at least one of the components is modified, added, or omitted, and a case where at least one of the components is extracted and combined with the components of the other embodiments.
Description of the reference symbols
1 vehicle
2 camera
3 sonar sensor
5 communication line
7 Camera Wiring
8 sonar sensor wiring
10 vehicle position detecting device
11 monitoring camera controller
12 sonar controller
13 other sensors
14 arithmetic device
101, 201 imaging unit
102, 202 distance measuring unit
103, 203 light source extraction unit
104, 204 light source type determination unit
105, 205 vehicle candidate region extraction unit
106, 206 road surface area extraction unit
107, 207 warning target area setting unit
108, 208 luminance comparison area setting unit
109 luminance sample region setting unit
110, 210 road surface luminance comparison unit
111 travel region estimation unit
112, 212 vehicle position determination unit
113 vehicle approach determination unit
114, 214 notification unit
115 image output unit
116 sound output unit
221 lane area setting unit
222 lane estimation unit
1141 output judging unit
1142 image generating part
1143 Sound generating part
1144 vibration control part
1000 processor
2000 storage device
V1 own vehicle
V2 other vehicles
S viewing angle of camera
L1 to L4 white lines
R1, R2 road surface illuminated by headlights of other vehicle
h1, h2 headlamps of other vehicle
O1, O2 candidate light sources (light sources other than headlamps)
A1 to A4 vehicle candidate areas
RS road surface area
C1, C2 luminance comparison areas
CS luminance sample region
LC lower end of vehicle (vehicle position)
LN lane in which other vehicle is present
W warning target area
WI101, WI102, WI201 to WI203 warning images.

Claims (12)

1. A vehicle position detecting apparatus, characterized by comprising:
an imaging unit that is provided in a vehicle, and that images and outputs the surroundings of the vehicle;
a distance measuring unit that is provided in the vehicle and detects an obstacle around the vehicle;
a vehicle candidate region extraction unit that extracts a vehicle candidate region including the lighted headlights based on the captured image of the rear side of the vehicle captured by the imaging unit;
a road surface area detection unit that detects a road surface area from which the obstacle is removed from the captured image, based on a detection result obtained by the distance measurement unit;
a road surface luminance comparison unit that sets, for the vehicle candidate area existing in the road surface area, a luminance comparison area in front of that vehicle candidate area, and compares luminance obtained from the luminance comparison area;
a travel area estimation unit that estimates a travel area in which another vehicle behind the vehicle travels, based on a comparison result obtained by the road surface brightness comparison unit; and
a vehicle position determination unit that determines a position of the other vehicle based on the travel area and the vehicle candidate area used in the estimation of the travel area.
2. The vehicle position detecting apparatus according to claim 1,
further comprising a notification unit that generates a notification signal when the vehicle position determination unit determines that the other vehicle is located within a preset warning target area or on the same lane as the lane in which the warning target area is set.
3. The vehicle position detecting apparatus according to claim 2,
the notification signal is a video signal, and the notification unit includes an image generation unit that generates, as the video signal, an image in which the position of the other vehicle specified by the vehicle position specification unit and an alarm image are superimposed on the captured image captured by the imaging unit.
4. The vehicle position detecting apparatus according to claim 2 or 3,
the notification signal is a sound signal, and the notification unit includes a sound generation unit that generates an alarm sound as the sound signal.
5. The vehicle position detection apparatus according to any one of claims 2 to 4,
the notification signal is vibration, and the notification unit includes a vibration control unit configured to vibrate a steering wheel of the vehicle or a seat of a driver's seat.
6. The vehicle position detection apparatus according to any one of claims 2 to 5,
the warning target area is set on a lane adjacent to a lane in which the vehicle is traveling.
7. The vehicle position detection apparatus according to any one of claims 2 to 6,
the vehicle approach determination unit may be configured to determine whether or not the other vehicle is approaching based on a time-series change in the position of the other vehicle determined by the vehicle position determination unit, and the notification unit may generate a notification signal indicating that the other vehicle is approaching when the vehicle approach determination unit determines that the other vehicle is approaching.
8. The vehicle position detection apparatus according to any one of claims 2 to 7,
the road surface brightness comparison unit compares a brightness value of a brightness sample region preset in the road surface region with a brightness value of a brightness comparison region set in the road surface region and located further forward than the vehicle candidate region.
9. The vehicle position detection apparatus according to any one of claims 2 to 8,
the vehicle candidate region extraction unit extracts a vehicle candidate region based on information obtained by discriminating a light source and a type thereof from a luminance value and a color of the captured image captured by the imaging unit.
10. The vehicle position detection apparatus according to any one of claims 1 to 9,
the travel area estimation unit estimates a travel lane on which another vehicle behind the vehicle travels, based on a lane area estimated from the captured image captured by the imaging unit and a comparison result obtained by the road surface brightness comparison unit.
11. A vehicle position detection method characterized by comprising:
a step of photographing the periphery of the vehicle;
a step of detecting an obstacle in the periphery of the vehicle;
a step of extracting a vehicle candidate region including the lighted headlights based on the captured image of the rear side of the vehicle;
a step of detecting, from the captured image, a road surface area from which the detected obstacle is removed;
a step of setting a luminance comparison area in front of the vehicle candidate area for the vehicle candidate area existing in the road surface area, and comparing luminance obtained from the luminance comparison area;
a step of estimating a travel area in which another vehicle on the rear side of the vehicle travels, based on a result of the luminance comparison in the luminance comparison area; and
a step of determining the position of the other vehicle based on the travel area and the vehicle candidate area used in the estimation of the travel area.
12. The vehicle position detecting method according to claim 11, characterized by further comprising:
a step of setting a warning target area on a lane adjacent to a lane in which the vehicle is traveling; and
a step of generating a notification signal when the other vehicle whose position has been determined is located within the warning target area or on the same lane as the lane in which the warning target area is set.
CN202111201980.8A 2020-10-20 2021-10-15 Vehicle position detection device and vehicle position detection method Active CN114379562B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020175731A JP7034231B1 (en) 2020-10-20 2020-10-20 Vehicle position detection device and vehicle position detection method
JP2020-175731 2020-10-20

Publications (2)

Publication Number Publication Date
CN114379562A true CN114379562A (en) 2022-04-22
CN114379562B CN114379562B (en) 2024-05-10

Family

ID=80929471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111201980.8A Active CN114379562B (en) 2020-10-20 2021-10-15 Vehicle position detection device and vehicle position detection method

Country Status (3)

Country Link
JP (1) JP7034231B1 (en)
CN (1) CN114379562B (en)
DE (1) DE102021209974A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007193702A (en) * 2006-01-20 2007-08-02 Sumitomo Electric Ind Ltd Image processing device and image processing method
CN102902952A (en) * 2011-07-28 2013-01-30 株式会社日立制作所 Onboard environment recognition system
JP2013140515A (en) * 2012-01-05 2013-07-18 Toyota Central R&D Labs Inc Solid object detection device and program
CN103348394A (en) * 2011-04-13 2013-10-09 日产自动车株式会社 Driving assistance device and adjacent vehicle detection method therefor
JP2014013453A (en) * 2012-07-03 2014-01-23 Clarion Co Ltd In-vehicle surrounding environment recognition apparatus
CN104115204A (en) * 2012-03-01 2014-10-22 日产自动车株式会社 Three-dimensional object detection device
JP2015055968A (en) * 2013-09-11 2015-03-23 アルパイン株式会社 Vehicle position detection device and blind spot warning system
FR3046393A1 (en) * 2016-01-05 2017-07-07 Valeo Schalter & Sensoren Gmbh METHOD IMPLEMENTED IN A MOTOR VEHICLE AND ASSOCIATED MOTOR VEHICLE

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE112018007484T5 (en) * 2018-04-16 2021-02-25 Mitsubishi Electric Corporation Obstacle detection device, automatic braking device using an obstacle detection device, obstacle detection method, and automatic braking method using an obstacle detection method

Also Published As

Publication number Publication date
JP7034231B1 (en) 2022-03-11
JP2022067169A (en) 2022-05-06
CN114379562B (en) 2024-05-10
DE102021209974A1 (en) 2022-04-21

Similar Documents

Publication Publication Date Title
US9836657B2 (en) System and method for periodic lane marker identification and tracking
JP3349060B2 (en) Outside monitoring device
US10380439B2 (en) Vehicle sensing system for detecting turn signal indicators
US6744380B2 (en) Apparatus for monitoring area adjacent to vehicle
KR20150051735A (en) Parking Guide System and the Method
JP2002319091A (en) Device for recognizing following vehicle
US20130083971A1 (en) Front vehicle detecting method and front vehicle detecting apparatus
JP3747599B2 (en) Obstacle detection device for vehicle
JPH08320997A (en) Vehicle travel lane recognition device, and obstacle detector and road deviation reporting device
JP2017529517A (en) Method of tracking a target vehicle approaching a car by a car camera system, a camera system, and a car
JP2010146494A (en) Vehicle surroundings monitoring device
JP2008027309A (en) Collision determination system and collision determination method
US8160300B2 (en) Pedestrian detecting apparatus
KR20060021922A (en) Two camera based obstacle detection method and device
JPH1139597A (en) Collision preventing device for vehicle
EP2347931A1 (en) Headlamp controller
JP2001195698A (en) Device for detecting pedestrian
JP2017129543A (en) Stereo camera device and vehicle
US7057502B2 (en) Vehicle drive assist apparatus
CN114379562B (en) Vehicle position detection device and vehicle position detection method
JPH04193641A (en) Obstacle detection device for vehicle
JPH09272414A (en) Vehicle control device
JP4140118B2 (en) Vehicle obstacle detection device
CN112542060B (en) Rear side alarm device for vehicle
KR20100034281A (en) A system for detecting a unevenness on the road using a vehicle front camera image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant