WO2019176418A1 - Vehicular lamp, vehicle detection method, and vehicle detection device - Google Patents


Info

Publication number
WO2019176418A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
light spot
unit
image
lamp
Prior art date
Application number
PCT/JP2019/004994
Other languages
French (fr)
Japanese (ja)
Inventor
光治 眞野
亮太 小倉
高範 難波
Original Assignee
株式会社小糸製作所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2018048050A external-priority patent/JP2019156277A/en
Priority claimed from JP2018048049A external-priority patent/JP2019156276A/en
Application filed by 株式会社小糸製作所 filed Critical 株式会社小糸製作所
Publication of WO2019176418A1 publication Critical patent/WO2019176418A1/en

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60Q ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q1/00 Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
    • B60Q1/0017 Devices integrating an element dedicated to another function
    • B60Q1/0023 Devices integrating an element dedicated to another function the element being a sensor, e.g. distance sensor, camera
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60Q ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q1/00 Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
    • B60Q1/02 Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments
    • B60Q1/04 Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments the devices being headlights
    • B60Q1/14 Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments the devices being headlights having dimming means
    • B60Q1/1415 Dimming circuits
    • B60Q1/1423 Automatic dimming circuits, i.e. switching between high beam and low beam due to change of ambient light or light level in road traffic
    • B60Q1/143 Automatic dimming circuits, i.e. switching between high beam and low beam due to change of ambient light or light level in road traffic combined with another condition, e.g. using vehicle recognition from camera images or activation of wipers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/141 Control of illumination
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60Q ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q2300/00 Indexing codes for automatically adjustable headlamps or automatically dimmable headlamps
    • B60Q2300/05 Special features for controlling or switching of the light beam
    • B60Q2300/056 Special anti-blinding beams, e.g. a standard beam is chopped or moved in order not to blind

Definitions

  • the present disclosure relates to a vehicle lamp suitable for application to a headlamp of an automobile, and more particularly to a vehicle lamp provided with imaging means such as a camera.
  • the present disclosure also relates to a vehicle detection method and a vehicle detection device that are mounted on a vehicle such as an automobile and detect other vehicles such as an oncoming vehicle and a preceding vehicle.
  • ADB: Adaptive Driving Beam control
  • AHB: Automatic High Beam
  • A headlamp capable of light distribution control (hereinafter also referred to as ADB control) is known.
  • A headlamp in which a camera is integrated has been proposed as an imaging means for detecting an oncoming vehicle or a preceding vehicle.
  • In such a headlamp, a camera unit is housed together with a lamp unit in a lamp housing, and imaging is performed by the camera unit.
  • With this arrangement, the imaging optical axis of the imaging means and the light irradiation optical axis of the lamp unit can be brought closer together than when the imaging means is disposed at the front window of the automobile, so that light distribution control such as ADB control can be performed with higher accuracy. In other words, bringing the central axis of the image that captures the oncoming and preceding vehicles close to the central axis of the light distribution reduces the light distribution error caused by the parallax between the two axes.
  • In Patent Document 2, a light spot in an image obtained by an imaging means is detected, and other vehicles are detected by discriminating their headlamps and taillights from the attributes of the detected light spot, for example its color and behavior (moving state). Patent Document 2 also proposes a technique for shortening the detection time by setting, based on the detected attributes of the light spot, a predetermined detection area in which other vehicles are to be detected, and performing the determination only within that area.
  • Aiming adjustment, which adjusts the optical axis direction of the lamp unit, is performed after the headlamp is assembled to the car body of the automobile.
  • It is conceivable to perform this aiming adjustment while confirming the light distribution pattern in the image picked up by the imaging means.
  • Vehicle detection is performed by analyzing an image captured by the imaging unit. To improve the accuracy of detection by this image analysis, the use of AI (Artificial Intelligence) having a learning function is considered.
  • An object of the present disclosure is to provide a vehicle lamp capable of realizing suitable light distribution control.
  • The technique of Patent Document 2 is effective in reducing the detection time by limiting the detection area.
  • However, other vehicles must be detected over a wide area, and it is therefore difficult for the technology of Patent Document 2 to meet this requirement.
  • Moreover, as illuminants such as road signs and advertisements increase, the number of light spots in the captured image tends to increase. Discriminating other vehicles from the behavior of all of these light spots therefore requires more detection steps and increases the detection time.
  • An object of the present disclosure is to provide a vehicle detection method and a vehicle detection device that can narrow down light spots in a captured image and detect a vehicle accurately and quickly.
  • The present disclosure relates to a vehicular lamp having a lamp unit that performs light irradiation, an imaging unit that captures at least the light irradiation region of the lamp unit, and a control unit, connected to a signal bus disposed in the vehicle, for controlling the lamp unit and the imaging unit. The lamp is configured so that aiming adjustment of the lamp unit is performed based on a signal obtained by signal-processing an imaging signal captured by the imaging unit, and the control unit outputs the signal-processed signal to the signal bus when it receives a predetermined command signal.
  • The signal-processed signal is preferably an image signal based on the imaging signal, or an aiming adjustment signal.
  • Preferably, the camera and the lamp unit are built into the lamp housing, and the camera is detachably supported with respect to the lamp housing.
  • Preferably, the camera is disposed on one selected vehicle lamp among the vehicle lamps.
  • Preferably, the control unit includes a detection unit that detects at least another vehicle based on an imaging signal captured by the camera, and the detection unit is machine-learned based on automatically generated traveling simulation data. In the traveling simulation data, it is preferable that both traveling image data and teacher data are automatically generated.
  • The vehicle detection method of the present disclosure is mounted on a vehicle, images the surroundings of the vehicle, and detects other vehicles from the behavior of light spots captured in the image obtained by the imaging. The method includes identifying, among the plurality of light spots present in the image, target light spots that are candidates for vehicle detection and other non-target light spots, detecting the behavior of the target light spots, and detecting a light spot having a predetermined behavior as a vehicle.
  • A vehicle detection device of the present disclosure includes an imaging unit that is mounted on a vehicle and images the surroundings of the vehicle, a light spot identification unit that identifies, among the plurality of light spots present in the captured image, target light spots that are vehicle detection candidates and other non-target light spots, and a detection unit that detects a vehicle from the behavior of the identified target light spots in the image. Preferably, the light spot identification unit recognizes the shapes of the plurality of light spots and identifies the target light spots and the non-target light spots based on that recognition.
  • According to the present disclosure, the aiming adjuster can acquire, through the signal bus of the vehicle, a signal obtained by signal-processing the imaging signal picked up by the imaging unit, for example an image signal or an aiming adjustment signal obtained by calculation. High-precision aiming adjustment can therefore be performed easily, and suitable light distribution control can be realized.
  • Also according to the present disclosure, the shapes of the light spots in the image captured by the imaging unit are identified, light spots that are likely to be vehicles are identified as target light spots from their shapes, and vehicle detection processing is executed only for the identified target light spots. As a result, even when there are many light spots in the image, no detection processing is needed for light spots that are unlikely to belong to a vehicle, so the number of processing steps for detecting a vehicle is reduced and rapid vehicle detection becomes possible.
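  • The two-stage narrowing described above (a cheap shape screen first, then a behavior check only on the surviving target light spots) can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the `LightSpot` fields, the circularity/aspect-ratio thresholds, and the simple motion criterion are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class LightSpot:
    positions: list        # (x, y) centroid of the spot in each successive frame
    aspect_ratio: float    # blob width / height
    circularity: float     # 4*pi*area / perimeter**2; close to 1.0 for round spots

def is_target_spot(spot):
    """Shape screen: headlamps and taillights are roughly round and compact,
    while signs and billboards tend to be elongated or irregular (assumed thresholds)."""
    return spot.circularity > 0.7 and 0.5 < spot.aspect_ratio < 2.0

def has_vehicle_behavior(spot, min_motion_px=3.0):
    """Behavior check: a vehicle's light spot drifts across the image over
    successive frames, unlike a (nearly) fixed illuminant."""
    if len(spot.positions) < 2:
        return False
    (x0, y0), (x1, y1) = spot.positions[0], spot.positions[-1]
    return ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 >= min_motion_px

def detect_vehicles(spots):
    targets = [s for s in spots if is_target_spot(s)]        # cheap shape filter first
    return [s for s in targets if has_vehicle_behavior(s)]   # track survivors only
```

Because non-target spots never reach the behavior-tracking stage, the per-frame processing cost grows with the number of plausible vehicle lights rather than with the total number of light spots in the scene.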
  • The drawings include: a conceptual diagram of an automobile to which a headlamp according to the present disclosure is applied; a schematic horizontal sectional view of the right headlamp; an exploded perspective view of the camera; a block diagram of the lamp ECU; and a flow diagram of DL (deep learning).
  • Also included are: a conceptual diagram showing the second aiming adjustment mode; a schematic diagram showing the CAN connection state of the left and right headlamps; a block diagram of the lamp ECUs of those headlamps; and schematic diagrams showing the images captured by the cameras of the left and right headlamps and their light distributions.
  • FIG. 3 is a schematic horizontal sectional view of a headlamp to which the present disclosure is applied.
  • FIG. 1 is a conceptual diagram of an embodiment in which the present disclosure is applied to a headlamp HL of an automobile.
  • the left and right headlamps L-HL and R-HL of the automobile CAR are each provided with a lamp unit 1 capable of ADB control in a lamp housing 3.
  • The lamp unit 1 includes, for example, a light source 11 composed of a plurality of LEDs (light emitting diodes). All of the LEDs, or selected ones, are caused to emit light, and the emitted light is projected onto the area in front of the automobile by a projection lens 12, so that light can be irradiated in a desired light distribution pattern P.
  • The light emitting region of each of the plurality of LEDs 11 is set to illuminate a predetermined segmented region Ap within the area in front of the automobile, and only the segmented regions Ap corresponding to the LEDs 11 that are selected and caused to emit light are illuminated. Therefore, when all the LEDs 11 emit light, the entire area of the light distribution pattern P is illuminated, and by selectively turning individual LEDs on or off, each segmented region can be illuminated or dimmed independently.
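  • As a rough illustration of this segmented control, the mapping from a detected light-spot position to a segment index, and the resulting per-LED on/off pattern, might look like the following sketch. The segment count, image width, and the assumption that the camera and lamp axes are aligned by aiming adjustment are illustrative, not values from the disclosure.

```python
NUM_SEGMENTS = 12          # hypothetical LED count; one LED per segmented region Ap

def segment_of(x_px, image_width=1280):
    """Map a detected light-spot x coordinate to its segment index, assuming
    the camera and lamp optical axes are aligned by aiming adjustment."""
    return min(int(x_px * NUM_SEGMENTS / image_width), NUM_SEGMENTS - 1)

def adb_pattern(vehicle_segments):
    """Return per-LED on/off states: illuminate every segment except those
    occupied by a detected oncoming or preceding vehicle."""
    return [i not in vehicle_segments for i in range(NUM_SEGMENTS)]
```

For example, a vehicle detected near the image centre would switch off only the central LED, leaving the rest of the light distribution pattern P at full high beam.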
  • a camera 2 that captures a moving image is provided as an imaging means in the lamp housing 3 of each of the headlamps L-HL and R-HL.
  • the camera 2 images at least a front area of the automobile irradiated with light by the lamp unit 1 and outputs an imaging signal.
  • the left and right headlamps L-HL and R-HL are respectively assembled to the left and right of the front part of the car body of the car CAR in the car assembly process.
  • The optical axis adjustment, that is, aiming adjustment, of the lamp unit 1 and the camera 2 is then executed.
  • the irradiation optical axis of the lamp unit 1 and the imaging optical axis of the camera 2 are set in a predetermined direction, so that highly accurate ADB control can be realized.
  • FIG. 2 is a schematic horizontal sectional view of the right headlamp R-HL among the headlamps HL.
  • the lamp housing 3 includes a lamp body 31 and a translucent front cover 32.
  • a base plate 5 is supported on the lamp body 31 by an aiming mechanism 4.
  • the lamp unit 1 and the camera 2 are attached to the base plate 5.
  • the irradiation optical axis of the lamp unit 1 and the imaging optical axis of the camera 2 are attached to the base plate 5 in a state where they are directed in the same direction.
  • By operating the aiming screws 41, the base plate 5 is tilted up and down and left and right, whereby aiming adjustment of the irradiation optical axis of the lamp unit 1 and the imaging optical axis of the camera 2 is performed. Since an aiming mechanism 4 having such aiming screws 41 is already known, a detailed description is omitted; here, the aiming adjustment is performed manually.
  • the camera 2 includes a camera body 21 including an imaging lens 211 and an imaging element 212, and a camera holder 22 that holds the camera body 21.
  • the camera holder 22 is fixed to the base plate 5 by appropriate means.
  • the camera body 21 is detachably attached to the camera holder 22.
  • The camera body 21 can be attached and detached by sliding it up and down with respect to the camera holder 22 from the outside of the lamp housing 3, through an opening (not shown) provided in the upper surface of the lamp body 31.
  • The camera body 21 and the camera holder 22 are provided with electrodes 23 and 24, respectively; when the camera body 21 is mounted on the camera holder 22, the two are electrically connected to each other by these electrodes.
  • FIG. 4 is a block diagram of the lamp ECU 6.
  • the lamp ECU 6 includes a main control unit 61.
  • An input / output unit 62, a signal processing unit 63, an image analysis unit 64, and a lighting drive unit 65 are connected to the main control unit 61.
  • The main control unit 61 controls the operations of the input/output unit 62, the signal processing unit 63, the image analysis unit 64, and the lighting drive unit 65.
  • the lamp unit 1 and the camera 2 described above are connected to the input / output unit 62.
  • the camera body 21 is connected to the input / output unit 62 via the camera holder 22.
  • The input/output unit 62 is connected to a signal bus extending through the car CAR, for example a CAN (Controller Area Network) 100.
  • the input / output unit 62 is connected to various other ECUs of the automobile not shown in the drawing via the CAN 100.
  • the signal processing unit 63 processes the image pickup signal of the camera 2 input from the input / output unit 62 to generate a moving image or a still image signal.
  • The image analysis unit 64 analyzes the image obtained from the generated image signal and detects objects in the captured image, in particular other vehicles such as an oncoming vehicle and a preceding vehicle. The image analysis unit 64 thus serves as the other-vehicle detection means.
  • The lighting drive unit 65 recognizes the objects detected by the image analysis unit 64, sets an appropriate light distribution pattern that does not dazzle the oncoming or preceding vehicle, and generates an ADB control signal for producing that light distribution pattern. Based on this ADB control signal, the light emission of the light source of the lamp unit 1, that is, of the plurality of LEDs 11, is controlled.
  • In this way, the lamp ECU 6 detects other vehicles such as oncoming and preceding vehicles based on the image captured by the camera 2, and controls the lighting of the lamp unit 1 so as to obtain a light distribution pattern that does not dazzle the detected vehicles while brightly illuminating the other regions; appropriate ADB control is thereby executed.
  • The image analysis unit 64, that is, the other-vehicle detection means, is configured to detect other vehicles by machine learning based on traveling simulation data generated by DL (Deep Learning) using AI.
  • To improve the detection performance, it is desirable to perform a large amount of learning continuously.
  • When executing the DL, the image analysis unit 64 repeats, as shown in the DL flow diagram: automatic generation of a driving environment (S1), automatic generation and placement of road objects (S2), automatic generation of road travel (S3), acquisition of correct data by traveling of the own vehicle (S4), and learning by a learning device (S5). This increases the learning amount and improves the detection performance.
  • an image of traveling on a road in a virtual space is created by a traveling simulator.
  • The presence or absence of a vehicle and its position on the image as viewed from the camera of the own vehicle are obtained by 3D-to-2D conversion from the position, orientation, and distance of each vehicle placed at arbitrary coordinates in the virtual space, together with the direction and position of the own vehicle's camera and the camera parameters (F value, angle of view, exposure time, etc.), and are recorded as correct data.
  • On top of that, the own vehicle travels on all roads, in both directions.
  • The own vehicle and the other vehicles go around the course at the set road speed limit ±20 km/h, and are set so as to observe traffic rules such as traffic lights and temporary stops; the vehicles are also kept from colliding with one another. The learning device then takes, as teacher data, the images and the correct answers obtained by the traveling of the own vehicle.
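  • The S1 to S5 loop above can be mimicked with a toy traveling simulator that emits (environment, correct-data) pairs. Everything here is a schematic assumption: the scene contents, the pinhole camera constants, and the one-unit-per-frame scene advance are placeholders standing in for the renderer and 3D-to-2D conversion of the actual simulator.

```python
import random

def generate_environment(rng):                       # S1: driving environment
    return {"weather": rng.choice(["clear", "rain", "fog"]),
            "time": rng.choice(["day", "dusk", "night"])}

def place_objects(rng):                              # S2: place other vehicles
    return {"vehicles": [{"x": rng.uniform(-50.0, 50.0), "z": rng.uniform(5.0, 200.0)}
                         for _ in range(rng.randint(0, 8))]}

def project_to_image(vehicle, f=1000.0, cx=640.0):   # S4: 3D -> 2D pinhole projection
    return cx + f * vehicle["x"] / vehicle["z"]      # x pixel of the vehicle light spot

def generate_teacher_data(seed, frames=100):         # S3 + S4: drive and record labels
    rng = random.Random(seed)
    env = generate_environment(rng)
    scene = place_objects(rng)
    data = []
    for _ in range(frames):
        for v in scene["vehicles"]:
            v["z"] = max(1.0, v["z"] - 1.0)          # scene advances as the car drives
        labels = [project_to_image(v) for v in scene["vehicles"]]
        data.append((env, labels))                   # rendered image omitted in this toy
    return data
```

A learning step (S5) would then consume each (image, labels) pair as teacher data; varying the seed regenerates the environment and objects, so the loop can be repeated indefinitely without manual annotation.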
  • A road network is generated in the virtual space from an actual road map image (a 2D map or a satellite photograph).
  • Roads are generated by distinguishing road areas from non-road areas by image processing of the maps and satellite photographs.
  • The number of lanes may be set appropriately from the width of the road, or lanes may be forcibly assigned according to the test case.
  • Objects other than roads, such as buildings, parking lots, signboards, and mountainous terrain, may be placed appropriately in the non-road areas. Pedestrian crossings and traffic lights are placed where roads intersect. Other vehicles may be arranged according to the generated buildings and road lengths, and may be set to keep moving from one building to another.
  • The detection performance of the DL is improved by repeating the learning using the traveling simulator. Therefore, in this embodiment, the image analysis unit of the lamp ECU, having been machine-learned on the simulation data obtained through such learning, detects other vehicles such as oncoming and preceding vehicles with improved accuracy, and highly accurate ADB control becomes possible.
  • The configuration of the left headlamp L-HL is substantially the same, except that the arrangement of the lamp unit 1 and the camera 2 in the lamp housing 3 is the mirror image of the configuration of FIG. 2. Although not illustrated, the left headlamp L-HL also has a lamp ECU built into its lamp housing and connected to the CAN 100.
  • After the left and right headlamps L-HL and R-HL are mounted on the body of the automobile CAR, aiming adjustment is performed so that the irradiation optical axis of the lamp unit 1 and the imaging optical axis of the camera 2 are directed in predetermined directions. In this aiming adjustment, as shown in FIG. 1, the light of the lamp unit 1 is irradiated onto a screen Sc disposed at a predetermined position in front of the car CAR, and the light distribution pattern P produced by the light irradiation is captured by the camera 2.
  • The irradiation optical axis of the lamp unit 1 is then detected from the captured image of the light distribution pattern, and aiming adjustment is performed by operating the aiming screws 41 so that the irradiation optical axis is directed toward a predetermined position, for example the center O, which is the intersection of the horizontal line H and the vertical line V.
  • Since the camera 2 is mounted on the same base plate 5, the aiming adjustment of the lamp unit 1 simultaneously performs the aiming adjustment of the imaging optical axis of the camera 2.
  • FIG. 6 is a schematic diagram for explaining the first adjustment mode.
  • an external connector 7 is provided on a part of the lamp housing 3 and is directly connected to the camera 2.
  • the external aiming adjuster 8A is connected to the external connector 7.
  • The first aiming adjuster 8A includes a signal processing unit 81 similar to the signal processing unit 63 provided in the lamp ECU 6, a monitor 82 for displaying the image signal generated by the signal processing unit 81, and a monitor driving unit 83 for displaying an image corresponding to the image signal on the monitor 82.
  • a reference is clearly specified in advance on the screen Sc shown in FIG. 1, and the posture of the automobile CAR is adjusted in accordance with this reference.
  • The lamp unit 1 is turned on to irradiate the screen Sc with light, and the irradiated light distribution pattern P is then imaged by the camera 2.
  • The first aiming adjuster 8A captures the resulting imaging signal through the external connector 7, and its signal processing unit 81 generates an image signal.
  • The monitor driving unit 83 displays an image, that is, the captured light distribution pattern, on the monitor 82 based on the generated image signal.
  • The operator, while viewing the monitor 82, confirms how far the displayed light distribution pattern deviates from the predetermined position, for example the center, in each of the three directions of yaw, roll, and pitch, and manually adjusts the aiming screws 41 to eliminate this deviation. Aiming adjustment is thereby executed.
  • In this first adjustment mode, the lamp housing 3 of the headlamp HL must be provided with the external connector 7 for connection to the first aiming adjuster 8A in order to take out the image signal captured by the camera 2. This is an obstacle to miniaturization of the headlamp HL and increases its cost. Moreover, the external connector 7 must be connected and disconnected, which takes time and degrades work efficiency. Further, the first aiming adjuster 8A needs to incorporate the signal processing unit 81, which makes it expensive.
  • the lamp housing 3 of the headlamp HL is not provided with an external connector.
  • the second aiming adjuster 8B is not provided with a signal processing unit, and includes a monitor 82 and a monitor driving unit 83.
  • the second aiming adjuster 8B can be connected to the CAN 100 connected to the lamp ECU 6.
  • the connection to the CAN 100 can be easily realized by connecting to an existing CAN connector provided in the automobile CAR.
  • the lamp ECU 6 incorporates a predetermined program in the main control unit 61 in advance.
  • When a predetermined command signal output from the second aiming adjuster 8B through the CAN 100 is input, the lamp ECU 6 outputs (transmits) the image signal generated by the signal processing unit 63 from the input/output unit 62 to the CAN 100.
  • As this predetermined command signal, for example, a signal output when the power of the second aiming adjuster 8B is turned on, or a signal output by a prior operation on the second aiming adjuster 8B, can be adopted.
  • The second aiming adjuster 8B receives the image signal output from the lamp ECU 6 onto the CAN 100, and causes the monitor driving unit 83 to display it on the monitor 82.
  • the operator who performs aiming adjustment can perform aiming adjustment while visually recognizing the image picked up by the camera 2 on the monitor 82 as in the first adjustment mode.
  • In the second adjustment mode, it is not necessary to provide an external connector on the lamp housing 3, and no connection or disconnection of an external connector is required. For this reason, according to the second adjustment mode, the headlamp HL can be reduced in size and cost, and quick and easy adjustment can be realized. Furthermore, since no signal processing unit needs to be incorporated in the second aiming adjuster 8B, the second aiming adjuster 8B can be configured at low cost.
  • In this embodiment, the lamp ECU 6 is configured to realize the second adjustment mode, so that quick and easy aiming adjustment can be performed with an aiming adjuster of simple configuration such as the second aiming adjuster 8B. The irradiation optical axis of the lamp unit 1 and the imaging optical axis of the camera 2 in the headlamp HL can thereby be adjusted accurately, and suitable ADB control can be realized.
  • When the aiming screws 41 are driven by motors, the aiming adjuster may include a calculation unit that calculates the amount of deviation of the light distribution pattern from the predetermined position and a motor drive unit that feedback-controls the motors so that the deviation amount becomes zero. Aiming adjustment can thereby be performed automatically. In this case, since the aiming adjustment can be performed using the image signal alone, the monitor and the monitor driving unit may be omitted.
  • Instead of outputting the entire image, only the image signal of the area necessary for aiming adjustment, for example a predetermined area including the center of the imaging region captured by the camera 2, may be output to the CAN 100. In this way, the amount of data output can be reduced and the aiming adjustment can be sped up. Alternatively, when the aiming deviation is predicted to be small, the luminance of the captured light distribution pattern may be measured and the image signal of the high-luminance region, which is highly likely to include the center, may be output to the CAN 100.
  • Further, the main control unit 61 of the lamp ECU 6 may include a calculation unit that calculates the deviation amount used in the second aiming adjuster, or a calculation unit that calculates the rotation amount of the aiming-screw motors based on that deviation amount.
  • With the former aiming adjustment signal, only the calculated deviation amount needs to be output from the lamp ECU 6 to the CAN 100, and the second aiming adjuster only needs to include a motor drive unit.
  • With the latter aiming adjustment signal, only the calculated motor rotation amount needs to be output from the lamp ECU 6, so the configuration of the motor drive unit of the second aiming adjuster can be simplified. Aiming adjustment can thereby be further simplified and sped up.
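  • As a concrete sketch of these two aiming-adjustment-signal variants, the conversion from the measured pixel deviation to an angular deviation (the former signal) and then to signed motor steps (the latter signal) could look like the following. The calibration constants are assumptions for illustration; the disclosure does not specify them.

```python
PIXELS_PER_DEGREE = 20.0   # assumed camera calibration: image pixels per degree of aim
STEPS_PER_DEGREE = 50.0    # assumed gearing of the aiming-screw motors

def deviation_deg(pattern_center, target_center):
    """Deviation of the imaged light-distribution centre from the target point O
    (intersection of lines H and V), returned as (yaw, pitch) in degrees."""
    dx = (pattern_center[0] - target_center[0]) / PIXELS_PER_DEGREE
    dy = (pattern_center[1] - target_center[1]) / PIXELS_PER_DEGREE
    return dx, dy

def motor_steps(dev):
    """Convert the angular deviation into signed motor steps that drive it to
    zero, so the adjuster needs nothing more than a bare motor drive unit."""
    return round(-dev[0] * STEPS_PER_DEGREE), round(-dev[1] * STEPS_PER_DEGREE)
```

Sending `deviation_deg` output over the bus corresponds to the former variant; sending `motor_steps` output corresponds to the latter, where the adjuster-side electronics are simplest.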
  • Since the lamp ECUs 6 of the left and right headlamps L-HL and R-HL are both connected to the CAN 100, they can send and receive signals to and from each other through it. That is, an image signal output to the CAN 100 from one headlamp HL can be captured by the lamp ECU 6 of the other headlamp HL. For example, when the lamp ECU 6 of one headlamp HL outputs a command signal to the CAN 100 and the lamp ECU 6 of the other headlamp HL receives it, the receiving lamp ECU 6 outputs the image signal it has generated to the CAN 100, and this image signal can in turn be received and captured by the lamp ECU 6 of the first headlamp HL.
  • In this case, the lamp ECUs 6 (6L, 6R) of the left and right headlamps L-HL and R-HL are each provided with an abnormality detection unit 66 for detecting an abnormality of the camera.
  • the main control unit 61 is configured to output a predetermined abnormality command signal when the abnormality detection unit 66 detects an abnormality of the camera 2.
  • For example, when an abnormality occurs in the camera 2R of the right headlamp R-HL, the right headlamp R-HL can no longer perform ADB control of its lamp unit 1 based on the image signal captured by its own camera 2R. An abnormality of the camera here means a state in which a normal image cannot be acquired, for example when imaging becomes impossible or the field of view deteriorates during imaging.
  • In that case, an abnormality command signal is output from the main control unit 61 of the lamp ECU 6R to the CAN 100.
  • the lamp ECU 6L of the left headlamp L-HL performs ADB control based on the image signal captured by the camera 2L on its own side, but when the abnormality command signal output from the lamp ECU 6R of the right headlamp R-HL is input through the CAN 100, the lamp ECU 6L outputs the image signal captured by the camera 2L on its own side to the CAN 100. Since this image signal is input through the CAN 100 to the lamp ECU 6R of the right headlamp R-HL, which is in the abnormal state, the right headlamp R-HL executes ADB control based on the input image signal captured by the camera 2L of the left headlamp L-HL.
  • even if one camera fails, ADB control can be secured in both headlamps L-HL and R-HL based on the image captured by the other, normal camera 2R or 2L.
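The failover behavior between the two lamp ECUs can be summarized schematically. This is a minimal Python sketch; the function name, return keys, and message names are illustrative assumptions, not terms from the disclosure:

```python
def adb_image_source(own_camera_ok, peer_abnormal):
    """Which image signal a lamp ECU uses for ADB control and what it
    publishes on the CAN bus under the camera-failure scheme:
    a failed ECU broadcasts an abnormality command; the healthy peer
    answers by publishing its own image signal for the failed side."""
    if own_camera_ok and not peer_abnormal:
        return {"use": "own_image", "publish": None}
    if own_camera_ok and peer_abnormal:
        # help the peer by putting our image on the bus
        return {"use": "own_image", "publish": "own_image"}
    # own camera failed: request the peer's image over the bus
    return {"use": "peer_image", "publish": "abnormality_command"}
```

Each ECU runs the same logic, so either headlamp can play the healthy or failed role symmetrically.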
  • in this case, the parallax angle between the irradiation optical axis of the lamp unit and the imaging optical axis of the normal camera becomes large, so it is preferable to increase the corresponding margin so as to reliably prevent dazzling of oncoming vehicles and preceding vehicles.
  • in the above embodiment, the left and right headlamps L-HL and R-HL are each provided with a camera, but a configuration in which the camera is provided only on one headlamp is also conceivable.
  • in this case, the parallax angle between the imaging optical axis of the camera and the irradiation optical axis of the lamp unit of the other headlamp, which is not equipped with the camera, becomes a problem.
  • if ADB control is performed in the other headlamp based on an image obtained from the camera of the one headlamp, a situation may occur in which another vehicle is dazzled.
  • FIGS. 10A to 10C show an example of left-hand traffic. In the driving situation of FIG. 10A, where the preceding vehicle CAR1 and the oncoming vehicle CAR2 exist, the image captured by the camera 2R of the right headlamp R-HL on the oncoming lane side is shown in FIG. 10B, and the image captured by the camera 2L of the left headlamp L-HL on the shoulder side is shown in FIG. 10C.
  • the camera 2L of the left headlamp L-HL on the shoulder side may not be able to reliably image the oncoming vehicle CAR2 existing in the vicinity of the host vehicle, making it difficult to detect.
  • therefore, a camera is installed in the headlamp on the oncoming lane side, which can reliably image and detect an oncoming vehicle near the host vehicle in the oncoming lane (for example, the right headlamp in a left-hand traffic region such as Japan), and ADB control is executed in each of the left and right headlamps based on the image captured by that camera.
  • the camera 2 includes a camera body 21 and a camera holder 22.
  • the camera body 21 is supported on the base plate 5 via the camera holder 22 and is electrically connected to the lamp ECU 6. Accordingly, when the left and right headlamps L-HL and R-HL are configured in this manner, a single camera body 21 can be attached to or removed from either of the left and right headlamps L-HL and R-HL by means of the holder 22.
  • the ADB control can be executed by attaching the camera body 21 to the right headlamp R-HL.
  • ADB control can be performed by removing the camera body 21 attached to the right headlamp R-HL and attaching it to the left headlamp L-HL.
  • thus, even when the automobile travels across regions with different traffic systems, high-precision ADB control based on an image captured by the camera of the headlamp on the oncoming lane side can be realized without changing the headlamps to a different specification.
  • in the above embodiment the left and right headlamps are provided with lamp ECUs, respectively, but a single lamp ECU may instead be connected to the lamp units and cameras of the left and right headlamps by, for example, a LIN (Local Interconnect Network).
  • an external aiming adjuster is connected to LIN.
  • input / output of image signals between the left and right headlamps is performed via the one lamp ECU and LIN.
  • in the above embodiment one lamp unit is provided in the lamp housing, but one or more additional lamp units, for example a clearance lamp or a turn signal lamp, may be incorporated.
  • in the above embodiment ADB control is shown as the light distribution control of the present disclosure, but the present invention can be similarly applied to a headlamp that performs the above-described AHB control or other light distribution control.
  • FIG. 11 is a schematic horizontal sectional view of an embodiment in which the vehicle detection device according to the present disclosure is incorporated in a headlamp HL1 provided on the front left and right of a vehicle body 108 of an automobile.
  • FIG. 11 shows the internal structure of the right headlamp HL1, but the left headlamp has the same configuration.
  • the lamp housing 103 of the headlamp HL1 includes a lamp body 1031 and a translucent front cover 1032.
  • a lamp unit 101 and a camera 102 are housed inside the lamp housing 103.
  • the lamp unit 101 and the camera 102 are supported by a base plate 105.
  • the lamp unit 101 and the camera 102 can be adjusted in their optical axis directions by an aiming mechanism 104 including an aiming screw 1041 and the like.
  • the lamp unit 101 includes a light source 1011 in which a plurality of LEDs are arranged, and a projection lens 1012 that projects light emitted from the light source 1011 toward the front of the automobile.
  • the lamp unit 101 can illuminate the front area of the automobile with a desired light distribution by controlling the light emission of the light source 1011.
  • the camera 102 is the imaging means according to the present disclosure and, although not shown in detail in the drawing, is a digital camera including an imaging element, for example.
  • the camera 102 images a front area of the host vehicle, at least an area including an area irradiated with light from the lamp unit 101, and outputs an imaging signal.
  • objects captured by the camera 102 include road signs, signboards, and vehicles such as preceding vehicles and oncoming vehicles present in the area where the host vehicle is traveling.
  • a lamp ECU 106 is built in the lamp housing 103 and is connected to the lamp unit 101 and the camera 102, respectively.
  • the lamp ECU 106 can set a suitable light distribution based on an image obtained by imaging with the camera 102.
  • the lamp ECU 106 can perform light distribution control of the lamp unit 101, for example ADB control, based on this setting.
  • the lamp ECU 106 is connected to a vehicle ECU and the like, not shown in the figure, via a CAN (Controller Area Network) 10100, and sends and receives required signals to and from them.
  • the lamp ECU 106 includes a signal processing unit 1061 that processes the imaging signal output from the imaging element of the camera 102 to generate an image signal as image data, an image analysis unit 1062 that detects vehicles present in the image based on this image signal, and a lighting control unit 1063 that recognizes the detected vehicle and controls the light distribution of the lamp unit 101.
  • the signal processing unit 1061 and the image analysis unit 1062 perform the operation of detecting the vehicle and together constitute vehicle detection means in a broad sense; in particular, the image analysis unit 1062, which directly detects the vehicle, constitutes vehicle detection means in a narrow sense. Therefore, the lamp ECU 106 detects the vehicle with the image analysis unit 1062, that is, the vehicle detection means, based on the image signal obtained by the camera 102, and executes ADB control by having the lighting control unit 1063 control the light distribution of the lamp unit 101 so as to shield the area where the detected vehicle exists.
  • the image analysis unit 1062 serving as vehicle detection means includes a light spot extraction unit 1064 that extracts light spots present in the image, a light spot identification unit 1065 that identifies the shape of each extracted light spot, and a vehicle detection unit 1066 that detects the vehicle from the behavior of the identified light spots.
  • a vehicle detection operation in the image analysis unit 1062 as vehicle detection means will be described.
  • the image of the front region obtained by imaging with the camera 102 is in the state of FIG. 13 when the automobile in which ADB control of the headlamp HL1 is performed travels at night.
  • the preceding vehicle CAR11 and the oncoming vehicles CAR12 and CAR13, whose lamps are lit, are traveling on a road on which a white line (including a yellow line) LINE is drawn.
  • along the road, lighting facilities such as a road lamp B and a delineator D are installed, and a rectangular road sign S1 and a rhombic road sign S2 are provided.
  • signboards of various kinds are also included among the signs here.
  • an image signal (image data) is obtained by imaging the front area with the camera 102 and processing the imaging signal with the signal processing unit 1061; the image shown in FIG. 14 is then obtained from this image signal.
  • this image is composed of bright and dark pixels arranged in the X direction (row direction) and Y direction (column direction) corresponding to the image sensor of the camera 102, and in this image the imaged vehicles, lighting facilities, signs, and the like appear as light spots.
  • the dark area of the background is represented by a white background, and the light spot is drawn with dots.
  • the light spots LP1 to LP3 from the headlamps or tail lamps of the vehicles CAR11 to CAR13, and the light spots LP4 and LP5 from the lighting facilities B and D, are formed by the camera 102 directly imaging the light emitters, so each light spot has a shape close to a circle.
  • the light spots LP6 and LP7 formed by the signs S1 and S2 are obtained by the camera 102 imaging the high-luminance surfaces illuminated by light from the light emitters, so each light spot has the shape of the sign itself, that is, a rectangle (square or oblong) or a rhombus. The white line LINE on the road surface appears as a long, narrow light spot LP8.
  • first, the light spot extraction unit 1064 extracts light spots that are candidates for detection from this image. For example, by setting a luminance value of a predetermined level as a threshold value, the light spot extraction unit 1064 detects regions whose luminance is higher than the threshold value. As a result, low-luminance light spots caused by light merely reflected from objects, for example road surface areas illuminated by the headlamps of the host vehicle, are excluded, and light spots from light emitters such as signs and vehicle lamps are detected. In the case of FIG. 14, all of the light spots described above are detected.
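The luminance-threshold extraction can be sketched as follows. This is a minimal illustration in Python with NumPy; the threshold value of 200 and the 4-connected flood fill are assumptions for illustration, not details given in the disclosure:

```python
import numpy as np

def extract_bright_regions(image, threshold=200):
    """Return the 4-connected regions of pixels whose luminance exceeds
    the threshold, each region as a set of (y, x) coordinates."""
    mask = image > threshold
    visited = np.zeros_like(mask, dtype=bool)
    rows, cols = mask.shape
    regions = []
    for y in range(rows):
        for x in range(cols):
            if mask[y, x] and not visited[y, x]:
                stack, region = [(y, x)], set()
                visited[y, x] = True
                while stack:  # flood fill over the 4-neighbourhood
                    cy, cx = stack.pop()
                    region.add((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                regions.append(region)
    return regions
```

Low-luminance reflections fall below the threshold and are never grouped, which mirrors the exclusion of road surface areas lit by the host vehicle's own headlamps.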
  • next, the light spot extraction unit 1064 sets a light spot region in xy coordinates in the image for each detected light spot. That is, for the rectangular light spot LP6, a rectangular light spot region indicated by the diagonal coordinates (xa, ya) to (xb, yb) is set, as shown by the broken line in FIG. 15A. The same applies to the rhombic light spot LP7 in FIG. 15B, the circular light spots LP1 to LP5 in FIG. 15C, and the white-line light spot LP8 in FIG. 15D; each light spot region is set by the coordinates (xm, ym) to (xn, yn) of the maximum values in the x and y directions. In these figures, m and n are values set for each light spot.
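Setting a light spot region by diagonal coordinates is simply an axis-aligned bounding box over the spot's pixels. A minimal sketch, in which the function name and the (x, y) tuple convention are illustrative:

```python
def light_spot_region(pixels):
    """Given a light spot as an iterable of (x, y) pixel coordinates,
    return the diagonal corners ((x_min, y_min), (x_max, y_max)) of the
    rectangular light spot region, as in FIG. 15A to 15D."""
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    return (min(xs), min(ys)), (max(xs), max(ys))
```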
  • next, the light spot identification unit 1065 identifies the shape of the light spot for each set light spot region. For example, as shown by the arrow in FIG. 15A, the luminance of the pixels in one row is measured while scanning in the x direction (row direction) from the position on the left side of the base of the set light spot region, that is, from the position (xa, ya), toward the right. The luminance threshold described above can be used in this measurement. The same measurement is then performed on the next row, and this is repeated for several rows in the y direction (column direction), for example 2 to 5 rows. In practice, about 3 rows are sufficient.
  • when the number of high-luminance pixels is substantially the same in every measured row, the shape of the light spot is identified as a rectangle.
  • when the number of high-luminance pixels changes between rows, the shape of the light spot is identified as a rhombus or a circle, as shown in FIG. 15B or 15C.
  • in that case, the number of rows to be measured is increased by several rows, and the rate of decrease of the high-luminance pixels is measured. If the rate of decrease is large, the shape is identified as the rhombus of FIG. 15B, and if the rate of decrease is smaller, it is identified as the circle of FIG. 15C.
  • for the white-line light spot, the shape is identified as a linear shape: when the amount of change in the position of the high-luminance pixels from row to row is constant, it is identified as a linear shape, and when the amount of change in position varies, it is identified as a curved shape.
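The row-scanning identification above can be condensed into a classifier over the per-row counts of high-luminance pixels. A minimal sketch, in which the one-pixel tolerance and the 0.3 drop-rate threshold are illustrative assumptions:

```python
def classify_light_spot(row_widths):
    """Classify a light spot from the number of high-luminance pixels
    counted on successive scan rows (bottom row first).

    - constant width on every row      -> rectangle
    - width falls off quickly at top   -> rhombus
    - width falls off gradually at top -> circle
    """
    if all(abs(w - row_widths[0]) <= 1 for w in row_widths):
        return "rectangle"
    peak_i = row_widths.index(max(row_widths))
    tail = row_widths[peak_i:]          # rows from the widest row upward
    if len(tail) < 2:
        return "unknown"
    drops = [tail[i] - tail[i + 1] for i in range(len(tail) - 1)]
    avg_drop = sum(drops) / len(drops)  # average per-row rate of decrease
    return "rhombus" if avg_drop > 0.3 * tail[0] else "circle"
```

A rhombus tapers linearly and steeply toward its vertex, while a circle's width shrinks slowly near its widest row, which is what the drop-rate test exploits.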
  • FIG. 16 shows the light spots tagged as "target light spot".
  • on the other hand, the tag "non-target light spot" is attached to the light spots LP6 and LP7, and these light spots are excluded from the vehicle detection targets.
  • next, the vehicle detection unit 1066 detects the vehicle by determining the behavior of the light spots tagged as "target light spot" in FIG. 16, in this case their moving state in the image.
  • that is, the change in the relative position of each light spot with respect to the host vehicle, that is, its moving direction and moving speed, is tracked over time, and the light spots of vehicles traveling on the road are detected based on the result.
  • FIG. 17 shows the light spot of the vehicle detected in this way.
  • the light spots LP1 and LP2 are detected as the preceding vehicle CAR11 and the oncoming vehicle CAR12.
  • the light spot LP3 from the oncoming vehicle CAR13 is small in size, so it can be determined to be located far away, and it is excluded because there is little possibility that the headlamps of the host vehicle will dazzle the oncoming vehicle CAR13 at this time.
  • the lighting control unit 1063 sets a light distribution that does not dazzle the preceding car CAR11 and the oncoming car CAR12, and executes the light distribution control in the lamp unit 101. Thereby, suitable ADB control is performed.
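The behavior determination above, tracking the moving direction and moving speed of a tagged light spot over successive frames, can be sketched as follows. Image coordinates with y increasing downward are assumed, and the 30-degree limit for "slightly below horizontal" is an illustrative assumption:

```python
import math

def spot_motion(positions):
    """positions: chronological (x, y) image coordinates of one tracked
    light spot. Returns (angle_deg, speed) of its average motion, where
    0 degrees is rightward and positive angles point below the horizon."""
    dx = positions[-1][0] - positions[0][0]
    dy = positions[-1][1] - positions[0][1]
    frames = len(positions) - 1
    return math.degrees(math.atan2(dy, dx)), math.hypot(dx, dy) / frames

def looks_like_oncoming(positions, max_down_angle=30.0):
    """A spot moving rightward along, or slightly below, the horizontal
    line H is treated as an oncoming vehicle (left-hand traffic)."""
    angle, speed = spot_motion(positions)
    return speed > 0 and 0.0 <= angle <= max_down_angle
```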
  • as described above, the light spot identification unit 1065 identifies light spots that are likely to be vehicles based on the image obtained by imaging with the camera 102, and the vehicle detection unit 1066 executes the vehicle detection process only for the identified light spots. Therefore, even when a large number of light spots exist in the captured image, no detection processing is performed for light spots that are clearly not, or are unlikely to be, from a vehicle. This reduces the number of processing steps when detecting a vehicle and enables rapid vehicle detection.
  • incidentally, the camera 102 is provided integrally with the headlamp HL1 and is therefore at substantially the same height as, or in some cases lower than, the headlamps of an oncoming vehicle. Accordingly, the moving direction of the oncoming vehicle in the captured image differs from that when the camera is arranged at a position higher than the lamps of the oncoming vehicle, for example on the front window or a side mirror of the automobile.
  • FIG. 18A is an image in the case where the camera is positioned higher than the headlamps of the oncoming vehicle CAR12, as when the camera is disposed at the upper part of the front window of the host vehicle.
  • in this case, the moving direction of the light spot LP2 from the headlamps of the oncoming vehicle CAR12 is directed toward the lower right with respect to the horizontal line H in the image, as indicated by the arrow.
  • FIG. 18B is an image in the case where the camera is integrated with the headlamp, as in this embodiment, and is at substantially the same height or position as the headlamps of the oncoming vehicle CAR12.
  • in this case, the moving direction of the light spot LP2 from the headlamps of the oncoming vehicle CAR12 is directed to the right along the horizontal line H in the image, or slightly below it.
  • therefore, the vehicle detection unit 1066 of the embodiment can detect a light spot moving in the horizontal direction in the image, or in a direction slightly below horizontal toward the lower right, as the light spot of an oncoming vehicle.
  • however, the camera 102 provided on the host vehicle may be tilted in the roll direction (the rotation direction about the longitudinal axis of the automobile), for example mounted tilted down to the left.
  • in that case, the image captured by the camera 102 is also tilted down to the left, as shown in FIG. 18C.
  • then, the light spot LP2 of the headlamps of the oncoming vehicle CAR12 moves toward the upper right with respect to the horizontal line H in the image. Therefore, if the light spot is judged only by its moving direction in the image, the light spot LP2 of the oncoming vehicle may be erroneously detected as a light spot other than a vehicle, for example a sign or road illumination. For this reason, when detecting the vehicle, it is necessary to detect not only the moving direction of the light spot in the image but also its moving speed.
  • to deal with this, the vehicle detection unit 1066 refers to the light spots identified as "non-target light spots" by the light spot identification unit 1065.
  • that is, the vehicle detection unit 1066 refers to the light spot LP6 of the sign S1, identified as having a rectangular shape, and sets the direction of its detected horizontal sides, that is, the bottom and top sides, as the corrected horizontal direction DH. The vehicle detection unit 1066 then determines the moving direction of the light spot LP2 of the oncoming vehicle CAR12 in the image, indicated by the arrow, relative to this corrected horizontal direction DH.
  • in other words, even when the image is tilted by the roll of the host vehicle, the vehicle detection unit 1066 can correct the horizontal reference of the image to the corrected horizontal direction DH. If the moving direction of the light spot LP2 of the oncoming vehicle CAR12 is detected with reference to the corrected horizontal direction DH, the vehicle detection unit 1066 detects that the light spot is moving in the horizontal direction or toward the lower right, and can accurately detect the oncoming vehicle CAR12. That is, the vehicle can be accurately detected only by detecting the moving direction, without detecting the moving speed of the light spot, which simplifies the processing and speeds up the detection.
  • when a plurality of rectangular light spots exist, the average direction of the bottom and top sides of the plurality of rectangles may be taken as the reference horizontal direction.
  • alternatively, the bottom or top side of the rectangular light spot with the largest dimensions, whose shape can easily be identified with high accuracy, may be employed.
  • the reference horizontal direction may also be set using a light spot of another shape. For example, in the case of a rhombic light spot, the horizontal diagonal may be used as the reference horizontal direction. If the vehicle has a vehicle height sensor or the like and the roll angle of the host vehicle can be detected, the moving direction of the light spot in the image may be corrected based on the detected roll angle.
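Measuring a spot's moving direction against the corrected horizontal direction DH amounts to subtracting the angle of the sign's bottom edge from the raw motion angle. A minimal sketch; the function names are illustrative:

```python
import math

def corrected_horizontal_angle(edge_start, edge_end):
    """Angle in degrees of a rectangular sign's bottom (or top) edge in
    the image; under camera roll this edge serves as the corrected
    horizontal direction DH."""
    dx = edge_end[0] - edge_start[0]
    dy = edge_end[1] - edge_start[1]
    return math.degrees(math.atan2(dy, dx))

def motion_angle_relative_to_dh(motion_vec, dh_angle_deg):
    """Moving direction of a light spot measured relative to DH instead
    of the raw image horizontal, so that camera roll cancels out."""
    raw = math.degrees(math.atan2(motion_vec[1], motion_vec[0]))
    return raw - dh_angle_deg
```

With this correction, a spot whose motion parallels the sign's base reads as exactly horizontal regardless of how far the camera is rolled.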
  • the light spot LP8 of the white line LINE identified by the light spot identifying unit 1065 may be used to further improve the accuracy of vehicle detection, particularly the accuracy of oncoming vehicle detection.
  • the camera 102 is provided integrally with the headlamp HL1 and is therefore located lower and closer to the road surface than when provided on the front window or a side mirror, so the white line LINE can be imaged more clearly.
  • in addition, since the white line LINE is imaged from light of the headlamps of the oncoming vehicle and the host vehicle reflected on the road surface, it is normally difficult to image it with high brightness; by providing the camera in the headlamp, however, it can be imaged as a high-luminance light spot.
  • that is, among the plurality of "target light spot" light spots identified by the light spot identification unit 1065, the vehicle detection unit 1066 can detect as vehicle light spots only those that are in the vicinity of the recognized white-line light spot LP8, located along the extension direction of the light spot LP8, and moving in that extension direction. Therefore, by performing vehicle detection based on the moving direction and moving speed of only these light spots, the accuracy of vehicle detection is improved, the processing steps for vehicle detection are simplified, and a higher detection speed can be realized.
  • in particular, for an oncoming vehicle, the vehicle can be accurately detected simply by detecting that the moving direction of the light spot in the image is close to the white-line light spot LP8 and along its extension direction. That is, even when the host vehicle rolls and the camera tilts together with the vehicle body, accurate vehicle detection is possible without considering the corrected horizontal direction described with reference to FIG. 18C, and complicated processing can be avoided.
  • when the road is curved, the vehicle detection unit 1066 calculates the curvature from the shape of the light spot LP8 at the time it is identified by the light spot identification unit 1065.
  • when detecting the vehicle from the moving direction and moving speed of the light spot, the vehicle detection unit 1066 may then perform correction, such as subtracting from or adding to the movement amount, based on the direction of the curve and the calculated curvature. This makes it possible to accurately detect the vehicle from the light spot.
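Restricting detection to spots near the white-line light spot LP8 that move along its extension direction can be sketched as a distance-and-angle test. The straight-segment approximation of the white line and the 40-pixel / 20-degree thresholds are illustrative assumptions:

```python
import math

def near_and_along_white_line(spot_pos, spot_motion, line_p1, line_p2,
                              max_dist=40.0, max_angle_deg=20.0):
    """True if a light spot lies near the white-line light spot LP8 and
    moves along its extension direction."""
    # unit vector along the white line
    lx, ly = line_p2[0] - line_p1[0], line_p2[1] - line_p1[1]
    norm = math.hypot(lx, ly)
    ux, uy = lx / norm, ly / norm
    # perpendicular distance of the spot from the line
    px, py = spot_pos[0] - line_p1[0], spot_pos[1] - line_p1[1]
    dist = abs(px * uy - py * ux)
    # angle between the spot's motion and the line direction
    mnorm = math.hypot(spot_motion[0], spot_motion[1])
    if mnorm == 0.0:
        return False
    cos_a = abs(spot_motion[0] * ux + spot_motion[1] * uy) / mnorm
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    return dist <= max_dist and angle <= max_angle_deg
```

Because the test is expressed relative to the white line rather than the image horizontal, it stays valid when the camera rolls together with the vehicle body.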
  • the vehicle detection device of the present disclosure is used for light distribution control by ADB control of a headlamp, but it may be used to perform AHB control or other light distribution control. Further, since the vehicle can be detected quickly and accurately, it can also be used for automatic driving control of an automobile.
  • the vehicle detection device of the present disclosure can be configured to take an image of a side region and a rear region of an automobile and detect other vehicles existing in the side and rear regions of the host vehicle.
  • the camera may be integrated into a side mirror or tail lamp.


Abstract

Provided are: a lamp unit (1); an image capture means (camera) (2) that captures an image of a light irradiation region of the lamp unit (1); and a control means (lamp ECU) (6) for controlling the lamp unit (1) and the image capture means (2), the control means (6) being connected to a signal bus (CAN) (100) installed in a vehicle. Aiming of the lamp unit (1) is adjusted on the basis of signals obtained by performing signal-processing on image signals captured by the image capture means (2). When a prescribed command signal has been inputted, the control means (6) outputs the signal-processed signal to the signal bus (100), and adjusts aiming using an aiming adjuster connected to the signal bus (100).

Description

Vehicular lamp, vehicle detection method, and vehicle detection device

The present disclosure relates to a vehicular lamp suitable for application to an automobile headlamp, and particularly to a vehicular lamp provided with imaging means such as a camera. The present disclosure also relates to a vehicle detection method and a vehicle detection device that are mounted on a vehicle such as an automobile and detect other vehicles such as oncoming vehicles and preceding vehicles.
As automobile headlamps, headlamps capable of light distribution control such as ADB (Adaptive Driving Beam) control and AHB (Automatic High Beam) control (hereinafter sometimes referred to as ADB control and the like), which shield some areas so as not to dazzle oncoming and preceding vehicles while illuminating the other areas over a wide range, have been proposed. For this type of lamp, a headlamp in which a camera is integrally incorporated as imaging means for detecting oncoming and preceding vehicles has been proposed. For example, in Patent Document 1, a camera unit is housed together with a lamp unit in a lamp housing, and imaging is performed by this camera unit.

By incorporating the imaging means in the headlamp, the imaging optical axis of the imaging means and the light irradiation optical axis of the lamp unit can be brought closer together than when the imaging means is disposed on the front window of the automobile, so ADB control and the like can be performed with higher accuracy. That is, by bringing the central axis of the image capturing the oncoming and preceding vehicles close to the central axis of the light distribution for ADB control and the like, the light distribution error caused by the parallax between the two central axes can be reduced.
In addition, in order to improve the visibility of the area ahead of the host vehicle, ADB (Adaptive Driving Beam) control and AHB (Automatic High Beam) control are performed, in which another vehicle present in the forward area is detected and the light distribution of the headlamp is controlled so as not to dazzle that vehicle. In recent years, other vehicles have also been detected for automatic driving control. As a technique for detecting such other vehicles, it has been proposed to image the surrounding area of the host vehicle with imaging means and analyze the obtained image to detect the other vehicles.

Patent Document 2 proposes a technique in which light spots in an image obtained by imaging means are detected and, from the attributes of the detected light spots, for example their color and behavior (moving state), street lamps and building illumination are distinguished from the headlamps and tail lamps of other vehicles to detect the other vehicles. Patent Document 2 also proposes a technique for shortening the detection time by setting the detection area for detecting other vehicles to a predetermined area based on the attributes of the detected light spots and performing the discrimination on the set area.
Patent Document 1: Japanese Unexamined Patent Publication No. 2013-164913; Patent Document 2: Japanese Unexamined Patent Publication No. 2012-20662
Incidentally, aiming adjustment for adjusting the optical axis direction of the lamp unit is performed after the headlamp is assembled to the vehicle body of the automobile. In a headlamp incorporating imaging means as described above, it is conceivable to perform the aiming adjustment while checking the light distribution pattern imaged by the imaging means.

Also, when an oncoming or preceding vehicle is detected using the imaging means for ADB control and the like, vehicle detection is performed by analyzing the image captured by the imaging means, and the use of AI (Artificial Intelligence) with a learning function is being considered in order to increase the detection accuracy of this image analysis.

Conventionally, no satisfactory method has been proposed for such aiming adjustment or AI learning, as will be described in detail later. For this reason, aiming adjustment and AI learning remain issues in realizing suitable ADB control and the like using imaging means.
An object of the present disclosure is to provide a vehicular lamp capable of realizing suitable light distribution control.

The technique of Patent Document 2 is effective in shortening the detection time by limiting the detection area. However, in recent years, as the driving environment of automobiles has diversified, it has become necessary to detect other vehicles over a wide area, and it is difficult for the technique of Patent Document 2 to meet this requirement. At the same time, light emitters such as road signs and advertisements have increased, and the number of light spots in a captured image tends to grow. Consequently, processing to discriminate other vehicles based on the behavior of all of these light spots becomes necessary, increasing the number of detection steps and the detection time.

An object of the present disclosure is to provide a vehicle detection method and a vehicle detection device capable of narrowing down the light spots in a captured image and detecting a vehicle accurately and quickly.
The present disclosure is a vehicular lamp comprising a lamp unit that performs light irradiation, imaging means that images at least the light irradiation area of the lamp unit, and control means, connected to a signal bus disposed in the vehicle, for controlling the lamp unit and the imaging means, the lamp being configured to perform aiming adjustment of the lamp unit based on a signal obtained by signal-processing the imaging signal captured by the imaging means, wherein the control means outputs the signal-processed signal to the signal bus when a predetermined command signal is input. The signal-processed signal is preferably an image signal based on the imaging signal or an aiming adjustment signal.

In the present disclosure, the camera and the lamp unit are preferably housed in a lamp housing, and the camera is preferably supported detachably with respect to the lamp housing. For example, when vehicular lamps are disposed on both sides of the vehicle, the camera is disposed in a selected one of the vehicular lamps.

Furthermore, in the present disclosure, the control means preferably includes detection means for detecting another vehicle based at least on the imaging signal captured by the camera, and the detection means is preferably machine-learned based on automatically generated driving simulation data. In the driving simulation data, it is preferable that driving image data and teacher data are automatically generated.
The vehicle detection method of the present disclosure is a detection method in which the surroundings of a vehicle are imaged from the vehicle and a vehicle is detected from the behavior of light spots appearing in the captured image. The method includes a step of recognizing the shapes of a plurality of light spots present in the image; a step of classifying, based on the recognized shapes, light spots that are vehicle detection candidates as target light spots and the remaining light spots as non-target light spots; and a step of detecting the behavior of the target light spots in the image and detecting a light spot exhibiting a predetermined behavior as a vehicle.
The vehicle detection device of the present disclosure includes an imaging unit mounted on a vehicle to image the surroundings of the vehicle, and a detection unit that detects a vehicle from the light spots appearing in the image captured by the imaging unit. The detection unit includes a light spot identification unit that distinguishes target light spots, which are vehicle detection candidates, from the other non-target light spots among the plurality of light spots present in the image, and a vehicle detection unit that detects a vehicle from the behavior of the identified target light spots in the image. The light spot identification unit recognizes the shapes of the plurality of light spots and distinguishes the target light spots from the non-target light spots based on this recognition.
According to the present disclosure, an aiming adjuster can acquire, through the vehicle's signal bus, a signal obtained by signal-processing the imaging signal captured by the imaging unit, for example an image signal or an aiming adjustment signal obtained by calculation. High-precision aiming adjustment can therefore be performed easily, and suitable light distribution control can be realized.
Further, according to the present disclosure, the shapes of the light spots in the image captured by the imaging unit are recognized, light spots likely to belong to a vehicle are identified from these shapes as target light spots, and the processing for detecting vehicles is executed only for the identified target light spots. Consequently, even when many light spots are present in the image, there is no need to run detection processing on light spots unlikely to belong to a vehicle; the processing load of vehicle detection is reduced and vehicles can be detected quickly.
FIG. 1 is a conceptual diagram of an automobile to which a headlamp according to the present disclosure is applied.
FIG. 2 is a schematic horizontal sectional view of the right headlamp.
FIG. 3 is an exploded perspective view of the camera.
FIG. 4 is a block diagram of the lamp ECU.
FIG. 5 is a flow diagram of DL (deep learning).
FIG. 6 is a conceptual diagram showing the first aiming adjustment mode.
FIG. 7 is a conceptual diagram showing the second aiming adjustment mode.
FIG. 8 is a schematic diagram showing the CAN connection of the left and right headlamps.
FIG. 9 is a block diagram of the lamp ECU of the headlamp of FIG. 8.
FIGS. 10 to 12 are schematic diagrams showing images captured by the cameras of the left and right headlamps and the corresponding light distributions.
FIG. 13 is a schematic horizontal sectional view of a headlamp to which the present disclosure is applied.
FIG. 14 is a block diagram of a lamp ECU including the vehicle detection means.
FIG. 15 is a schematic diagram of an example of the area ahead of the host vehicle imaged by the camera.
FIG. 16 is a schematic diagram of light spots obtained by imaging with the camera.
FIGS. 17 to 20 are conceptual diagrams for explaining light spot extraction regions and shape recognition.
FIG. 21 is a schematic diagram of target light spots.
FIG. 22 is a schematic diagram of light spots detected as a vehicle.
FIG. 23 is a conceptual diagram explaining the vehicle detection method when the camera is in a high position.
FIG. 24 is a conceptual diagram explaining the vehicle detection method when the camera is in a low position.
FIG. 25 is a conceptual diagram explaining the vehicle detection method when the image is tilted.
FIG. 26 is a conceptual diagram explaining a vehicle detection method that uses the light spots of white lines.
(Vehicle lamp)
Next, embodiments of the vehicular lamp according to the present disclosure will be described with reference to the drawings. FIG. 1 is a conceptual diagram of an embodiment in which the present disclosure is applied to automobile headlamps HL. The left and right headlamps L-HL and R-HL of the automobile CAR each house, in a lamp housing 3, a lamp unit 1 capable of ADB control. The lamp unit 1 includes, for example, a light source 11 made up of a plurality of LEDs (light-emitting diodes); all or a selected subset of the LEDs 11 are caused to emit light, and the emitted light is projected by a projection lens 12 onto the area ahead of the automobile, making it possible to irradiate light in a desired light distribution pattern P.
For example, in FIG. 1, the light emitting regions of the plurality of LEDs 11 are each set to illuminate a predetermined segment region Ap of the area ahead of the automobile, and only the segment regions Ap corresponding to the selectively lit LEDs 11 are illuminated. Accordingly, when all the LEDs 11 emit light, the entire area of the light distribution pattern P is illuminated. Since a region corresponding to an extinguished LED is not illuminated, glare toward oncoming and preceding vehicles can be prevented by, for example, extinguishing the LEDs for the regions in which such vehicles are detected.
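The segment-wise extinguishing described above can be sketched as follows (a minimal illustration assuming a hypothetical one-to-one mapping between LED indices and horizontal segment regions; the actual LED-to-segment mapping of the lamp unit is not specified in this disclosure):

```python
def compute_led_states(num_leds, vehicle_segments):
    """Return one on/off state per LED: extinguish every LED whose
    segment region contains a detected vehicle, keep the rest lit."""
    return [i not in vehicle_segments for i in range(num_leds)]

# Example: 8 LEDs, other vehicles detected in segment regions 2 and 3
states = compute_led_states(8, {2, 3})
# LEDs 2 and 3 are extinguished; the remaining segments stay illuminated
```

With all LEDs lit (an empty `vehicle_segments` set), the full pattern P is produced, matching the behavior described in the text.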
A camera 2 that captures moving images is also housed, as imaging means, in the lamp housing 3 of each of the headlamps L-HL and R-HL. The camera 2 images at least the area ahead of the automobile irradiated with light by the lamp unit 1 and outputs an imaging signal.
The left and right headlamps L-HL and R-HL are assembled to the left and right of the front of the body of the automobile CAR in the vehicle assembly process. After this assembly, optical axis adjustment of the lamp unit 1 and the camera 2, that is, aiming adjustment, is performed. Through this aiming adjustment, the irradiation optical axis of the lamp unit 1 and the imaging optical axis of the camera 2 are set in predetermined directions, so that highly accurate ADB control can be realized.
FIG. 2 is a schematic horizontal sectional view of the right headlamp R-HL of the headlamps HL. The lamp housing 3 includes a lamp body 31 and a translucent front cover 32. A base plate 5 is supported on the lamp body 31 by an aiming mechanism 4, and the lamp unit 1 and the camera 2 are attached to this base plate 5. Here, they are attached with the irradiation optical axis of the lamp unit 1 and the imaging optical axis of the camera 2 directed in the same direction.
Accordingly, by turning an aiming screw 41, which forms part of the aiming mechanism 4, manually or by motor, the base plate 5 is tilted vertically and horizontally, and the irradiation optical axis of the lamp unit 1 and the imaging optical axis of the camera 2 are aiming-adjusted together. Since an aiming mechanism 4 having such an aiming screw 41 is already known, a detailed description is omitted; in this embodiment the aiming adjustment is performed manually.
As shown in the exploded perspective view of FIG. 3, the camera 2 includes a camera body 21 having an imaging lens 211 and an imaging element 212, and a camera holder 22 that holds the camera body 21. The camera holder 22 is fixed to the base plate 5 by appropriate means, and the camera body 21 is detachably attached to the camera holder 22. In this example, the camera body 21 can be attached and detached by sliding it vertically into and out of the camera holder 22 from outside the lamp housing 3, through an opening (not shown) provided in the upper surface of the lamp body 31. The camera body 21 and the camera holder 22 are provided with electrodes 23 and 24, respectively; when the camera body 21 is mounted on the camera holder 22, the two are electrically connected to each other through these electrodes 23 and 24.
Meanwhile, as shown in FIG. 2, a lamp ECU (electronic control unit) 6 is housed in the lamp housing 3, and the lamp unit 1 and the camera 2 are electrically connected to the lamp ECU 6. FIG. 4 is a block diagram of the lamp ECU 6. The lamp ECU 6 includes a main control unit 61, to which an input/output unit 62, a signal processing unit 63, an image analysis unit 64, and a lighting drive unit 65 are connected; the main control unit 61 controls the operation of each of these units.
The lamp unit 1 and the camera 2 described above are connected to the input/output unit 62; in the camera 2, the camera body 21 is connected to the input/output unit 62 via the camera holder 22. The input/output unit 62 is also connected to a signal bus running through the automobile CAR, for example a CAN (Controller Area Network) 100, and is thereby interconnected with the automobile's various other ECUs (not shown) via the CAN 100.
In the lamp ECU 6, the signal processing unit 63 processes the imaging signal of the camera 2 input from the input/output unit 62 to generate a moving-image or still-image signal. The image analysis unit 64 analyzes the image obtained from the generated image signal and detects objects in the captured image, in particular other vehicles such as oncoming and preceding vehicles; the image analysis unit 64 thus serves as other-vehicle detection means. The lighting drive unit 65 recognizes the objects detected by the image analysis unit 64, sets an appropriate light distribution pattern that does not dazzle oncoming or preceding vehicles, and generates an ADB control signal for producing this light distribution pattern. Based on this ADB control signal, it controls the light emission of the light source of the lamp unit 1, that is, of the plurality of LEDs 11.
In this way, the lamp ECU 6 detects other vehicles such as oncoming and preceding vehicles based on the image captured by the camera 2, and controls the lighting of the lamp unit 1 to produce a light distribution pattern that does not dazzle the detected vehicles while brightly illuminating the remaining regions, thereby executing appropriate ADB control.
Here, the image analysis unit 64, that is, the other-vehicle detection means, is preferably configured to detect other vehicles through machine learning based on driving simulation data generated for DL (deep learning) using AI. To improve the object detection performance of such DL, it is desirable to carry out a large amount of learning without interruption, which requires a large amount of driving video data, together with teacher data giving the correct answers for that video, to be prepared in advance for training. These data are therefore generated automatically.
Collecting driving data with actual vehicles would require driving on as many roads as possible in the destination market, at enormous cost and time. In addition, a person would have to review the accumulated driving video, extract the presence and positions of vehicles, and create the correct-answer data; the more data is accumulated, the more time this requires.
In this embodiment, when executing DL, the image analysis unit 64 repeats, as shown in the flow of FIG. 5, automatic generation of the driving environment (S1), automatic generation and placement of road objects (S2), automatic generation of driving routes (S3), acquisition of correct-answer data by driving the host vehicle (S4), and training of the learner (S5). This increases the amount of learning and improves the detection performance.
That is, video of driving on roads in a virtual space is created with a driving simulator. The presence of vehicles seen from the host vehicle's camera and their positions in the image are obtained from the positions, orientations, and distances of the vehicles placed at arbitrary coordinates in the virtual space, and from the orientation and position of the host vehicle's camera, by performing a 3D-to-2D conversion and the like, and are recorded as correct-answer data.
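The 3D-to-2D conversion used to derive image positions for the correct-answer data can be sketched with a standard pinhole projection (a minimal illustration under assumed camera-frame conventions and hypothetical intrinsic parameters; the simulator's actual coordinate system and camera model are not specified in this disclosure):

```python
def project_to_image(point_cam, fx, fy, cx, cy):
    """Project a 3D point given in camera coordinates (x right, y down,
    z forward, in meters) to pixel coordinates with a pinhole model."""
    x, y, z = point_cam
    if z <= 0:
        return None  # behind the camera: not visible, so no label is made
    u = fx * x / z + cx
    v = fy * y / z + cy
    return (u, v)

# A vehicle 20 m ahead and 2 m to the right, level with the camera,
# with hypothetical intrinsics (focal lengths 1000 px, center 640, 360)
uv = project_to_image((2.0, 0.0, 20.0), fx=1000, fy=1000, cx=640, cy=360)
# → (740.0, 360.0)
```

In the simulator, the same projection applied to each placed vehicle yields the image coordinates that are recorded as ground truth alongside the rendered frame.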
Also, by setting the camera parameters (F-number, angle of view, exposure time, and so on) so that the camera image seen from the camera's mounting position is reproduced correctly, video close to the actual camera video can be obtained.
For the virtual space, a test course or the like in which already-created road objects are placed is used.
The host vehicle then drives all of the placed roads (in both directions). The host vehicle and the other vehicles circulate the course at the set road speed limit ±20 km/h and are set to obey traffic rules such as traffic lights and stop signs; the vehicles are also prevented from colliding with one another. The images obtained by driving the host vehicle, together with the correct answers, are then input to the learner as teacher data.
In the automatic generation of driving routes, roads are generated in the virtual space from actual road map images (2D maps and satellite photographs). In this case, image processing is used to distinguish roads from everything else in the maps and satellite photographs, and the roads are generated accordingly. The number of lanes may be set appropriately from the road width, or lanes may be assigned forcibly according to the test case.
Objects other than roads, such as buildings, parking lots, signboards, and mountainous terrain, may be placed appropriately according to the non-road area. Pedestrian crossings and traffic lights are placed where roads intersect. Other vehicles may be placed according to the lengths of the generated buildings and roads, and may be set to keep moving from one building to another.
By repeating learning with the driving simulator as described above, the detection performance of the DL is improved. Therefore, in this embodiment, the accuracy with which the image analysis unit of the lamp ECU, machine-learned on simulation data obtained through such learning, detects other vehicles such as oncoming and preceding vehicles is increased, and suitable ADB control becomes possible.
The configuration of the left headlamp L-HL is substantially the same, except that the arrangement of the lamp unit 1 and the camera 2 in the lamp housing 3 is the mirror image of the configuration of FIG. 2. Although not illustrated, in the left headlamp L-HL as well, a lamp ECU is housed in the lamp housing and connected to the CAN 100.
After these left and right headlamps L-HL and R-HL have been attached to the body of the automobile CAR, aiming adjustment is performed so that the irradiation optical axis of the lamp unit 1 and the imaging optical axis of the camera 2 are directed in predetermined directions. In this aiming adjustment, as shown in FIG. 1, the lamp unit 1 irradiates light onto a screen Sc placed at a predetermined position in front of the automobile CAR, and the light distribution pattern P generated by this irradiation is imaged by the camera 2. The irradiation optical axis of the lamp unit 1 is then detected from the captured image of the light distribution pattern, and the aiming screw 41 is operated so that this irradiation optical axis is directed to a predetermined position, for example the center O at the intersection of the horizontal line H and the vertical line V. The aiming adjustment of the lamp unit 1 simultaneously performs the aiming adjustment of the imaging optical axis of the camera 2.
Two modes of such aiming adjustment are conceivable. FIG. 6 is a schematic diagram for explaining the first adjustment mode. In this first mode, an external connector 7 is provided on part of the lamp housing 3 and is connected directly to the camera 2, so that the imaging signal captured by the camera 2 can be output directly through the external connector 7. A first aiming adjuster 8A is connected to this external connector 7. The first aiming adjuster 8A includes a signal processing unit 81 similar to the signal processing unit 63 provided in the lamp ECU 6, a monitor 82 for displaying the image signal generated by this signal processing unit 81, and a monitor drive unit 83 for displaying the image corresponding to the image signal on the monitor 82.
For the aiming adjustment, reference marks are provided in advance on the screen Sc shown in FIG. 1, and the attitude of the automobile CAR is adjusted to these references. The lamp unit 1 is lit to irradiate the screen Sc, and the projected light distribution pattern P is imaged by the camera 2. The first aiming adjuster 8A takes in the imaging signal through the external connector 7 and generates an image signal in its signal processing unit 81; the monitor drive unit 83 then displays the image, that is, the captured light distribution pattern, on the monitor 82 based on the generated image signal. While viewing the monitor 82, the operator checks how far the displayed light distribution pattern deviates from the predetermined position, for example the center, in each of the three directions of yaw, roll, and pitch, and manually adjusts the aiming screw 41 until the deviation is eliminated. Aiming adjustment is thereby carried out.
In this first adjustment mode, the lamp housing 3 of the headlamp HL must be provided with the external connector 7 for connecting the first aiming adjuster 8A in order to take in the imaging signal captured by the camera 2; this is an obstacle to downsizing of the headlamp HL and a cause of increased cost. Connecting to and disconnecting from the external connector 7 is also required, which takes time and lowers work efficiency. Furthermore, the first aiming adjuster 8A must incorporate the signal processing unit 81, making it expensive.
In the second adjustment mode, by contrast, as shown in FIG. 7, no external connector is provided on the lamp housing 3 of the headlamp HL. A second aiming adjuster 8B has no signal processing unit, but includes the monitor 82 and the monitor drive unit 83, and can be connected to the CAN 100 to which the lamp ECU 6 is connected. This connection to the CAN 100 can easily be made through an existing CAN connector provided on the automobile CAR.
To carry out this second adjustment mode, a predetermined program is incorporated in advance in the main control unit 61 of the lamp ECU 6. When a predetermined command signal output from the second aiming adjuster 8B through the CAN 100 is input, the lamp ECU 6 outputs (transmits) the image signal generated by the signal processing unit 63 from the input/output unit 62 to the CAN 100. As this predetermined command signal, for example, a signal output when the power of the second aiming adjuster 8B is turned on, or a signal output by a preliminary operation on the second aiming adjuster 8B, can be adopted.
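The command-triggered behavior of the main control unit can be sketched as a simple dispatch (a minimal illustration; the command ID `AIMING_IMAGE_REQUEST` and the message layout are hypothetical, since no CAN message definitions are given in this disclosure):

```python
AIMING_IMAGE_REQUEST = 0x10  # hypothetical command ID from the aiming adjuster

def handle_bus_command(command_id, image_signal, bus_tx):
    """On receiving the predetermined command, place the processed image
    signal on the signal bus transmit queue; ignore any other command."""
    if command_id == AIMING_IMAGE_REQUEST:
        bus_tx.append(image_signal)
        return True
    return False

tx_queue = []
handled = handle_bus_command(0x10, b"frame-data", tx_queue)
# the image signal is now queued for transmission on the bus
```

The adjuster on the other end of the bus simply receives the queued frame, so no dedicated connector or signal processing unit is needed on its side.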
In this second adjustment mode, the second aiming adjuster 8B inputs (receives) the image signal output from the lamp ECU 6 to the CAN 100, and displays it on the monitor 82 via the monitor drive unit 83. The operator performing the aiming adjustment can thus carry it out while viewing the image captured by the camera 2 on the monitor 82, just as in the first adjustment mode.
Accordingly, in the second adjustment mode, there is no need to provide an external connector on the lamp housing 3, and no connection to or disconnection from an external connector is required. The second adjustment mode therefore allows the headlamp HL to be made smaller and cheaper, while realizing quick and easy adjustment. Furthermore, since the aiming adjuster need not incorporate a signal processing unit, the second aiming adjuster 8B can be configured at a low price.
By configuring the lamp ECU 6 in this way to realize the second adjustment mode, quick and easy aiming adjustment can be achieved even with an aiming adjuster of simple configuration such as the second aiming adjuster 8B. The irradiation optical axis of the lamp unit 1 and the imaging optical axis of the camera 2 in the headlamp HL can thereby be adjusted accurately, and suitable ADB can be realized.
Here, in the second adjustment mode, if the aiming screw 41 of the aiming mechanism 4 provided in the headlamp HL is configured so that it can be adjusted automatically by a motor or the like, the aiming adjuster may additionally be provided (not illustrated) with a calculation unit that calculates the amount by which the light distribution pattern deviates from the predetermined position, and a motor drive unit that feedback-controls the motor so that this deviation becomes zero. Aiming adjustment can then be performed automatically. In this case, since the aiming adjustment can be carried out using only the image signal, the monitor and the monitor drive unit may be omitted.
Meanwhile, in the second adjustment mode, when outputting the image signal, the main control unit 61 of the lamp ECU 6 may output to the CAN 100 only the image signal of the region required for aiming adjustment, for example a predetermined region including the center of the imaging area captured by the camera 2. This reduces the amount of data when outputting the image signal and speeds up the aiming adjustment. Alternatively, when the aiming deviation is expected to be small, the luminance of the captured light distribution pattern may be measured and only the image signal of the high-luminance region output to the CAN 100, since that region is very likely to be the region containing the center.
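The high-luminance region selection can be sketched as follows (a minimal illustration on a grayscale frame represented as a list of rows; the threshold value is an assumption, as the disclosure does not specify how "high luminance" is judged):

```python
def brightest_roi(frame, threshold=200):
    """Return the bounding box (row0, col0, row1, col1) of all pixels at
    or above the luminance threshold, or None if no pixel qualifies.
    Only this cropped region would then be output to the signal bus."""
    coords = [(r, c) for r, row in enumerate(frame)
              for c, v in enumerate(row) if v >= threshold]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return (min(rows), min(cols), max(rows) + 1, max(cols) + 1)

frame = [
    [0,   0,   0, 0],
    [0, 250, 240, 0],
    [0, 230,   0, 0],
    [0,   0,   0, 0],
]
roi = brightest_roi(frame)  # bounding box of the bright spot
```

Transmitting only the pixels inside this bounding box, rather than the full frame, is what reduces the bus data volume.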
Furthermore, the main control unit 61 of the lamp ECU 6 may itself be provided with the calculation unit that, in the second aiming adjuster, calculated the deviation amount, or with a calculation unit that computes from the deviation the rotation amount of the aiming screw motor. Either the deviation amount or the rotation amount is output as an aiming adjustment signal. With the former aiming adjustment signal, the lamp ECU 6 need only output the calculated deviation amount to the CAN 100, and the second aiming adjuster need only include a motor drive unit. With the latter aiming adjustment signal, the lamp ECU 6 need only output the calculated motor rotation amount, and the configuration of the motor drive unit of the second aiming adjuster can be simplified. This makes the aiming adjustment even simpler and faster.
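The conversion from a measured deviation to an aiming-screw motor rotation amount can be sketched as follows (a minimal illustration; the gain of 0.5 degrees of optical-axis movement per motor turn is a hypothetical mechanism constant, since none is given in this disclosure):

```python
DEG_PER_TURN = 0.5  # hypothetical optical-axis change per motor turn

def deviation_to_turns(deviation_deg):
    """Convert an optical-axis deviation in degrees into the signed
    number of motor turns needed to drive the deviation to zero."""
    return -deviation_deg / DEG_PER_TURN

# Pattern measured 1.25 deg off the target position
turns = deviation_to_turns(1.25)  # → -2.5 turns to cancel the offset
```

Outputting `turns` rather than the raw deviation corresponds to the latter aiming adjustment signal in the text, leaving the adjuster's motor drive unit with nothing to compute.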
 In a lamp ECU configured to realize this second aiming adjustment mode, the lamp ECUs 6 of the left and right headlamps L-HL and R-HL can exchange signals with each other through the CAN 100, as shown in FIG. 8. That is, an image signal output from one headlamp HL to the CAN 100 can be captured by the lamp ECU 6 of the other headlamp HL. For example, when the lamp ECU 6 of the other headlamp HL receives a command signal that the lamp ECU 6 of one headlamp HL has output to the CAN 100, the receiving lamp ECU 6 outputs the image signal it has generated to the CAN 100, and the lamp ECU 6 of the first headlamp HL can then receive and capture this image signal.
 To this end, the lamp ECUs 6 (6L, 6R) of the left and right headlamps L-HL and R-HL each include an abnormality detection unit 66 that detects an abnormality of the camera, as shown in FIG. 9. The main control unit 61 is configured to output a predetermined abnormality command signal when the abnormality detection unit 66 detects an abnormality of the camera 2.
 With this configuration, when an abnormality occurs in, for example, the camera 2R of the right headlamp R-HL, the right headlamp R-HL can no longer perform ADB control of its lamp unit 1 based on the image signal captured by its own camera 2R. In that case, the main control unit 61 of its lamp ECU 6 outputs the abnormality command signal to the CAN 100. Here, a camera abnormality means a state in which a normal image cannot be acquired, for example because imaging has become impossible or the field of view has deteriorated.
 The lamp ECU 6L of the left headlamp L-HL normally executes ADB control based on the image signal captured by its own camera 2L. When the abnormality command signal output from the lamp ECU 6R of the right headlamp R-HL arrives through the CAN 100, however, the lamp ECU 6L also outputs the image signal captured by its camera 2L to the CAN 100. This image signal is delivered through the CAN 100 to the lamp ECU 6R of the abnormal right headlamp R-HL, so the right headlamp R-HL executes ADB control based on the image signal captured by the camera 2L of the left headlamp L-HL.
 As a result, even if an abnormality occurs in the camera 2L or 2R of one of the left and right headlamps L-HL and R-HL, ADB control can be maintained in both headlamps based on the image captured by the remaining normal camera 2R or 2L. In this case, the headlamp whose camera has actually failed has a larger parallax angle between the irradiation optical axis of its lamp unit and the imaging optical axis of the normal camera, so it is preferable to enlarge the ADB control margin in that headlamp to reliably prevent glare to oncoming and preceding vehicles.
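The failover rule described in the last three paragraphs can be condensed into a small decision function. This is a sketch under stated assumptions: the function name, the tuple return value, and the low-beam fallback are illustrative, not part of the disclosure.

```python
def select_adb_source(own_camera_ok, other_camera_ok):
    """Decide which camera image a lamp ECU should use for ADB control.

    Returns (source, widen_margin): 'own' or 'other' camera, plus
    whether the shading margin should be widened to compensate for the
    larger parallax angle when borrowing the other side's image.
    """
    if own_camera_ok:
        return ("own", False)    # normal operation
    if other_camera_ok:
        return ("other", True)   # borrow the other lamp's image via CAN
    return (None, False)         # no usable image: fall back, e.g. to low beam
```

Each lamp ECU would evaluate this whenever its abnormality detection unit changes state, with `other_camera_ok` inferred from the presence or absence of the other side's abnormality command signal on the CAN bus.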
 In the embodiment described above, a camera is built into each of the left and right headlamps L-HL and R-HL. In a lower-grade automobile, however, a camera might be built into only one headlamp. In that case, the parallax angle between the imaging optical axis of the camera and the irradiation optical axis of the lamp unit of the other headlamp, which has no camera, becomes a problem. If ADB control of the other headlamp is performed based on the image obtained from the camera of the one headlamp, other vehicles may be dazzled. In particular, an image captured by the camera of the road-shoulder-side headlamp has difficulty capturing an oncoming vehicle in the oncoming lane near the host vehicle.
 FIGS. 10A to 10C show an example of left-hand traffic. In the driving situation of FIG. 10A, where a preceding vehicle CAR1 and an oncoming vehicle CAR2 are present, FIG. 10B shows the image captured by the camera 2R of the right headlamp R-HL on the oncoming-lane side, and FIG. 10C shows the image captured by the camera 2L of the left headlamp L-HL on the road-shoulder side. As in this example, the camera 2L of the shoulder-side left headlamp L-HL may be unable to reliably capture the oncoming vehicle CAR2 near the host vehicle, making its detection difficult.
 Consequently, ADB control based on the image of FIG. 10B produces a light distribution pattern P1 like the stippled broken-line region, whereas ADB control based on the image of FIG. 10C produces a light distribution pattern P2 like the broken line. Applying the light distribution pattern P1 does not dazzle the oncoming vehicle CAR2, but applying the light distribution pattern P2 dazzles it, as shown by the broken-line region in FIG. 10A.
 Therefore, the camera is built into the oncoming-lane-side headlamp, from which an oncoming vehicle near the host vehicle in the oncoming lane can be reliably captured and easily detected, that is, the right headlamp in a left-hand-traffic region such as Japan, and ADB control is executed in each of the left and right headlamps based on the image captured by this camera.
 When a camera is built into only one headlamp in this way, however, a right-hand-traffic region such as Europe requires the camera to be built into the left headlamp. The left and right headlamps must then be designed individually for each region, which may increase cost. Moreover, when the same automobile travels across regions with different traffic systems, situations arise in which ADB control is performed based on the image captured by the camera built into the shoulder-side headlamp, degrading the accuracy of the ADB control.
 In this embodiment, as shown in FIG. 3, the camera 2 includes a camera body 21 and a camera holder 22. The camera body 21 is supported on the base plate 5 via the camera holder 22 and is electrically connected to the lamp ECU 6. If both the left and right headlamps L-HL and R-HL are configured in this way, a single camera body 21 can be attached to or detached from the camera holder 22 provided in either headlamp. In a left-hand-traffic region, ADB control can thus be executed by attaching the camera body 21 to the right headlamp R-HL. In a right-hand-traffic region, ADB control becomes possible by removing the camera body 21 from the right headlamp R-HL and attaching it to the left headlamp L-HL.
 By thus selectively mounting a single camera body in either the left or right headlamp, high-precision ADB control based on the image captured by the camera of the oncoming-lane-side headlamp can always be realized, without replacing the headlamps with ones of different specifications, even when the automobile travels across regions with different traffic systems.
 In the above embodiment, a lamp ECU is provided in each of the left and right headlamps, but a single lamp ECU may instead be connected to the lamp units and cameras of both headlamps by, for example, a LIN (Local Interconnect Network). In that case, the external aiming adjuster is connected to the LIN during aiming adjustment, and the input and output of image signals between the left and right headlamps is performed via this single lamp ECU and the LIN.
 Further, although one lamp unit is provided in the lamp housing in the embodiment, one or more additional lamp units, such as a clearance lamp or a turn signal lamp, may also be housed therein.
 In the above description of the embodiments, ADB control has been shown as an example of the light distribution control of the present disclosure, but the disclosure can of course be applied equally to headlamps that perform the aforementioned AHB control or other forms of light distribution control.
(Vehicle detection method and vehicle detection device)
 Next, an embodiment of the present disclosure will be described with reference to the drawings. FIG. 11 is a schematic horizontal sectional view of an embodiment in which the vehicle detection device according to the present disclosure is incorporated in the headlamps HL1 provided at the front left and right of a vehicle body 108 of an automobile. FIG. 11 shows the internal structure of the right headlamp HL1, but the left headlamp has the same configuration. The lamp housing 103 of the headlamp HL1 includes a lamp body 1031 and a translucent front cover 1032. A lamp unit 101 and a camera 102 are housed inside this lamp housing 103 and are supported on a base plate 105. The optical axis directions of the lamp unit 101 and the camera 102 can each be adjusted by an aiming mechanism 104 including aiming screws 1041 and the like.
 The lamp unit 101 includes a light source 1011 in which a plurality of LEDs, shown in simplified form, are arrayed, and a projection lens 1012 that projects the light emitted from the light source 1011 toward the front of the automobile. By controlling the light emission of the light source 1011, the lamp unit 101 can illuminate the area ahead of the automobile with a desired light distribution.
 The camera 102 is the imaging means of the present disclosure and is, for example, a digital camera including an imaging element not shown in the figure. The camera 102 images the area ahead of the host vehicle, at least an area including the area illuminated by the lamp unit 101, and outputs an imaging signal. The objects captured by the camera 102 include signs and billboards present in the area in which the host vehicle travels, and vehicles ahead such as preceding and oncoming vehicles.
 A lamp ECU 106 is also housed in the lamp housing 103 and is connected to the lamp unit 101 and the camera 102. The lamp ECU 106 can set a suitable light distribution based on the image obtained by the camera 102 and, based on this setting, can control the light distribution of the lamp unit 101, for example by ADB control. The lamp ECU 106 is also connected via a CAN (Controller Area Network) 10100 to a vehicle ECU and the like, not shown in the figure, and exchanges required signals with them.
 As shown in the block diagram of FIG. 12, the lamp ECU 106 includes a signal processing unit 1061 that processes the imaging signal output from the imaging element of the camera 102 to generate an image signal as image data, an image analysis unit 1062 that analyzes this image signal to form an image and detects vehicles present in the image, and a lighting control unit 1063 that recognizes the detected vehicles and controls the light distribution of the lamp unit 101.
 In the lamp ECU 106, the signal processing unit 1061 and the image analysis unit 1062 together perform the operations for detecting a vehicle, so they constitute vehicle detection means in the broad sense; in the narrow sense of directly detecting the vehicle, the image analysis unit 1062 constitutes the vehicle detection means. The lamp ECU 106 thus executes ADB control by detecting vehicles in the image analysis unit 1062, that is, in the vehicle detection means, based on the image signal obtained by the camera 102, and by controlling the light distribution of the lamp unit 101 in the lighting control unit 1063 so as to shade the areas where the detected vehicles exist.
 As shown in FIG. 12, the image analysis unit 1062 serving as the vehicle detection means includes a light spot extraction unit 1064 that extracts light spots present in the image, a light spot identification unit 1065 that identifies the shapes of the extracted light spots and removes light spots other than those of vehicles, and a vehicle detection unit 1066 that detects vehicles by judging the behavior of the identified light spots, here their movement within the image.
 The vehicle detection operation of the image analysis unit 1062 as the vehicle detection means will now be described for the case in which the image of the forward area captured by the camera 102 during night driving, while the ADB control of the headlamps HL1 is active, is in the state of FIG. 13. In this image, a preceding vehicle CAR11 and oncoming vehicles CAR12 and CAR13, with their lamps lit, are traveling on a road on which white lines (including yellow lines) LINE are drawn. Along the roadside, lighting facilities such as a road illumination lamp B and a delineator D are installed, together with a rectangular road sign S1 and a rhombic road sign S2 whose sign faces are illuminated. Here, various billboards are also regarded as signs.
 An image signal (image data) is obtained by imaging this forward area with the camera 102 and processing the imaging signal in the signal processing unit 1061, and the image shown in FIG. 14 is obtained from this image signal. The image consists of bright and dark pixels (light dots) corresponding to the imaging element of the camera 102, arranged in the X direction (row direction) and the Y direction (column direction), and the captured vehicles, lighting facilities, signs, and so on appear in it as light spots. Here, for convenience, the dark background is shown as white and the light spots are stippled.
 Normally, the light spots LP1 to LP3 from the headlamps or tail lamps of the vehicles CAR11 to CAR13, and the light spots LP4 and LP5 from the lighting facilities B and D, are circular or nearly circular, because the camera 102 images the light emitters directly. The light spots LP6 and LP7 from the signs S1 and S2 take the shapes of the signs themselves, that is, rectangles (squares or oblongs) or rhombi, because the camera 102 images their high-luminance faces illuminated by light from the emitters. The white line LINE on the road surface becomes an elongated light spot LP8.
 The light spot extraction unit 1064 extracts from this image the light spots that are candidates for inspection. For example, by setting a luminance value of a predetermined level as a threshold, the light spot extraction unit 1064 detects regions brighter than this threshold. Low-luminance light spots produced merely by light reflected from objects, such as the road surface illuminated by the host vehicle's own headlamps, are thereby excluded, and light spots from signs with light emitters and from vehicles with lamps are detected. In the case of FIG. 14, all such light spots are detected.
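Thresholding followed by grouping of connected bright pixels is one straightforward way to realize this extraction step. The sketch below is illustrative only; the disclosure does not specify the grouping algorithm, and a real implementation would likely also discard tiny regions caused by sensor noise.

```python
from collections import deque

def extract_light_spots(image, threshold):
    """Extract candidate light spots from a 2-D list of luminance values.

    Pixels at or above the threshold are grouped by 4-connectivity;
    each returned spot is a list of (y, x) pixel coordinates. Dim
    reflections, e.g. road surface lit by the host vehicle's own
    headlamps, fall below the threshold and are excluded.
    """
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    spots = []
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and not seen[y][x]:
                seen[y][x] = True
                queue, pixels = deque([(y, x)]), []
                while queue:  # breadth-first flood fill of one spot
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                spots.append(pixels)
    return spots
```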
 For each detected light spot, the light spot extraction unit 1064 further sets a light spot region in xy coordinates in the image. For the rectangular light spot LP6, a rectangular light spot region indicated by the diagonal coordinates (xa, ya) to (xb, yb) is set, as shown by the broken line in FIG. 15A. The same applies to the rhombic light spot LP7 shown in FIG. 15B, the circular light spots LP1 to LP5 shown in FIG. 15C, and the white-line light spot LP8 shown in FIG. 15D: each light spot region is set from the extreme coordinates (xm, ym) to (xn, yn) in the x and y directions. In these figures, n and m are values set individually for each light spot.
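The light spot region is simply the axis-aligned bounding box of the spot's pixels. A minimal sketch (the function name is illustrative):

```python
def spot_bounding_box(pixels):
    """Light spot region from a list of (y, x) pixels.

    Corresponds to the diagonal coordinates (xa, ya)-(xb, yb) of
    FIG. 15A: the extreme x and y values over the spot's pixels.
    Returns (x_min, y_min, x_max, y_max).
    """
    ys = [p[0] for p in pixels]
    xs = [p[1] for p in pixels]
    return (min(xs), min(ys), max(xs), max(ys))
```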
 Next, the light spot identification unit 1065 identifies the shape of the light spot in each of the set light spot regions. For example, as shown by the arrow in FIG. 15A, the luminance of the pixels in one row is measured while scanning in the x direction (row direction) from the left end of the bottom side of the set light spot region, that is, from the position (xa, ya), toward the right. The luminance threshold described above can be used for this measurement. The same is then done for the row above, and this is repeated for several lines in the y direction (column direction), for example two to five rows; in practice, about three rows are sufficient.
 When most of the measured pixels in each row are brighter than the threshold, as in FIG. 15A, the shape of the light spot is identified as rectangular. When the number of pixels brighter than the threshold changes from row to row, the shape of the light spot is identified as rhombic or circular, as in FIG. 15B or FIG. 15C. In this case, the number of measured rows is increased by several, and the rate of change in the number of bright pixels is measured: if the rate of change is large, the spot is identified as the rhombus of FIG. 15B; if it is smaller, the spot is identified as the circle of FIG. 15C.
 When the number of pixels brighter than the threshold is nearly the same in every row but the bright pixels shift gradually to the left or right in the upper rows, the shape of the light spot is identified as linear, as in FIG. 15D. If the amount of positional shift is constant, the shape is identified as a straight line; if the amount of shift itself changes, the shape is identified as a curve.
 In the identification described above, when distinguishing the rhombus of FIG. 15B from the circle of FIG. 15C, several rows may instead be measured stepwise, descending one row at a time from the middle of the left side of the light spot region in the y direction toward the middle of the bottom side in the x direction, or similarly measuring upward in steps from the right side of the region. If all the pixels measured in this way are bright, the spot is more likely to be a rhombus; in the case of a circle, the luminance of the pixels drops partway along the path and bright pixels are measured again only at the end. The circle here includes ellipses.
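The row-scan identification above can be sketched as a small classifier. This is only a sketch under stated assumptions: the 90% fill ratio, the half-width taper cutoff, and the tolerance values are illustrative, as the disclosure gives no concrete numbers.

```python
def classify_spot_shape(bright, box, rows=3):
    """Rough shape identification by row scanning.

    bright: 2-D list of booleans (True = pixel above the luminance
    threshold); box: (x_min, y_min, x_max, y_max) light spot region.
    Scans a few rows upward from the bottom side of the region and
    compares the bright-pixel count and position per row.
    """
    x0, y0, x1, y1 = box
    width = x1 - x0 + 1
    counts, starts = [], []
    for i in range(min(rows, y1 - y0 + 1)):
        row = bright[y1 - i][x0:x1 + 1]          # bottom row, scanning upward
        idx = [j for j, v in enumerate(row) if v]
        counts.append(len(idx))
        starts.append(idx[0] if idx else 0)
    if all(c >= 0.9 * width for c in counts):
        return "rectangle"                       # every row almost fully bright
    if max(counts) - min(counts) <= 1 and max(starts) - min(starts) >= 1:
        return "line"                            # same width, drifting position
    # width changes row to row: rhombus if the change is steep, else circle
    taper = abs(counts[0] - counts[-1])
    return "rhombus" if taper >= width // 2 else "circle"
```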
 When a light spot is identified as rectangular or rhombic, it is very likely a light spot from a sign, so it is tagged as a "non-target light spot" that is not subject to detection. A light spot identified as linear is likewise considered to come from a white line on the road and is tagged as a "non-target light spot". A light spot identified as circular, on the other hand, is very likely from a vehicle and is tagged as a "target light spot" subject to detection. The light spot identification unit 1065 outputs the tagged light spots to the vehicle detection unit 1066. FIG. 16 shows the light spots tagged as "target light spots". Here, the light spots LP6 and LP7 are tagged as "non-target light spots" and are excluded from extraction.
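The tagging rule reduces to a one-line mapping from shape to tag (the string labels are illustrative):

```python
def tag_light_spot(shape):
    """Tagging rule from the text: circles (likely vehicle lamps) are
    detection targets; rectangles and rhombi (likely signs) and lines
    (likely lane markings) are not.
    """
    return "target" if shape == "circle" else "non-target"
```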
 Among the identified light spots, the vehicle detection unit 1066 detects vehicles from those of FIG. 16 tagged as "target light spots" by judging their behavior, here their movement within the image. As in Patent Literature 2, for example, this detection method tracks the change in the position of each light spot relative to the host vehicle over time, that is, its moving direction and moving speed, and detects preceding or oncoming vehicles traveling on the road based on the result. FIG. 17 shows the light spots of the vehicles detected in this way: the light spots LP1 and LP2 are detected as the preceding vehicle CAR11 and the oncoming vehicle CAR12. The light spot LP3 of the oncoming vehicle CAR13 is small, from which it can be detected that the vehicle is far away; it is excluded because there is currently little risk that the host vehicle's headlamps will dazzle it.
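The per-spot tracking step amounts to computing a displacement and speed between frames. A simplified sketch (association of spots across frames, which a real tracker also needs, is omitted; names are illustrative):

```python
def spot_motion(prev_center, curr_center, dt):
    """Per-frame motion of a tracked light spot.

    prev_center, curr_center: (x, y) spot centers in image coordinates
    (y grows downward); dt: time between frames in seconds. Returns
    the (dx, dy) displacement and the speed in pixels per second.
    """
    dx = curr_center[0] - prev_center[0]
    dy = curr_center[1] - prev_center[1]
    speed = (dx * dx + dy * dy) ** 0.5 / dt
    return (dx, dy), speed
```

The sign of `dy` and the magnitude of `speed`, accumulated over successive frames, are the quantities from which the moving direction and moving speed referred to in the text would be judged.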
 The information on the oncoming and preceding vehicles detected by the vehicle detection unit 1066 is output to the lighting control unit 1063. Based on this information, the lighting control unit 1063 sets a light distribution that does not dazzle the preceding vehicle CAR11 or the oncoming vehicle CAR12 and executes the light distribution control in the lamp unit 101, whereby suitable ADB control is performed.
 As described above, in the embodiment, the light spot identification unit 1065 identifies, from the image captured by the camera 102, the light spots that are likely to be vehicles, and the vehicle detection unit 1066 executes the vehicle detection processing only for these identified light spots. Even when a large number of light spots are present in the captured image, therefore, no detection processing is needed for light spots that are clearly not, or are unlikely to be, from vehicles, which reduces the processing load of vehicle detection and enables rapid detection.
 In detecting vehicles, and particularly oncoming vehicles, it is preferable to correct the moving direction of the light spots. In the embodiment, the camera 102 is provided integrally with the headlamp HL1 and is therefore at substantially the same height as, and in some cases lower than, the headlamps of oncoming vehicles. The moving direction of an oncoming vehicle in the captured image consequently differs from that observed when the camera is disposed higher than the oncoming vehicle's lamps, for example on the front windshield or a side mirror of the automobile.
 FIG. 18A is the image obtained when the camera is higher than the headlamps of the oncoming vehicle CAR12, as when it is disposed at the top of the host vehicle's front windshield. In this case, the light spot LP2 from the headlamps of the oncoming vehicle CAR12 moves toward the lower right relative to the horizontal line H in the image, as shown by the arrow.
 FIG. 18B is the image obtained when, as in this embodiment, the camera is integrated with the headlamp and is therefore at substantially the same height as, or lower than, the headlamps of the oncoming vehicle CAR12. In this case, the light spot LP2 moves to the right along the horizontal line H in the image, or only slightly below it toward the lower right.
 The vehicle detection unit 1066 of the embodiment can therefore detect a light spot that moves horizontally in the image, or toward the lower right below horizontal, as the light spot of an oncoming vehicle. When the camera 102 on the host vehicle is tilted in the roll direction (the direction of rotation about the longitudinal axis of the automobile), however, for example when the camera 102 is mounted tilted down to the left, or when the vehicle body leans in that direction because of an uneven load, the image captured by the camera 102 is also tilted down to the left, as in FIG. 18C.
 In such an image, the light spot LP2 of the headlamps of the oncoming vehicle CAR12 moves toward the upper right relative to the horizontal line H of the image. If light spots were detected only from their moving direction in the image, the light spot LP2 of the oncoming vehicle could therefore be misdetected as a non-vehicle light spot, for example a sign or a road illumination lamp. For this reason, vehicle detection has required detecting not only the moving direction of a light spot in the image but also its moving speed.
To address this, the vehicle detection unit 1066 refers to the light spots that the light spot identification unit 1065 has classified as "non-target light spots". In the example of FIG. 18C, the vehicle detection unit 1066 refers to the light spot LP6 of the sign S1, which was identified as rectangular, and sets the direction of a horizontal side of the detected rectangle, that is, its bottom or top side, as a corrected horizontal direction DH. The vehicle detection unit 1066 then re-expresses the movement direction of the light spot LP2 of the oncoming vehicle CAR12, indicated by the arrow in the image, relative to this corrected horizontal direction DH.
Since the horizontal sides of the rectangular sign S1 are normally aligned with the true horizontal, this correction allows the vehicle detection unit 1066 to replace the image's horizontal axis with the corrected horizontal direction DH even when the camera 102 or the vehicle body is tilted in the roll direction. When the movement direction of the light spot LP2 of the oncoming vehicle CAR12 is then evaluated against this corrected horizontal direction DH, the vehicle detection unit 1066 can determine that the light spot LP2 is moving horizontally or toward the lower right, and can thus detect the oncoming vehicle CAR12 accurately. In other words, the vehicle can be detected correctly from the movement direction alone, without measuring the movement speed of the light spot, which simplifies the processing and speeds up detection.
If multiple rectangular light spots are identified in the image, the bottom or top sides of the rectangles may be averaged to obtain the reference horizontal direction. Alternatively, the bottom or top side of the largest rectangular light spot, whose shape is easiest to identify with high accuracy, may be used. If no rectangular light spot is identified, the reference horizontal direction may be set from a light spot of another shape; for a diamond-shaped light spot, for example, its horizontal diagonal may serve as the reference horizontal direction. If the vehicle is equipped with a vehicle height sensor or the like and the roll angle of the host vehicle can be measured, the movement direction of a light spot in the image may instead be corrected based on the detected roll angle.
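The correction in the preceding paragraphs can be sketched as a small rotation of each motion vector. This is a hedged illustration under assumed conventions, not the patent's implementation: edge angles are hypothetical values in degrees from the image x-axis, obtained from whatever rectangle detector is in use, and averaging them follows the option described above.

```python
import math

def corrected_horizontal_deg(rect_edge_angles_deg):
    """Estimate the roll-corrected horizontal direction DH by averaging the
    bottom/top edge angles (degrees from the image x-axis) of the detected
    rectangular sign light spots."""
    return sum(rect_edge_angles_deg) / len(rect_edge_angles_deg)

def motion_relative_to_horizontal(dx, dy, horizontal_deg):
    """Rotate a spot's motion vector so the corrected horizontal becomes the
    x-axis, undoing the camera/body roll before the direction test."""
    roll = math.radians(horizontal_deg)
    rx = dx * math.cos(roll) + dy * math.sin(roll)
    ry = -dx * math.sin(roll) + dy * math.cos(roll)
    return rx, ry
```

A motion vector lying exactly along the tilted horizon maps to a purely rightward vector after the rotation, so the simple horizontal/lower-right test can then be applied unchanged.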
In the present disclosure, the light spot LP8 of the white line LINE identified by the light spot identification unit 1065 may also be used to further improve the accuracy of vehicle detection, in particular the detection of oncoming vehicles. In this embodiment, the camera 102 is integrated into the headlamp HL1 and therefore sits lower, closer to the road surface, than a camera mounted on the windshield or a side mirror, so it can image the white line LINE painted on the road more clearly. In particular, because the white line LINE is a light spot obtained by imaging light reflected from the headlamps of the oncoming vehicle and the host vehicle, it is normally difficult to capture at high brightness; mounting the camera in the headlamp, however, allows it to be captured as a high-brightness light spot.
As a result, both the somewhat distant white line LINE illuminated by the host vehicle's headlamp and the nearby white line illuminated by the headlamp HL1 of the oncoming vehicle CAR12 can each be captured clearly as high-brightness light spots LP8, making recognition of the white line LINE easy and accurate. This in turn makes it possible to accurately recognize whether the road on which the host vehicle is traveling is straight or curved.
Once the road state can be recognized accurately, the vehicle detection unit 1066 can restrict vehicle detection to those "target light spots" identified by the light spot identification unit 1065 that lie near the recognized white-line light spot LP8, sit along its extension direction, and move in that direction. By performing vehicle detection based on movement direction, movement speed, and so on only for these filtered light spots, the accuracy of vehicle detection is improved while the processing steps are simplified, allowing faster detection.
Furthermore, by recognizing the road state from the white-line light spot LP8 in this way, a vehicle can be detected accurately merely by confirming that a light spot's movement in the image is close to the white-line light spot LP8 and directed along its extension. That is, even when the host vehicle rolls and the camera and vehicle body are tilted, accurate vehicle detection remains possible without considering the corrected horizontal direction described with reference to FIG. 18C, avoiding the corresponding increase in processing complexity.
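The white-line filter described above can be sketched as a geometric test. This is a minimal, assumed formulation, not the patent's code: the white line is approximated locally by a point and a unit direction vector, each candidate spot carries a hypothetical position and per-frame velocity, and the distance and angle thresholds are placeholders.

```python
import math

def spots_along_white_line(spots, line_point, line_dir, max_dist, angle_tol_deg=15.0):
    """Keep only target light spots that lie near the recognized white line
    and whose motion follows the line's extension direction.

    spots: list of dicts with 'pos' (x, y) and 'vel' (dx, dy).
    line_point, line_dir: a point on the white line and its unit direction.
    """
    ux, uy = line_dir
    kept = []
    for s in spots:
        px = s['pos'][0] - line_point[0]
        py = s['pos'][1] - line_point[1]
        dist = abs(px * uy - py * ux)          # perpendicular distance to the line
        if dist > max_dist:
            continue                           # not near the white line
        dx, dy = s['vel']
        speed = math.hypot(dx, dy)
        if speed == 0:
            continue                           # stationary spots are not vehicles here
        cos_angle = abs(dx * ux + dy * uy) / speed
        if cos_angle >= math.cos(math.radians(angle_tol_deg)):
            kept.append(s)                     # moving along the line's extension
    return kept
```

Only the spots surviving this filter would then be passed to the movement-direction or movement-speed tests, which is what reduces the processing load.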
When the white line is curved, that is, when the road is a curved road, the vehicle detection unit 1066 calculates the curvature from the shape of the light spot LP8 as identified by the light spot identification unit 1065. When a target light spot in the image is moving laterally, the vehicle detection unit 1066 may then correct its measured movement, for example by subtracting or adding a movement amount based on the direction of the curve and the calculated curvature. This makes it possible to detect the vehicle accurately from that light spot.
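One plausible realization of this curvature correction, offered only as an illustrative sketch and not the patent's method, is to fit the white-line pixels with a quadratic and remove the lateral drift it predicts. All names here are hypothetical, the curvature is a small-slope pixel-space approximation, and `dy` is the spot's assumed longitudinal motion in the image.

```python
import numpy as np

def lane_curvature(xs, ys):
    """Fit x = a*y^2 + b*y + c to white-line pixel coordinates and return the
    signed second derivative 2*a, a pixel-space curvature approximation
    (valid when the line's slope is small)."""
    a, _b, _c = np.polyfit(ys, xs, 2)
    return 2.0 * a

def corrected_dx(dx, dy, xs, ys):
    """Subtract the lateral image motion implied by the curve over the spot's
    longitudinal motion dy, leaving the spot's own lateral movement."""
    return dx - lane_curvature(xs, ys) * dy
```

The sign of the returned curvature encodes the curve direction, so the subtraction covers both the "subtract" and "add" cases mentioned above.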
In the embodiments above, the vehicle detection device of the present disclosure is used for light distribution control by ADB control of a headlamp, but it may also be used for AHB control or other light distribution control. Moreover, since it can detect vehicles quickly and accurately, it can also be applied to automated driving control of an automobile.
The vehicle detection device of the present disclosure can also be configured to image the regions to the side of and behind the automobile and to detect other vehicles present in those regions. In that case, the camera may be integrated into a side mirror or a tail lamp.
The contents disclosed in the Japanese patent application filed on March 15, 2018 (Japanese Patent Application No. 2018-048049) and the Japanese patent application filed on March 15, 2018 (Japanese Patent Application No. 2018-048050) are incorporated herein by reference as appropriate.

Claims (16)

  1.  A vehicle lamp comprising: a lamp unit that performs light irradiation; imaging means that images at least the light irradiation region of the lamp unit; and control means connected to a signal bus disposed in a vehicle for controlling the lamp unit and the imaging means, the vehicle lamp performing aiming adjustment of the lamp unit by signal-processing an imaging signal captured by the imaging means, wherein the control means outputs the signal-processed signal to the signal bus when a predetermined command signal is input.
  2.  The vehicle lamp according to claim 1, wherein the signal-processed signal is an image signal based on the imaging signal or an aiming adjustment signal.
  3.  The vehicle lamp according to claim 2, wherein an aiming adjuster is connectable to the signal bus, and the aiming adjuster outputs the command signal to the signal bus, acquires the signal-processed signal output to the signal bus, and either displays the aiming state of the lamp unit based on the acquired signal or executes aiming adjustment based on the acquired signal.
  4.  The vehicle lamp according to any one of claims 1 to 3, wherein the camera and the lamp unit are housed in a lamp housing, and the camera is detachably supported with respect to the lamp housing.
  5.  The vehicle lamp according to claim 4, wherein vehicle lamps are disposed on both sides of the vehicle, and the camera is disposed in a selected one of the vehicle lamps.
  6.  The vehicle lamp according to any one of claims 1 to 5, wherein the control means comprises other-vehicle detection means that detects another vehicle based at least on an imaging signal captured by the camera, and the other-vehicle detection means comprises an image analysis unit machine-trained on automatically generated driving simulation data.
  7.  The vehicle lamp according to claim 6, wherein the driving simulation data comprises automatically generated driving video data and teacher data.
  8.  A vehicle detection method for imaging the surroundings of a vehicle from equipment mounted on the vehicle and detecting a vehicle from the behavior of light spots captured in the resulting image, the method comprising: recognizing the shapes of a plurality of light spots present in the image; identifying, among the recognized shapes, light spots subject to vehicle detection as target light spots and the remaining light spots as non-target light spots; and detecting the behavior of the target light spots in the image and detecting a light spot exhibiting a predetermined behavior as a vehicle.
  9.  The vehicle detection method according to claim 8, wherein the shapes of the non-target light spots are referred to when detecting the behavior of the target light spots.
  10.  A vehicle detection device comprising: imaging means mounted on a vehicle for imaging the surroundings of the vehicle; and detection means for detecting a vehicle from light spots captured in an image taken by the imaging means, the detection means comprising: a light spot identification unit that distinguishes, among a plurality of light spots present in the image, target light spots subject to vehicle detection from the remaining non-target light spots; and a vehicle detection unit that detects a vehicle from the behavior of the identified target light spots in the image, wherein the light spot identification unit recognizes the shapes of the plurality of light spots and distinguishes the target light spots from the non-target light spots based on this recognition.
  11.  The vehicle detection device according to claim 10, wherein the light spot identification unit identifies rectangular, diamond-shaped, or linear light spots as non-target light spots, and identifies circular or nearly circular light spots as target light spots.
  12.  The vehicle detection device according to claim 10 or 11, wherein the vehicle detection unit detects a vehicle based on the movement direction of a target light spot in the image.
  13.  The vehicle detection device according to claim 12, wherein the vehicle detection unit corrects the movement direction of the target light spot based on a shape recognized from a non-target light spot.
  14.  The vehicle detection device according to claim 13, wherein, when the shape recognized from a non-target light spot is a rectangle, the vehicle detection unit sets the direction of the bottom or top side of the rectangle as a reference horizontal direction and corrects the movement direction of the target light spot based on this reference horizontal direction.
  15.  The vehicle detection device according to claim 12, wherein the vehicle detection unit recognizes a white line on the road from a linear non-target light spot and detects a light spot moving in a direction along the white line as a vehicle.
  16.  The vehicle detection device according to any one of claims 10 to 15, wherein the imaging means is provided integrally with a lamp of the vehicle.
PCT/JP2019/004994 2018-03-15 2019-02-13 Vehicular lamp, vehicle detection method, and vehicle detection device WO2019176418A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2018048050A JP2019156277A (en) 2018-03-15 2018-03-15 Vehicle lamp
JP2018-048050 2018-03-15
JP2018048049A JP2019156276A (en) 2018-03-15 2018-03-15 Vehicle detection method and vehicle detection device
JP2018-048049 2018-03-15

Publications (1)

Publication Number Publication Date
WO2019176418A1 true WO2019176418A1 (en) 2019-09-19

Family

ID=67908305

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/004994 WO2019176418A1 (en) 2018-03-15 2019-02-13 Vehicular lamp, vehicle detection method, and vehicle detection device

Country Status (1)

Country Link
WO (1) WO2019176418A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114235343A (en) * 2021-12-03 2022-03-25 常州青葵智能科技有限公司 LED car lamp dynamic image detection system based on LIN bus
CN114240931A (en) * 2021-12-31 2022-03-25 昆山青眼自动化技术有限公司 Full-automatic carpet light type position adjustment device

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009147799A1 (en) * 2008-06-04 2009-12-10 株式会社小糸製作所 Headlight aiming system
JP2012020662A (en) * 2010-07-15 2012-02-02 Koito Mfg Co Ltd Vehicle detector and headlight control device equipped with the same
JP2013028239A (en) * 2011-07-27 2013-02-07 Denso Corp Light detection device, light detection program and light control device
JP2013067229A (en) * 2011-09-21 2013-04-18 Denso Corp Device for detecting light, program for detecting light, and device for controlling light
JP2013147138A (en) * 2012-01-19 2013-08-01 Koito Mfg Co Ltd Light distribution controller of vehicle lamp
JP2013244890A (en) * 2012-05-28 2013-12-09 Denso Corp Vehicle light source detection device and vehicle light source detection program
KR20140104857A (en) * 2013-02-21 2014-08-29 주식회사 만도 Image stabilization method for vehicle camera and image processing apparatus usnig the same
JP2015195018A (en) * 2014-03-18 2015-11-05 株式会社リコー Image processor, image processing method, operation support system, and program
JP2015202756A (en) * 2014-04-14 2015-11-16 株式会社小糸製作所 Control device for vehicle lamp
US20160097493A1 (en) * 2014-10-02 2016-04-07 Taylor W. Anderson Method and apparatus for a lighting assembly with an integrated auxiliary electronic component port
JP2017088124A (en) * 2015-11-17 2017-05-25 株式会社小糸製作所 Vehicle lighting system




Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19768208

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19768208

Country of ref document: EP

Kind code of ref document: A1