WO2019172117A1 - Sensor system, and image data generating device - Google Patents

Sensor system, and image data generating device

Info

Publication number
WO2019172117A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
signal
vehicle
sensor unit
camera
Prior art date
Application number
PCT/JP2019/008084
Other languages
French (fr)
Japanese (ja)
Inventor
重之 渡邉
裕一 綿野
Original Assignee
株式会社小糸製作所
Priority date
Filing date
Publication date
Application filed by 株式会社小糸製作所
Priority to JP2020504983A (JP7288895B2)
Publication of WO2019172117A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle, with a predetermined field of view
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/93 Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/40 Means for monitoring or calibrating
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules

Definitions

  • The present disclosure relates to a sensor system mounted on a vehicle.
  • The present disclosure also relates to an image data generation device mounted on a vehicle.
  • "Driving support" means a control process that at least partially performs at least one of the driving operation (steering, acceleration, deceleration), monitoring of the driving environment, and backup of the driving operation.
  • This ranges from partial driving assistance, such as a collision damage reduction braking function or a lane keeping assist function, to fully automated driving.
  • Patent Document 2 discloses an example of such an electronic mirror technique.
  • The first problem addressed by the present disclosure is to suppress an increase in the signal processing load required for driving support of a vehicle.
  • The second problem addressed by the present disclosure is to maintain the detection accuracy of a plurality of sensors required for driving support of a vehicle.
  • The third problem addressed by the present disclosure is to provide an electronic mirror technology with improved rear visibility.
  • One aspect for achieving the first problem is a sensor system mounted on a vehicle, comprising: at least one sensor unit that detects information outside the vehicle and outputs a signal corresponding to the information; and a signal processing device that processes the signal to generate data corresponding to the information. The signal processing device acquires reference height information indicating a reference height determined based on the pitch angle of the vehicle and, based on the reference height information, generates the data using the signal associated with a region corresponding to the reference height within the detection range of the sensor unit.
  • Ideally, the detection reference direction of the sensor unit in the vertical direction of the vehicle matches the reference height.
  • However, since the detection reference direction of the sensor unit shifts in the vertical direction of the vehicle as the pitch angle of the vehicle changes, the detection range of the sensor unit is commonly set to have redundancy in the vertical direction of the vehicle.
  • With the above configuration, the signal processing device specifies the region using the reference height information determined separately based on the pitch angle of the vehicle. Therefore, the region containing the information necessary for driving support can be specified easily and reliably regardless of the pitch angle of the vehicle. Furthermore, since only the signal associated with the region, which is a part of the redundant detection range, is subjected to the data generation processing for acquiring external information, an increase in the signal processing load required for driving support of the vehicle can be suppressed.
  • the above sensor system can be configured as follows.
  • a leveling adjustment mechanism that adjusts the detection reference direction of the sensor unit in the vertical direction of the vehicle based on the reference height information is provided.
  • the above sensor system can be configured as follows.
  • The at least one sensor unit includes a first sensor unit that detects first information outside the vehicle and outputs a first signal corresponding to the first information, and a second sensor unit that detects second information outside the vehicle and outputs a second signal corresponding to the second information. The signal processing device processes the first signal to generate first data corresponding to the first information, and processes the second signal to generate second data corresponding to the second information.
  • Based on the reference height information, the signal processing device processes, as the first signal, a signal associated with the region corresponding to the reference height in the detection range of the first sensor unit, and processes, as the second signal, a signal associated with the region corresponding to the reference height in the detection range of the second sensor unit.
  • In this case, the first data is generated based on the first signal output from the first sensor unit, and the second data is generated based on the second signal output from the second sensor unit. That is, the generated first data and second data both include information associated with the reference height, so both can easily be used for driving assistance in an integrated manner. Moreover, both the first data and the second data are generated using only the signals associated with a region that is a part of the detection range of each sensor unit. Therefore, even if both data are used in an integrated manner, an increase in the processing load of the signal processing device can be suppressed.
  • the above sensor system can be configured as follows.
  • The signal processing device associates the data with map information.
  • In this case, the data generated by the signal processing device and the map information can be used for driving support in an integrated manner.
  • Since an increase in the signal processing load related to data generation is suppressed, an increase in the processing load of integrated driving support combining the data and the map information can also be suppressed as a whole.
  • the above sensor system can be configured as follows.
  • the data is associated with information corresponding to the reference height in the map information.
  • map information can be converted into two-dimensional information, so that the amount of data used for integrated driving support can be reduced. Therefore, it is possible to further suppress an increase in processing load in integrated driving support.
  • the above sensor system can be configured as follows.
  • The sensor system may further include a lamp unit and a lamp housing that defines a lamp chamber housing the lamp unit, with the sensor unit disposed in the lamp chamber.
  • Since the lamp unit has a function of supplying light to the outside of the vehicle, it is generally arranged in a place on the vehicle with little shielding. By arranging the sensor unit in such a place, information outside the vehicle can be acquired efficiently.
  • When the reference height information is acquired from the auto leveling system of the vehicle, the information can be shared with the lamp unit. In this case, an efficient system design is possible.
  • the above sensor system can be configured as follows.
  • The at least one sensor unit may be at least one of a LiDAR sensor unit, a camera unit, and a millimeter wave sensor unit.
  • One aspect for achieving the second problem is a sensor system mounted on a vehicle, comprising: a plurality of sensor units that detect information outside the vehicle and each output a signal corresponding to the information; and a signal processing device that acquires the signals. The plurality of sensor units include a first sensor unit that detects first information outside the vehicle and outputs a first signal corresponding to the first information, and a second sensor unit that detects second information outside the vehicle and outputs a second signal corresponding to the second information. The signal processing device determines, based on the first signal and the second signal, whether the first information and the second information include the same reference target, and, when it is determined that they do, detects the difference between the position of the reference target specified by the first information and the position of the reference target specified by the second information.
  • When the magnitude of the difference exceeds a predetermined value, the user can be notified.
  • The notified user can take appropriate measures to correct the misalignment. Therefore, the detection accuracy of the plurality of sensor units required for driving support of the vehicle can be maintained.
  • the above sensor system can be configured as follows.
  • The signal processing device may acquire the change over time of the position of the reference target specified by the first information and of the position of the reference target specified by the second information, and specify a sensor unit that requires correction based on that change over time.
  • The sensor unit that needs correction can thus be specified by monitoring the change over time of the position of the reference target specified by each sensor unit. Therefore, the detection accuracy of the plurality of sensor units required for driving support of the vehicle can be maintained more easily.
  • the above sensor system can be configured as follows.
  • The plurality of sensor units may include a third sensor unit that detects third information outside the vehicle and outputs a third signal corresponding to the third information.
  • The signal processing device determines, based on the first signal, the second signal, and the third signal, whether the first information, the second information, and the third information include the same reference target. When it is determined that they do, the signal processing device specifies the sensor unit that needs correction based on the differences among the position of the reference target specified by the first information, the position specified by the second information, and the position specified by the third information.
  • The reference target is preferably one that can provide a positional reference in the height direction. This is because a positional reference in the height direction tends to vary relatively little with the running state of the vehicle, which makes it easy to suppress an increase in the processing load of the signal processing device.
  • the above sensor system can be configured as follows.
  • A lamp unit and a lamp housing that defines a lamp chamber housing the lamp unit may be provided, and at least one of the plurality of sensor units is disposed in the lamp chamber.
  • Since the lamp unit has a function of supplying light to the outside of the vehicle, it is generally arranged in a place with little shielding. By arranging a sensor unit in such a place, information outside the vehicle can be acquired efficiently.
  • the above sensor system can be configured as follows.
  • Each of the plurality of sensor units may be at least one of a LiDAR sensor unit, a camera unit, and a millimeter wave sensor unit.
  • The "sensor unit" means a constituent unit of a part that has a desired information detection function and can be distributed by itself as a single unit.
  • The "lamp unit" means a constituent unit of a part that has a desired lighting function and can be distributed by itself as a single unit.
  • One aspect for achieving the third problem is an image data generation device mounted on a vehicle, comprising: an input interface to which a signal is input, the signal corresponding to an image output from at least one camera that acquires an image behind the driver's seat of the vehicle; a processor that generates, based on the signal, first image data corresponding to a first monitoring image displayed on a display device; and an output interface that outputs the first image data to the display device. The processor determines a reference object included in the first monitoring image and determines whether the reference object is included in a predetermined area in the first monitoring image. When it is determined that the reference object is not included in the predetermined area, the processor generates second image data corresponding to a second monitoring image in which the reference object is included in the predetermined area, and outputs the second image data to the display device through the output interface.
  • Either the first image data or the second image data is thus generated according to the position of the reference object, and the first monitoring image or the second monitoring image containing the reference object in the predetermined region can be displayed on the display device continuously. Therefore, a situation in which the driver is prevented from visually recognizing the area behind the vehicle depending on the state of the vehicle can be avoided. That is, an electronic mirror technology with further improved rear visibility can be provided. A minimal sketch of this decision is given below.
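  • As an illustration of the decision just described, the following Python sketch checks whether the reference object lies inside the predetermined area and selects the first or second monitoring image accordingly. The bounding-box representation, the function name, and the containment test are assumptions made for illustration; the disclosure does not prescribe a data format.

```python
def choose_monitoring_image(reference_object_bbox, predetermined_area, first_image, second_image):
    """Return the monitoring image to display, per the third aspect (illustrative sketch).

    reference_object_bbox : (x0, y0, x1, y1) of the detected reference object
    predetermined_area    : (x0, y0, x1, y1) region that must contain the reference object
    first_image / second_image : image data candidates for the first and second monitoring images
    """
    ox0, oy0, ox1, oy1 = reference_object_bbox
    ax0, ay0, ax1, ay1 = predetermined_area
    inside = ax0 <= ox0 and ay0 <= oy0 and ox1 <= ax1 and oy1 <= ay1
    # First image data while the reference object stays within the predetermined area;
    # second image data (wider view, shifted optical axis, or another camera) otherwise.
    return first_image if inside else second_image

# Example: the reference object has drifted outside the area, so the second image is chosen.
print(choose_monitoring_image((90, 10, 130, 40), (0, 0, 100, 80), "first", "second"))
```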
  • the above image data generation device can be configured as follows.
  • The processor may generate the first image data based on the signal corresponding to a first portion of the image, and generate the second image data based on the signal corresponding to a second portion of the image.
  • the above image data generation device can be configured as follows.
  • The processor may generate the first image data based on the image acquired when the angle of view of the camera is a first angle of view, change the angle of view of the camera to a second angle of view wider than the first angle of view, and generate the second image data based on the image acquired when the angle of view of the camera is the second angle of view.
  • Since the second monitoring image is generated by widening the angle of view only when necessary, it is possible both to secure the field of view and to prevent visibility from being degraded.
  • the above image data generation device can be configured as follows.
  • The processor may generate the first image data based on the image acquired when the optical axis of the camera is oriented in a first direction, change the orientation of the optical axis of the camera to a second direction different from the first direction, and generate the second image data based on the image acquired when the optical axis of the camera is oriented in the second direction.
  • the above image data generation device can be configured as follows.
  • The at least one camera may include a first camera and a second camera.
  • The processor may generate the first image data based on the signal acquired from the first camera, and generate the second image data based on the signal acquired from the second camera.
  • the above image data generation device can be configured as follows.
  • The reference object may be designated by the user via the first monitoring image.
  • The reference object may be a part of the rear portion of the vehicle or a part of another vehicle located behind the vehicle.
  • FIG. 1 illustrates the configuration of the sensor system according to the first embodiment. FIG. 2 illustrates an operation example of the sensor system of FIG. 1. FIG. 3 illustrates the positions of the sensor system in the vehicle.
  • FIG. 4 illustrates the configuration of the sensor system according to the second embodiment. FIG. 5 illustrates processing performed by the sensor system of FIG. 4. FIG. 6 illustrates a configuration in which a sensor unit of the sensor system of FIG. 4 is disposed in a lamp chamber.
  • FIG. 7 illustrates the functional configuration of the image data generation device according to the third embodiment.
  • FIG. 8 shows an example of a vehicle on which the above image data generation device is mounted.
  • The operation flow of the above image data generation device is illustrated.
  • An operation result of the above image data generation device is illustrated.
  • Further operations of the above image data generation device are shown.
  • an arrow F indicates the forward direction of the illustrated structure.
  • Arrow B indicates the backward direction of the illustrated structure.
  • Arrow L indicates the left direction of the illustrated structure.
  • Arrow R indicates the right direction of the illustrated structure.
  • “Left” and “right” used in the following description indicate the left and right directions viewed from the driver's seat.
  • FIG. 1 schematically shows a configuration of a sensor system 1 according to the first embodiment.
  • the sensor system 1 is mounted on a vehicle.
  • the sensor system 1 includes a LiDAR sensor unit 11.
  • the LiDAR sensor unit 11 has a configuration for emitting invisible light and a configuration for detecting return light as a result of reflection of the invisible light at least on an object existing outside the vehicle.
  • the LiDAR sensor unit 11 may include a scanning mechanism that sweeps the invisible light by changing the emission direction (that is, the detection direction) as necessary.
  • For example, infrared light having a wavelength of 905 nm can be used as the invisible light.
  • the LiDAR sensor unit 11 can acquire the distance to the object associated with the return light based on, for example, the time from when the invisible light is emitted in a certain direction until the return light is detected. Further, by accumulating such distance data in association with the detection position, information related to the shape of the object associated with the return light can be acquired. In addition to or instead of this, information related to attributes such as the material of the object associated with the return light can be acquired based on the difference between the waveforms of the emitted light and the return light.
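  • The distance calculation mentioned above follows directly from the time of flight of the emitted light. The short sketch below is a minimal illustration (the function name and example values are assumptions, not taken from the disclosure):

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def time_of_flight_distance(t_emit_s: float, t_return_s: float) -> float:
    """Distance to the object that produced the return light.

    The invisible light travels to the object and back, so the one-way distance
    is half of the round-trip time multiplied by the speed of light.
    """
    round_trip_s = t_return_s - t_emit_s
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# Example: a return detected 0.4 microseconds after emission corresponds to roughly 60 m.
print(time_of_flight_distance(0.0, 0.4e-6))  # ≈ 59.96 m
```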
  • the LiDAR sensor unit 11 is configured to output a signal S1 corresponding to information outside the vehicle detected as described above.
  • the sensor system 1 includes a signal processing device 12.
  • the signal processing device 12 can be realized by a general-purpose microprocessor that operates in cooperation with a general-purpose memory.
  • Examples of the general-purpose microprocessor include a CPU, an MPU, and a GPU.
  • Examples of the general-purpose memory include ROM and RAM.
  • the ROM can store a computer program that realizes processing to be described later.
  • The general-purpose microprocessor designates at least a part of the program stored in the ROM, loads it into the RAM, and executes the above-described processing in cooperation with the RAM.
  • the signal processing device 12 may be realized by a dedicated integrated circuit such as a microcontroller, an ASIC, or an FPGA that can execute a computer program that realizes processing to be described later.
  • the signal processing device 12 may be realized by a combination of a general-purpose microprocessor and a dedicated integrated circuit.
  • the signal processing device 12 is configured to process the signal S1 output from the LiDAR sensor unit 11 and generate LiDAR data corresponding to the detected information outside the vehicle. LiDAR data is used for driving support of a vehicle.
  • the signal processing device 12 is configured to acquire the reference height information H from an auto leveling system mounted on the vehicle.
  • the auto leveling system is a system that adjusts the direction of the optical axis of the headlamp in the vertical direction of the vehicle based on the pitch angle of the vehicle.
  • Through this adjustment, a reference height (for example, the direction of the adjusted optical axis) is determined.
  • the reference height information H indicates this reference height.
  • The signal processing device 12 is configured to generate the LiDAR data using the signal S1 associated with a region corresponding to the reference height within the detection range of the LiDAR sensor unit 11. This operation will be described in detail with reference to FIG. 2.
  • FIG. 2A shows an example in which the sensor system 1 is mounted on the front portion of the vehicle 100.
  • the LiDAR sensor unit 11 is configured to detect information at least ahead of the vehicle 100.
  • An arrow D indicates the detection reference direction of the LiDAR sensor unit 11 in the vertical direction of the vehicle 100.
  • FIG. 2 shows the detection range A of the LiDAR sensor unit 11.
  • the symbol h represents the reference height indicated by the reference height information H.
  • the signal S1 output from the LiDAR sensor unit 11 may include information within the detection range A.
  • the signal processing device 12 generates LiDAR data corresponding to information detected in the region P using the signal S1 associated with the region P that is a part of the detection range A.
  • the region P is a region corresponding to the reference height h.
  • the meaning of the expression “corresponding to the reference height h” is not limited to the case where the reference height h determined by the auto leveling system matches the height of the region P in the vertical direction of the vehicle 100. . As long as the reference height h and the height of the region P have a predetermined correspondence, the two may be different.
  • The signal processing device 12 may acquire only the signal corresponding to the region P as the signal S1 and use it for the data generation processing, or may acquire the signal S1 corresponding to the entire detection range A and then extract only the signal corresponding to the region P for use in the data generation processing.
  • FIG. 2C shows a state in which the front end of the vehicle 100 is tilted upward from the rear end.
  • In this case, the detection reference direction D of the LiDAR sensor unit 11 is directed upward relative to the reference height h. Therefore, as shown in FIG. 2D, the detection range A of the LiDAR sensor unit 11 moves upward from the state shown in FIG. 2B. Even in this case, the signal processing device 12 uses the signal S1 associated with the region P, which is a part of the detection range A, based on the acquired reference height information H, and generates LiDAR data corresponding to the information detected in the region P.
  • FIG. 2E shows a state in which the front end of the vehicle 100 is inclined downward from the rear end.
  • In this case, the detection reference direction D of the LiDAR sensor unit 11 is directed downward relative to the reference height h. Therefore, as shown in FIG. 2F, the detection range A of the LiDAR sensor unit 11 moves downward from the state shown in FIG. 2B.
  • Even in this case, the signal processing device 12 uses the signal S1 associated with the region P, which is a part of the detection range A, based on the acquired reference height information H, and generates LiDAR data corresponding to the information detected in the region P.
  • Ideally, the detection reference direction D of the LiDAR sensor unit 11 in the vertical direction of the vehicle 100 matches the reference height h. However, since the detection reference direction D shifts in the vertical direction of the vehicle 100 as the pitch angle of the vehicle 100 changes, the detection range A of the LiDAR sensor unit 11 is generally set to have redundancy in the vertical direction of the vehicle 100.
  • The signal processing device 12 specifies the region P using the reference height information H determined separately by the auto leveling system based on the pitch angle of the vehicle 100. Therefore, the region P containing the information required for driving assistance can be specified easily and reliably regardless of the pitch angle of the vehicle 100. Furthermore, since only the signal S1 associated with the region P, which is a part of the redundant detection range A, is used for the data generation processing for acquiring external information, an increase in the signal processing load required for driving support of the vehicle 100 can be suppressed.
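  • A minimal Python sketch of this region selection is shown below. The scan layout, the mapping of the reference height h to a band of scan rows, and the band width are assumptions made for illustration; the point is that only the returns inside the band (region P) are passed on to data generation.

```python
import numpy as np

def select_region_p(scan, row_angles_deg, reference_direction_deg, band_deg=2.0):
    """Keep only the part of the detection range A that corresponds to region P.

    scan                    : (rows, cols) array of range measurements over the detection range A
    row_angles_deg          : vertical angle of each scan row in vehicle coordinates
    reference_direction_deg : direction corresponding to the reference height h (from the auto leveling system)
    band_deg                : assumed half-width of region P around the reference direction
    """
    row_angles = np.asarray(row_angles_deg)
    in_region_p = np.abs(row_angles - reference_direction_deg) <= band_deg
    # Only this subset (the signal S1 associated with region P) is used for data generation,
    # so the processing load does not grow with the redundant part of the detection range A.
    return scan[in_region_p, :]

# Usage: with a 32-row scan, only the rows near the reference direction are processed further.
scan = np.random.rand(32, 360) * 100.0      # dummy range data
rows = np.linspace(-15.0, 15.0, 32)         # vertical angles of the 32 rows
region_p = select_region_p(scan, rows, reference_direction_deg=-1.5)
```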
  • the sensor system 1 can include a leveling adjustment mechanism 13.
  • the leveling adjustment mechanism 13 is configured to include an actuator that can change the detection reference direction D of the LiDAR sensor unit 11 in the vertical direction of the vehicle 100.
  • For the leveling adjustment mechanism 13, a configuration similar to the known auto leveling mechanism that changes the direction of the optical axis of a headlamp in the vertical direction of the vehicle can be employed.
  • the leveling adjustment mechanism 13 can be communicably connected to the signal processing device 12.
  • the leveling adjustment mechanism 13 is configured to adjust the detection reference direction D of the LiDAR sensor unit 11 in the vertical direction of the vehicle based on the reference height information H acquired by the signal processing device 12.
  • For example, when the detection reference direction D is directed upward relative to the reference height h, the signal processing device 12 recognizes that fact based on the reference height information H.
  • the signal processing device 12 outputs a signal for driving the leveling adjustment mechanism 13 so as to eliminate the deviation from the reference height h in the detection reference direction D.
  • the leveling adjustment mechanism 13 operates based on the signal, and directs the detection reference direction D of the LiDAR sensor unit 11 downward.
  • Conversely, when the detection reference direction D is directed downward relative to the reference height h, a signal that operates the leveling adjustment mechanism 13 so that the detection reference direction D of the LiDAR sensor unit 11 faces upward is output.
  • Since the change in the detection reference direction D of the LiDAR sensor unit 11 caused by a change in the pitch angle of the vehicle 100 can be suppressed, the change in the position of the region P within the detection range A of the LiDAR sensor unit 11 can also be suppressed. That is, the position of the region P specified by the signal processing device 12 can remain at the position shown in FIG. 2B regardless of the pitch angle of the vehicle 100. Therefore, the processing load of the signal processing device 12 for specifying the region P can be further reduced.
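  • The feedback described above can be pictured with the following sketch. The actuator interface and the angle convention are hypothetical; the point is simply that the deviation of the detection reference direction D from the reference height h is computed and cancelled.

```python
class LevelingAdjustmentMechanism:
    """Hypothetical wrapper around the actuator of the leveling adjustment mechanism 13."""

    def __init__(self):
        self.applied_tilt_deg = 0.0  # tilt currently applied to the detection reference direction D

    def drive(self, delta_deg: float) -> None:
        self.applied_tilt_deg += delta_deg


def correct_detection_direction(mechanism, detection_direction_deg, reference_direction_deg):
    """Drive the actuator so that the detection reference direction D tracks the reference height h."""
    deviation = detection_direction_deg - reference_direction_deg
    mechanism.drive(-deviation)  # tilt in the opposite direction to cancel the pitch-induced shift
    return mechanism.applied_tilt_deg

# Example: the vehicle pitches nose-up by 1.2 degrees, so the unit is tilted back down.
mechanism = LevelingAdjustmentMechanism()
print(correct_detection_direction(mechanism, detection_direction_deg=1.2, reference_direction_deg=0.0))
```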
  • the sensor system 1 can include a camera unit 14.
  • the camera unit 14 is a device for acquiring image information outside the vehicle.
  • the camera unit 14 is configured to output a signal S2 corresponding to the acquired image information.
  • the camera unit 14 is an example of a sensor unit.
  • the signal processing device 12 is configured to process the signal S2 output from the camera unit 14 and generate camera data corresponding to the acquired image information outside the vehicle.
  • the camera data is used for driving support of the vehicle.
  • The signal processing device 12 is configured to generate the camera data using a signal S2 associated with a region corresponding to the reference height h in the field of view of the camera unit 14. That is, the signal processing device 12 generates the camera data corresponding to the image included in that region using the signal S2 associated with the region, which is a part of the field of view of the camera unit 14.
  • the field of view of the camera unit 14 is an example of the detection range of the sensor unit.
  • the LiDAR sensor unit 11 is an example of a first sensor unit.
  • the information outside the vehicle 100 detected by the LiDAR sensor unit 11 is an example of first information.
  • the signal S1 output from the LiDAR sensor unit 11 is an example of a first signal.
  • LiDAR data generated by the signal processing device 12 is an example of first data.
  • the camera unit 14 is an example of a second sensor unit.
  • An image outside the vehicle 100 acquired by the camera unit 14 is an example of second information.
  • the signal S2 output from the camera unit 14 is an example of a second signal.
  • the camera data generated by the signal processing device 12 is an example of second data.
  • The leveling adjustment mechanism 13 can also be applied to the camera unit 14.
  • In the example above, data output from a plurality of sensor units of different types are used for driving support in an integrated manner.
  • Alternatively, data output from a plurality of sensor units of the same type arranged at relatively distant positions in the vehicle may be used for driving support in an integrated manner.
  • For example, the sensor system 1 shown in FIG. 1 can be arranged in at least two of the left front corner LF, the right front corner RF, the left rear corner LB, and the right rear corner RB of the vehicle 100 shown in FIG. 3.
  • a case where two sensor systems 1 are arranged at the left front corner LF and the right front corner RF of the vehicle 100 will be described as an example.
  • the LiDAR sensor unit 11 disposed in the left front corner LF is an example of a first sensor unit.
  • Information outside the vehicle 100 detected by the LiDAR sensor unit 11 disposed in the left front corner LF is an example of first information.
  • the signal S1 output from the LiDAR sensor unit 11 disposed in the left front corner LF is an example of a first signal.
  • the LiDAR data generated by the signal processing device 12 arranged at the left front corner LF is an example of first data.
  • the LiDAR sensor unit 11 disposed in the right front corner RF is an example of the second sensor unit.
  • Information outside the vehicle 100 detected by the LiDAR sensor unit 11 disposed in the right front corner RF is an example of second information.
  • the signal S1 output from the LiDAR sensor unit 11 disposed at the right front corner RF is an example of a second signal.
  • the LiDAR data generated by the signal processing device 12 disposed in the right front corner RF is an example of second data.
  • the LiDAR data generated by the signal processing device 12 arranged in the left front corner LF and the LiDAR data generated by the signal processing device 12 arranged in the right front corner RF are used by the control device 101 for driving support.
  • An ECU is an example of the control device 101.
  • the ECU can be realized by a general-purpose microprocessor that operates in cooperation with a general-purpose memory.
  • Examples of the general-purpose microprocessor include a CPU, an MPU, and a GPU.
  • Examples of the general-purpose memory include ROM and RAM.
  • the ECU may be realized by a dedicated integrated circuit such as a microcontroller, ASIC, or FPGA.
  • the ECU may be realized by a combination of a general-purpose microprocessor and a dedicated integrated circuit.
  • the LiDAR data generated at two locations includes information associated with the reference height h. Therefore, the integrated use of both data for driving assistance becomes easy.
  • the LiDAR data generated at two locations is generated using only the signal S1 associated with the region P that is part of the detection range A of each LiDAR sensor unit 11. Therefore, even when both data are used in an integrated manner, an increase in processing load on the control device 101 can be suppressed.
  • the signal processing device 12 can be configured to acquire map information M.
  • the map information M can be information used for the navigation system of the vehicle 100, for example.
  • the map information M may be stored in advance in a storage device mounted on the vehicle 100, or may be downloaded from an external network periodically or as necessary.
  • the signal processing device 12 may be configured to associate the LiDAR data generated based on the signal S1 output from the LiDAR sensor unit 11 with the map information M. For example, when the LiDAR data indicates the presence of an object outside the vehicle 100, it can be determined by collating with the map information M whether the object is a structure such as a guardrail or a traffic sign.
  • LiDAR data and map information M can be used for driving support in an integrated manner.
  • Since an increase in the signal processing load related to the generation of the LiDAR data is suppressed, an increase in the processing load of integrated driving support combining the LiDAR data and the map information M can also be suppressed as a whole.
  • the above map information M can include three-dimensional information.
  • the signal processing apparatus 12 can associate LiDAR data with information corresponding to the reference height h in the map information M. That is, the two-dimensional map information corresponding to the reference height h can be extracted from the map information M including the three-dimensional information and associated with the LiDAR data.
  • the map information data used for the integrated driving support can be reduced, so that an increase in processing load in the integrated driving support can be further suppressed.
  • map information M acquired by the signal processing device 12 may be provided in advance as two-dimensional information.
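  • One way to picture the association with the map slice at the reference height is the sketch below. The point-based representation of the map information M and the tolerance value are assumptions for illustration; the disclosure does not prescribe a map format.

```python
def slice_map_at_height(map_points, reference_height_h, tolerance=0.5):
    """Extract a two-dimensional slice of the 3D map information M near the reference height h.

    map_points : iterable of (x, y, z, attribute) tuples (an assumed representation of M)
    Returns (x, y, attribute) tuples that can be matched against the LiDAR data for region P.
    """
    return [
        (x, y, attribute)
        for (x, y, z, attribute) in map_points
        if abs(z - reference_height_h) <= tolerance
    ]

# Example: only the structures near the reference height remain for matching.
m = [(10.0, 2.0, 0.8, "guardrail"), (12.0, 2.1, 3.5, "traffic sign"), (15.0, 2.0, 0.7, "guardrail")]
print(slice_map_at_height(m, reference_height_h=0.75))
```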
  • the sensor system 1 can include a left front lamp device 15.
  • the left front lamp device 15 may include a lamp housing 51 and a translucent cover 52.
  • the lamp housing 51 partitions the lamp chamber 53 together with the translucent cover 52.
  • The left front lamp device 15 is mounted on the left front corner LF of the vehicle 100 shown in FIG. 3.
  • the left front lamp device 15 may include a lamp unit 54.
  • the lamp unit 54 is a device that emits visible light to the outside of the vehicle 100.
  • the lamp unit 54 is accommodated in the lamp chamber 53.
  • Examples of the lamp unit 54 include a headlamp unit, a vehicle width lamp unit, a direction indicator lamp unit, and a fog lamp unit.
  • the translucent cover 52 is formed of a material that transmits not only visible light emitted from the lamp unit 54 but also light having a wavelength with which the sensor unit accommodated in the lamp chamber 53 has sensitivity.
  • Since the front left lamp device 15 has a function of supplying light to the outside of the vehicle 100, it is generally arranged in a place with little shielding, such as the left front corner LF. By arranging the sensor unit in such a place, information outside the vehicle 100 can be acquired efficiently.
  • When the reference height information H is acquired from the auto leveling system of the vehicle 100, the information H can be shared with the lamp unit 54. In this case, an efficient system design is possible.
  • A right front lamp device having a configuration symmetrical to that of the left front lamp device 15 can be mounted on the right front corner RF of the vehicle 100 shown in FIG. 3.
  • a left rear lamp device may be mounted on the left rear corner LB of the vehicle 100.
  • examples of the lamp unit included in the left rear lamp device may include a brake light unit, a taillight unit, a vehicle width light unit, and a reverse light unit.
  • a right rear lamp device having a configuration symmetrical to the left rear lamp device may be mounted on the right rear corner RB of the vehicle 100.
  • the sensor unit described above can be accommodated in a lamp chamber defined by a lamp housing and a light-transmitting cover.
  • the first embodiment is merely an example for facilitating understanding of the present disclosure.
  • the configuration according to the first embodiment can be changed or improved as appropriate without departing from the spirit of the present disclosure.
  • In the above description, the case where the sensor system 1 includes at least one of the LiDAR sensor unit 11 and the camera unit 14 has been described.
  • However, the sensor system 1 can be configured to include at least one of a LiDAR sensor unit, a camera unit, and a millimeter wave sensor unit.
  • the camera unit can include a visible light camera unit and an infrared camera unit.
  • the millimeter wave sensor unit has a configuration for transmitting a millimeter wave and a configuration for receiving a reflected wave resulting from the reflection of the millimeter wave by an object existing outside the vehicle 100.
  • Examples of the millimeter wave frequency include 24 GHz, 26 GHz, 76 GHz, and 79 GHz.
  • the millimeter wave sensor unit can acquire the distance to the object associated with the reflected wave based on the time from when the millimeter wave is transmitted in a certain direction until the reflected wave is received. Further, by accumulating such distance data in association with the detection position, it is possible to acquire information related to the motion of the object associated with the reflected wave.
  • In the above description, the reference height information H is acquired from the auto leveling system. However, as long as information indicating the reference height h can be obtained, the reference height information H may be acquired from a vehicle height sensor or the like.
  • At least a part of the functions of the signal processing device 12 described above can be realized by the control device 101 shown in FIG. 3.
  • FIG. 4 shows a configuration of the sensor system 2 according to the second embodiment.
  • the sensor system 2 is mounted on the vehicle 100 shown in FIG.
  • the sensor system 2 includes a plurality of sensor units 20.
  • Each of the plurality of sensor units 20 is a device that detects information outside the vehicle 100 and outputs a signal corresponding to the information.
  • the plurality of sensor units 20 includes a first sensor unit 21 and a second sensor unit 22.
  • the first sensor unit 21 is configured to detect first information outside the vehicle 100 and output a first signal S1 corresponding to the first information.
  • the first sensor unit 21 can be any one of a LiDAR sensor unit, a camera unit, and a millimeter wave sensor unit.
  • the second sensor unit 22 is configured to detect second information outside the vehicle 100 and output a second signal S2 corresponding to the second information.
  • the second sensor unit 22 can be any one of a LiDAR sensor unit, a camera unit, and a millimeter wave sensor unit.
  • the LiDAR sensor unit has a configuration for emitting non-visible light and a configuration for detecting return light as a result of reflection of the non-visible light on at least an object existing outside the vehicle.
  • the LiDAR sensor unit can include a scanning mechanism that sweeps the invisible light by changing the emission direction (that is, the detection direction) as necessary. For example, infrared light having a wavelength of 905 nm can be used as invisible light.
  • the camera unit is a device for acquiring an image as information outside the vehicle.
  • the image can include at least one of a still image and a moving image.
  • the camera unit may include a camera having sensitivity to visible light, or may include a camera having sensitivity to infrared light.
  • the millimeter wave sensor unit has a configuration for transmitting a millimeter wave and a configuration for receiving a reflected wave resulting from the reflection of the millimeter wave by an object existing outside the vehicle 100.
  • Examples of the millimeter wave frequency include 24 GHz, 26 GHz, 76 GHz, and 79 GHz.
  • the first sensor unit 21 and the second sensor unit 22 may be a plurality of sensor units arranged in different areas in the vehicle 100.
  • the first sensor unit 21 and the second sensor unit 22 may be disposed at the left front corner LF and the right front corner RF of the vehicle 100, respectively.
  • the first sensor unit 21 and the second sensor unit 22 may be disposed at the left front corner LF and the left rear corner LB of the vehicle 100, respectively.
  • the first sensor unit 21 and the second sensor unit 22 may be a plurality of sensor units arranged in substantially the same region in the vehicle 100.
  • both the first sensor unit 21 and the second sensor unit 22 can be disposed at the left front corner LF of the vehicle 100.
  • the sensor system 2 includes a signal processing device 30.
  • the signal processing device 30 can be arranged at an arbitrary position in the vehicle 100.
  • the signal processing device 30 can be realized by a general-purpose microprocessor that operates in cooperation with a general-purpose memory.
  • Examples of the general-purpose microprocessor include a CPU, an MPU, and a GPU.
  • Examples of the general-purpose memory include ROM and RAM.
  • the ROM can store a computer program that realizes processing to be described later.
  • The general-purpose microprocessor designates at least a part of the program stored in the ROM, loads it into the RAM, and executes the above-described processing in cooperation with the RAM.
  • the signal processing device 30 may be realized by a dedicated integrated circuit such as a microcontroller, ASIC, or FPGA that can execute a computer program that realizes processing to be described later.
  • the signal processing device 30 may be realized by a combination of a general-purpose microprocessor and a dedicated integrated circuit.
  • the signal processing device 30 acquires the first signal S1 from the first sensor unit 21. In other words, the signal processing device 30 acquires the first information detected by the first sensor unit 21 by receiving the first signal S1 (STEP 21).
  • The signal processing device 30 acquires the second signal S2 from the second sensor unit 22.
  • the signal processing device 30 acquires the second information detected by the second sensor unit 22 by receiving the second signal S2 (STEP 22).
  • the order in which the processing in STEP 21 and the processing in STEP 22 are performed may be reversed.
  • the processing of STEP 21 and the processing of STEP 22 may be performed simultaneously.
  • the signal processing device 30 determines whether the first information and the second information include the same feature based on the first signal S1 and the second signal S2 (STEP 23).
  • the signal processing device 30 determines whether the feature can be a reference target (STEP 24).
  • the “reference target” means a feature that can be detected by the sensor unit 20 and can provide reference position information.
  • Examples of the reference target include a license plate of a preceding vehicle, a guardrail, a sound barrier, a traffic light, a traffic sign, and a center line.
  • A feature whose height from the road surface or distance from the road shoulder is legally prescribed, and whose position can therefore be identified with relatively high accuracy once its presence is detected, can serve as a reference target.
  • However, such a feature cannot always be a reference target. For example, even if a license plate of a preceding vehicle is detected as a feature, it is determined that the license plate cannot be a reference target when its position is not settled because of the changing relative position of the preceding vehicle. That is, the condition for determining that a feature can be a reference target can be that the certainty with which position information can be read from the detected feature exceeds a predetermined value.
  • When it is determined that the feature can be a reference target, the signal processing device 30 detects the difference between the position of the reference target specified by the first information and the position of the reference target specified by the second information (STEP 25).
  • the plurality of sensor units 20 mounted on the vehicle may be displaced due to vibration during traveling or the passage of time.
  • Such misalignment leads to a situation in which the same reference target is detected by the plurality of sensor units 20, but the specified positions of the reference target differ among them. Therefore, by detecting the difference, it can be grasped that a positional deviation has occurred in at least one of the plurality of sensor units 20.
  • When the magnitude of the difference exceeds a predetermined value, the user can be notified.
  • The notified user can take appropriate measures to correct the misalignment. Therefore, the detection accuracy of the plurality of sensor units 20 required for driving support of the vehicle 100 can be maintained.
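  • The comparison in STEP 25 and the resulting notification can be sketched as follows. The numeric values and the threshold are illustrative assumptions; the disclosure only requires that the difference be compared against a predetermined value.

```python
def misalignment_detected(position_from_first, position_from_second, threshold):
    """Compare the positions of the same reference target specified by two sensor units (STEP 25).

    Returns True when the magnitude of the difference exceeds the predetermined value,
    in which case the user should be notified.
    """
    return abs(position_from_first - position_from_second) > threshold

# Example: the guardrail top edge is reported at 0.80 m by one unit and 0.95 m by the other.
if misalignment_detected(0.80, 0.95, threshold=0.10):
    print("Notify user: at least one of the sensor units appears to be misaligned.")
```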
  • As an example, consider a case in which a left front LiDAR unit as the first sensor unit 21 is arranged at the left front corner LF of the vehicle 100, and a left rear LiDAR unit as the second sensor unit 22 is arranged at the left rear corner LB of the vehicle 100.
  • the front left LiDAR unit acquires information on an object existing in an area including the left side of the vehicle 100.
  • the information is an example of first information.
  • the left front LiDAR unit outputs a first signal S1 corresponding to the first information.
  • the signal processing device 30 acquires the first signal S1 (STEP 21).
  • the left rear LiDAR unit acquires information on an object existing in an area including the left side of the vehicle 100.
  • the information is an example of second information.
  • the left rear LiDAR unit outputs a second signal S2 corresponding to the second information.
  • the signal processing device 30 acquires the second signal S2 (STEP 22).
  • The signal processing device 30 performs information processing based on the first signal S1 and the second signal S2, thereby determining whether the same feature is included in the first information and the second information (STEP 23).
  • the signal processing device 30 determines whether the guardrail can be a reference target (STEP24).
  • The signal processing device 30 detects a difference between the position (height) of the upper end of the guardrail identified through the left front LiDAR unit and the position (height) of the upper end of the guardrail identified through the left rear LiDAR unit (STEP 25).
  • a notification indicating that a positional deviation has occurred in at least one of the left front LiDAR unit and the left rear LiDAR unit is made.
  • As another example, consider a case in which a left front camera unit as the first sensor unit 21 is arranged at the left front corner LF of the vehicle 100, and a right front camera unit as the second sensor unit 22 is arranged at the right front corner RF of the vehicle 100.
  • the left front camera unit acquires the first image including the front of the vehicle 100.
  • the first image is an example of first information.
  • the left front camera unit outputs a first signal S1 corresponding to the first image.
  • the signal processing device 30 acquires the first signal S1 (STEP 21).
  • the front right camera unit acquires a second image including the front of the vehicle 100.
  • the second image is an example of second information.
  • the right front camera unit outputs a second signal S2 corresponding to the second image.
  • the signal processing device 30 acquires the second signal S2 (STEP 22).
  • the signal processing device 30 performs image processing based on the first signal S1 and the second signal S2, thereby determining whether or not the same feature is included in the first image and the second image (STEP 23).
  • the signal processing device 30 determines whether the center line can be a reference target (STEP 24).
  • the signal processing device 30 detects a difference between the position of the center line specified through the left front camera unit and the position of the center line specified through the right front camera unit (STEP 25).
  • a notification indicating that a positional deviation has occurred in at least one of the left front camera unit and the right front camera unit is made.
  • the front left LiDAR unit acquires information on an object existing in an area including the front of the vehicle 100.
  • the information is an example of first information.
  • the left front LiDAR unit outputs a first signal S1 corresponding to the first information.
  • the signal processing device 30 acquires the first signal S1 (STEP 21).
  • the left front camera unit acquires a second image including the front of the vehicle 100.
  • the second image is an example of second information.
  • The left front camera unit outputs a second signal S2 corresponding to the second image.
  • the signal processing device 30 acquires the second signal S2 (STEP 22).
  • The signal processing device 30 performs information processing based on the first signal S1 and the second signal S2, thereby determining whether the same feature is included in the first information and the second information (STEP 23).
  • the signal processing device 30 determines whether the traffic signal can be a reference target (STEP 24).
  • the signal processing device 30 detects the difference between the position of the traffic light identified through the left front LiDAR unit and the position of the traffic light identified through the left front camera unit (STEP 25).
  • a notification indicating that a positional deviation has occurred in at least one of the left front LiDAR unit and the left front camera unit is made.
  • The reference target is preferably one that can provide a positional reference in the height direction. This is because a positional reference in the height direction tends to vary relatively little with the traveling state of the vehicle 100, which makes it easy to suppress an increase in the processing load of the signal processing device 30.
  • The signal processing device 30 can acquire the change over time of the position of the reference target specified based on the first information and of the position of the reference target specified based on the second information (STEP 26). Specifically, the signal processing device 30 repeats the processing from STEP 21 to STEP 25 at predetermined timings, and compares the position of the reference target specified based on the first information in the latest processing with the position specified based on the first information in the previous processing. Similarly, the position of the reference target specified based on the second information in the latest processing is compared with that specified in the previous processing. Examples of the predetermined timing include the elapse of a predetermined time from the end of the previous processing and the time when the user inputs an instruction to execute the processing.
  • The signal processing device 30 then specifies the sensor unit 20 that needs correction based on the acquired change over time (STEP 27). For example, when the position of the reference target specified by the first sensor unit 21 does not change over time while the position of the reference target specified by the second sensor unit 22 does, it is determined that the cause of the misalignment lies on the second sensor unit 22 side and that correction is necessary. The identified sensor unit can be notified to the user.
  • the sensor unit 20 that needs to be corrected can be specified by monitoring the change with time of the position of the reference target specified by each sensor unit 20. Therefore, the detection accuracy of the plurality of sensor units 20 required for driving support of the vehicle 100 can be more easily maintained.
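  • A minimal sketch of STEP 26 and STEP 27 is given below. The drift measure (difference between the earliest and latest recorded positions) and the threshold are illustrative assumptions; the unit whose reported position drifts while the other stays stable is judged to need correction.

```python
def unit_needing_correction(history_first, history_second, drift_threshold):
    """Identify which sensor unit needs correction from the change over time (STEP 26 / STEP 27).

    history_first / history_second : successive positions of the same reference target
    specified from the first and second information at the predetermined timings.
    """
    drift_first = abs(history_first[-1] - history_first[0])
    drift_second = abs(history_second[-1] - history_second[0])
    if drift_first > drift_threshold >= drift_second:
        return "first sensor unit"
    if drift_second > drift_threshold >= drift_first:
        return "second sensor unit"
    return None  # no single unit can be singled out from the change over time alone

# Example: the second unit's reading drifts while the first stays stable.
print(unit_needing_correction([0.80, 0.80, 0.81], [0.80, 0.88, 0.95], drift_threshold=0.05))
```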
  • the plurality of sensor units 20 may include a third sensor unit 23.
  • the third sensor unit 23 may be configured to detect third information outside the vehicle 100 and output a third signal S3 corresponding to the third information.
  • the third sensor unit 23 can be any of a LiDAR sensor unit, a camera unit, and a millimeter wave sensor unit.
  • the third sensor unit 23 can be arranged in a different area in the vehicle 100 from the first sensor unit 21 and the second sensor unit 22.
  • For example, the third sensor unit 23 can be arranged at the right rear corner RB.
  • the third sensor unit 23 can be disposed in substantially the same region of the vehicle 100 as at least one of the first sensor unit 21 and the second sensor unit 22.
  • The signal processing device 30 acquires the third signal S3 from the third sensor unit 23.
  • the signal processing device 30 acquires the third information detected by the third sensor unit 23 by receiving the third signal S3 (STEP 28).
  • the order of STEP 21, STEP 22, and STEP 28 is arbitrary.
  • STEP28 may be performed simultaneously with at least one of STEP21 and STEP22.
  • the signal processing device 30 determines whether the first information, the second information, and the third information include the same feature based on the first signal S1, the second signal S2, and the third signal S3. Determine (STEP 23).
  • the processing by the signal processing device 30 ends.
  • the signal processing device 30 determines whether the feature can be a reference target. (STEP 24).
  • the signal processing device 30 determines the position of the reference target specified by the first information and the reference specified by the second information. A difference between the position of the target and the position of the reference target specified by the third information is detected (STEP 25).
  • the signal processing device 30 identifies the sensor unit 20 that needs to be corrected based on the differences detected in STEP 25 (STEP 27). For example, when the position of the reference target specified by the first sensor unit 21 and the position specified by the second sensor unit 22 are the same, and only the position of the reference target specified by the third sensor unit 23 differs, there is a high probability that the third sensor unit 23 is displaced. The identified sensor unit 20 can be notified to the user.
  • the sensor unit 20 that needs to be corrected can be specified without repeating the process of specifying the position of the reference target. Therefore, the detection accuracy of the plurality of sensor units 20 required for driving support of the vehicle 100 can be more easily maintained.
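  • A minimal sketch of this majority-style decision with three sensor units is given below; the positions, the tolerance value, and the function name are illustrative assumptions rather than part of the disclosure.

```python
from itertools import combinations
from typing import Dict, Optional, Tuple

Position = Tuple[float, float]


def outlier_unit(positions: Dict[str, Position], tol: float = 0.2) -> Optional[str]:
    """Return the one sensor unit whose reference-target position disagrees with the other two.

    If all three positions agree (no misalignment), or if no two units agree so the
    simple majority rule cannot decide, None is returned.
    """
    def close(a: Position, b: Position) -> bool:
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 <= tol

    names = list(positions)
    for a, b in combinations(names, 2):
        if close(positions[a], positions[b]):
            third = next(n for n in names if n not in (a, b))
            if not close(positions[a], positions[third]) and not close(positions[b], positions[third]):
                return third  # the remaining unit is probably displaced
    return None


positions = {
    "first_sensor_unit": (10.0, 2.0),
    "second_sensor_unit": (10.0, 2.0),
    "third_sensor_unit": (10.5, 2.4),
}
print(outlier_unit(positions))  # 'third_sensor_unit'
```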
  • the sensor system 2 can include a left front lamp device 40.
  • the left front lamp device 40 may include a lamp housing 41 and a translucent cover 42.
  • the lamp housing 41 partitions the lamp chamber 43 together with the translucent cover 42.
  • the left front lamp device 40 is mounted on the left front corner LF of the vehicle 100 shown in FIG.
  • the left front lamp device 40 may include a lamp unit 44.
  • the lamp unit 44 is a device that emits visible light to the outside of the vehicle 100.
  • the lamp unit 44 is accommodated in the lamp chamber 43. Examples of the lamp unit 44 include a headlamp unit, a vehicle width lamp unit, a direction indicator lamp unit, and a fog lamp unit.
  • At least one sensor unit 20 is arranged in the lamp chamber 43. Since the lamp unit 44 has the function of supplying light to the outside of the vehicle 100, it is generally disposed in a place with few obstructions, such as the left front corner LF. By arranging the sensor unit 20 in such a place, information outside the vehicle 100 can be acquired efficiently.
  • a right front lamp device having a symmetrical configuration with the left front lamp device 40 can be mounted on the right front corner RF of the vehicle 100 shown in FIG.
  • a left rear lamp device may be mounted on the left rear corner LB of the vehicle 100.
  • examples of the lamp unit included in the left rear lamp device may include a brake light unit, a taillight unit, a vehicle width light unit, and a reverse light unit.
  • a right rear lamp device having a configuration symmetrical to the left rear lamp device may be mounted on the right rear corner RB of the vehicle 100.
  • at least one sensor unit 20 can be disposed in a lamp chamber defined by a lamp housing.
  • the second embodiment is merely an example for facilitating understanding of the present disclosure.
  • the configuration according to the second embodiment can be changed or improved as appropriate without departing from the spirit of the present disclosure.
  • FIG. 7 shows a functional configuration of the image data generation apparatus 301 according to the third embodiment.
  • the image data generation device 301 is mounted on a vehicle.
  • FIG. 8 shows an example of a vehicle 400 on which the image data generation device 301 is mounted.
  • the vehicle 400 is a towed vehicle having a tractor portion 401 and a trailer portion 402.
  • the tractor portion 401 includes a driver seat 403.
  • the vehicle 400 includes a camera 404.
  • the camera 404 is a device for acquiring an image behind the driver seat 403.
  • the camera 404 is configured to output a camera signal corresponding to the acquired image.
  • the image data generation apparatus 301 includes an input interface 311.
  • a camera signal CS output from the camera 404 is input to the input interface 311.
  • the image data generation device 301 further includes a processor 312, an output interface 313, and a communication bus 314.
  • the input interface 311, the processor 312, and the output interface 313 can exchange signals and data via the communication bus 314.
  • the processor 312 is configured to execute the processing shown in FIG. 9.
  • the processor 312 acquires the camera signal CS input to the input interface 311 (STEP 31).
  • the expression “obtain the camera signal CS” means that the camera signal CS input to the input interface 311 is brought, via an appropriate circuit configuration, into a state in which the processing described later can be performed on it.
  • the processor 312 generates the first image data D1 based on the camera signal CS (STEP 32). As shown in FIG. 7, the first image data D1 is transmitted to the display device 405 mounted on the vehicle 400 via the output interface 313.
  • the display device 405 may be disposed in the passenger compartment of the vehicle 400 or may be disposed at the position of the side door mirror.
  • the first image data D1 is data corresponding to the first monitoring image I1 displayed on the display device 405.
  • FIG. 10A shows an example of the first monitoring image I1.
  • an image behind the driver's seat 403 acquired by the camera 404 is constantly displayed on the display device 405.
  • the driver acquires information behind the driver's seat 403 through the first monitoring image I1 displayed on the display device 405.
  • an arrow X indicates the direction of the optical axis of the camera 404.
  • the tractor portion 401 can take a posture in which it is bent significantly with respect to the trailer portion 402, as indicated by the two-dot chain line in the figure.
  • in such a posture, the optical axis of the camera 404 may face the side wall of the trailer portion 402, which may hinder the driver's rearward visual recognition.
  • the processor 312 determines a reference object included in the first monitoring image I1 (STEP 33).
  • the “reference object” is defined as an object that needs to be included in the first monitoring image I1 at all times so that the driver can continuously acquire information behind the driver's seat 403, and that is comparatively easy to recognize by image recognition.
  • the rear end edge 402a of the trailer portion 402 of the vehicle 400 is used as a reference object.
  • the processor 312 determines the trailing edge 402a as a reference object using, for example, an edge extraction technique.
  • the rear end edge 402a of the trailer portion 402 is an example of a rear portion of the vehicle 400.
  • “Rear part” means a part of the vehicle 400 that is located behind the driver's seat 403.
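  • The disclosure leaves the concrete determination method in STEP 33 open beyond mentioning an edge extraction technique; as one hypothetical illustration, the sketch below applies OpenCV's Canny detector to a synthetic frame and takes the right-most edge column as a stand-in for the position of the rear end edge 402a. The library choice, the thresholds, and the synthetic frame are assumptions made only for this example.

```python
import numpy as np
import cv2  # opencv-python, assumed available for this illustration

# Synthetic stand-in for a frame from camera 404: a bright "trailer side wall" on the
# left half of the image, so its right-hand boundary plays the role of the rear end
# edge 402a in the first monitoring image I1.
frame = np.zeros((240, 320), dtype=np.uint8)
frame[:, :160] = 180

edges = cv2.Canny(frame, 50, 150)

# Take the right-most column containing edge pixels as a crude estimate of the
# reference object's horizontal position.
cols = np.where(edges.any(axis=0))[0]
reference_x = int(cols.max()) if cols.size else None
print(reference_x)  # around column 159-160 for this synthetic frame
```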
  • the processor 312 determines whether the reference object determined in STEP 33 is included in a predetermined area in the first monitoring image I1 (STEP 34).
  • the predetermined area may be the entire first monitoring image I1 shown in FIG. 10A.
  • alternatively, the predetermined area may be defined as the part of the first monitoring image I1 on the right side of the boundary line BD indicated by the one-dot chain line in the drawing.
  • as long as the reference object is determined to be included in the predetermined area, the first monitoring image I1 continues to be displayed on the display device 405.
  • when the predetermined area is the entire first monitoring image I1, it is determined that the rear edge 402a serving as the reference object is included in the predetermined area.
  • when the predetermined area is the region on the right side of the boundary line BD, it is determined that the rear edge 402a serving as the reference object is not included in the predetermined area (N in STEP 34).
  • if it is determined that the reference object is not included in the predetermined area in the first monitoring image I1, the processor 312 generates the second image data D2, as shown in FIG. 9 (STEP 35).
  • the second image data D2 is data for causing the display device 405 to display the second monitoring image I2 in which the reference object is included in a predetermined area. As illustrated in FIG. 7, the second image data D2 is transmitted to the display device 405 mounted on the vehicle 400 via the output interface 313.
  • FIG. 10B shows an example of the second monitoring image I2.
  • in the second monitoring image I2, the predetermined area can be defined in the same manner as in the first monitoring image I1.
  • in the example of FIG. 10B, the region on the right side of the boundary line BD is the predetermined area, and it can be seen that the rear end edge 402a of the trailer portion 402 serving as the reference object is included in that area. The driver can therefore continue to acquire information behind the driver's seat 403 through the second monitoring image I2 displayed on the display device 405.
  • either the first image data D1 or the second image data D2 is generated according to the position of the reference object, so that the first monitoring image I1 or the second monitoring image I2 including the reference object in the predetermined area can continue to be displayed on the display device 405. Therefore, a situation in which the driver's rearward visibility is impaired depending on the state of the vehicle 400 can be avoided. That is, an electronic mirror technology with further improved rear visibility can be provided.
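  • The decision flow of FIG. 9 (STEP 33 to STEP 35) summarized above can be sketched as follows; the normalised coordinate convention and the names are assumptions made only for this illustration, not the device's actual interface.

```python
from dataclasses import dataclass


@dataclass
class Region:
    """Predetermined area, e.g. the part of the image to the right of the boundary line BD."""
    left: float          # boundary line BD as a normalised x coordinate
    right: float = 1.0

    def contains(self, x: float) -> bool:
        return self.left <= x <= self.right


def choose_image_data(reference_x: float, predetermined: Region) -> str:
    """Decide which image data to generate for the current frame (STEP 34 / STEP 35)."""
    if predetermined.contains(reference_x):
        return "first image data D1"   # keep showing the first monitoring image I1
    return "second image data D2"      # regenerate so the reference object re-enters the area


area = Region(left=0.4)
print(choose_image_data(reference_x=0.7, predetermined=area))  # first image data D1
print(choose_image_data(reference_x=0.2, predetermined=area))  # second image data D2
```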
  • FIG. 11 is a diagram for explaining the first specific example.
  • a symbol I0 indicates the entire image acquired by the camera 404.
  • the first monitoring image I1 and the second monitoring image I2 can correspond to different portions of the image I0. That is, the processor 312 generates the first image data D1 based on the camera signal CS corresponding to a first portion of the image acquired by the camera 404. Similarly, the processor 312 generates the second image data D2 based on the camera signal CS corresponding to a second portion of the image acquired by the camera 404.
  • since the first monitoring image I1 and the second monitoring image I2 are generated by clipping only the minimum necessary area from the original image, it is possible to both secure the field of view and prevent a deterioration in visibility.
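  • A minimal sketch of this first specific example, using a synthetic array standing in for the image I0 and hypothetical crop windows for the two portions:

```python
import numpy as np

# Synthetic stand-in for the whole image I0 acquired by camera 404 (height x width).
i0 = np.arange(480 * 640, dtype=np.uint32).reshape(480, 640).astype(np.uint8)

# Hypothetical crop windows: the first portion underlies the normal first monitoring
# image I1, the second portion is shifted toward where the reference object has moved.
first_portion = i0[:, 200:520]    # basis of the first image data D1
second_portion = i0[:, 0:320]     # basis of the second image data D2

print(first_portion.shape, second_portion.shape)  # (480, 320) (480, 320)
```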
  • FIG. 12 is a diagram for explaining a second specific example.
  • the angle of view of the camera 404 can be changed.
  • a known mechanism for changing the angle of view is provided in the camera 404.
  • the angle of view of the camera 404 can be changed by the processor 312 sending a control signal S to the camera 404 through the output interface 313 as shown in FIG.
  • the processor 312 generates the first image data D1 based on the image acquired when the angle of view of the camera 404 is the first angle of view ⁇ 1.
  • the processor 312 generates the second image data D2 based on the image acquired when the angle of view of the camera 404 is the second angle of view ⁇ 2.
  • the second angle of view θ2 is wider than the first angle of view θ1. That is, the second monitoring image I2 is displayed on the display device 405 as a wider-angle image.
  • while the trailing edge 402a of the trailer portion 402 serving as the reference object exists in the field of view of the first angle of view θ1, the first monitoring image I1 is generated.
  • when the rear end edge 402a falls outside the field of view of the first angle of view θ1, the processor 312 performs control to expand the angle of view of the camera 404 to the second angle of view θ2. As a result, the rear end edge 402a comes within the field of view of the second angle of view θ2, and an appropriate second monitoring image I2 is obtained.
  • if a camera 404 with a wide angle of view is used from the beginning, a wide range can be visually recognized, but it is inevitable that objects displayed in the monitoring image on the display device 405 become small.
  • since the second monitoring image I2 is generated by widening the angle of view only when necessary, it is possible to both secure the field of view and prevent a deterioration in visibility.
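  • A minimal sketch of this second specific example; the two angle-of-view values and the bearing convention are illustrative assumptions:

```python
def required_angle_of_view(reference_bearing_deg: float,
                           first_deg: float = 60.0,
                           second_deg: float = 120.0) -> float:
    """Pick the camera angle of view so the reference object stays in the frame.

    `reference_bearing_deg` is the bearing of the rear end edge 402a relative to the
    optical axis of camera 404; the two angle values are placeholders.
    """
    if abs(reference_bearing_deg) <= first_deg / 2.0:
        return first_deg   # reference object inside the narrow field -> image I1
    return second_deg      # widen to the second angle of view -> image I2


print(required_angle_of_view(20.0))  # 60.0
print(required_angle_of_view(45.0))  # 120.0
```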
  • FIG. 13 is a diagram for explaining a third specific example.
  • the direction of the optical axis of the camera 404 can be changed.
  • a known swivel mechanism that changes the direction of the optical axis is provided in the camera 404.
  • the direction of the optical axis of the camera 404 can be changed by the processor 312 sending a control signal S to the camera 404 via the output interface 313 as shown in FIG.
  • the processor 312 generates the first image data D1 based on the image acquired when the optical axis of the camera 404 is oriented in the first direction X1.
  • the processor 312 generates the second image data D2 based on the image acquired when the direction of the optical axis of the camera 404 is in the second direction X2 different from the first direction X1. That is, the imaging target located in the center differs between the first monitoring image I1 and the second monitoring image I2.
  • while the rear end edge 402a of the trailer portion 402 serving as the reference object exists in the field of view obtained when the optical axis points in the first direction X1, the first monitoring image I1 is generated.
  • the processor 312 performs control to change the direction of the optical axis of the camera 404 from the first direction X1 to the second direction X2. As a result, the rear edge 402a is within the field of view in which the direction of the optical axis is the second direction X2, and an appropriate second monitoring image I2 is obtained.
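  • A minimal sketch of this third specific example; the angles and the swivel rule are illustrative assumptions about how the control signal S could be computed:

```python
def target_axis_direction(current_axis_deg: float,
                          reference_bearing_deg: float,
                          half_fov_deg: float = 30.0) -> float:
    """Optical-axis direction to command via the control signal S.

    If the reference object (given as an absolute bearing) is still inside the field of
    view around the current axis, the axis is left alone (first direction X1); otherwise
    the axis is turned just far enough to bring the object back to the edge of the field
    of view (second direction X2).
    """
    offset = reference_bearing_deg - current_axis_deg
    if abs(offset) <= half_fov_deg:
        return current_axis_deg
    # turn only by the amount needed to re-capture the reference object
    return reference_bearing_deg - half_fov_deg if offset > 0 else reference_bearing_deg + half_fov_deg


print(target_axis_direction(0.0, 20.0))  # 0.0  -> keep the first direction X1
print(target_axis_direction(0.0, 50.0))  # 20.0 -> swivel to a second direction X2
```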
  • FIG. 14 is a diagram for explaining a fourth specific example.
  • a first camera 404a and a second camera 404b are mounted on the vehicle 400 as the camera 404.
  • the direction of the optical axis of the first camera 404a is different from the direction of the optical axis of the second camera 404b.
  • the processor 312 generates the first image data D1 based on the image acquired by the first camera 404a.
  • the processor 312 generates the second image data D2 based on the image acquired by the second camera 404b. Switching of the operating camera can be performed by the processor 312 transmitting the control signal S through the output interface 313.
  • while the trailing edge 402a of the trailer portion 402 serving as the reference object exists in the field of view of the first camera 404a, the first monitoring image I1 is generated.
  • the processor 312 performs control to switch the camera used for image acquisition from the first camera 404a to the second camera 404b. As a result, the rear edge 402a is within the field of view of the second camera 404b, and an appropriate second monitoring image I2 is obtained.
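  • A minimal sketch of this fourth specific example; the selection rule is a deliberately simple placeholder for whatever criterion the processor 312 actually applies:

```python
def active_camera(reference_visible_in_first: bool) -> str:
    """Select which camera's signal the image data is generated from.

    Use the first camera 404a while it can see the rear end edge 402a; otherwise switch,
    via the control signal S, to the second camera 404b.
    """
    return "camera 404a" if reference_visible_in_first else "camera 404b"


print(active_camera(True))   # camera 404a -> first image data D1
print(active_camera(False))  # camera 404b -> second image data D2
```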
  • the input interface 311 of the image data generation apparatus 301 can accept an input from the user interface 406.
  • the user interface 406 is provided in the passenger compartment of the vehicle 400 and can accept tactile operation instructions via buttons, touch panel devices, and the like, as well as voice input instructions and line-of-sight input instructions.
  • the reference object used for the determination in STEP 34 in FIG. 9 can be designated by the user via the first monitoring image I1 displayed on the display device 405.
  • for example, when the user interface 406 is a touch panel device provided on the display device 405, the user can touch an appropriate location included in the first monitoring image I1 to designate that location as the reference object.
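  • One hypothetical way such a touch input could be translated into a reference-object position for the STEP 34 check is sketched below; the pixel-to-normalised coordinate convention is an assumption made for this example.

```python
from typing import Tuple


def touch_to_reference(touch_px: Tuple[int, int],
                       image_size: Tuple[int, int]) -> Tuple[float, float]:
    """Convert a touch on the displayed first monitoring image I1 into a normalised
    reference-object position that a check like STEP 34 could consume."""
    x, y = touch_px
    w, h = image_size
    return (x / w, y / h)


print(touch_to_reference((480, 200), (640, 360)))  # (0.75, 0.555...)
```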
  • the functions of the processor 312 described above can be realized by a general-purpose microprocessor that operates in cooperation with a memory.
  • examples of the general-purpose microprocessor include a CPU, an MPU, and a GPU.
  • Examples of general-purpose memory include ROM and RAM.
  • the ROM can store a computer program for executing the above processing.
  • the general-purpose microprocessor can designate at least a part of the program stored in the ROM, load it onto the RAM, and execute the above-described processing in cooperation with the RAM.
  • the functions of the processor 312 described above may be realized by a dedicated integrated circuit, such as a microcontroller, an ASIC, or an FPGA, capable of executing a computer program that implements the above processing.
  • the functions of the processor 312 described above may be realized by a combination of a general-purpose microprocessor and a dedicated integrated circuit.
  • the third embodiment is merely an example for facilitating understanding of the present disclosure.
  • the configuration according to the third embodiment may be changed or improved as appropriate without departing from the spirit of the present disclosure.
  • a part of the rear portion of the vehicle 400 is designated as the reference object.
  • when automatic platooning is performed, a part of a vehicle located behind the vehicle 400 may be designated as the reference object.
  • the number and position of the cameras 404 mounted on the vehicle 400 can be appropriately determined according to the specifications of the vehicle 400.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Mechanical Engineering (AREA)
  • Signal Processing (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Studio Devices (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

A LiDAR sensor unit (11) detects information relating to the exterior of a vehicle, and outputs a signal (S1) corresponding to said information. A signal processing device (12) processes the signal (S1) to generate LiDAR data corresponding to the information. The signal processing device (12) acquires reference height information (H) indicating a reference height determined on the basis of a pitch angle of the vehicle. On the basis of the reference height information (H), the signal processing device (12) generates the LiDAR data using the signal (S1) associated with a region corresponding to the reference height, within a detection zone of the LiDAR sensor unit (11).

Description

Sensor system and image data generation device
The present disclosure relates to a sensor system mounted on a vehicle.
The present disclosure also relates to an image data generation device mounted on a vehicle.
In order to realize vehicle driving support technology, it is necessary to mount a sensor for detecting information outside the vehicle on the vehicle body. Examples of such a sensor include a LiDAR (Light Detection and Ranging) sensor and a camera (see, for example, Patent Document 1). As vehicle driving support technology becomes more sophisticated, the load of the required information processing also increases.
In this specification, “driving support” means a control process that at least partially performs at least one of a driving operation (steering, acceleration, deceleration), monitoring of the driving environment, and backup of the driving operation. In other words, it covers everything from partial driving assistance, such as a collision damage mitigation brake function or a lane keeping assist function, to fully automatic driving operation.
An electronic mirror technology is known in which, instead of side mirrors and a rearview mirror, an image captured by a camera mounted on the vehicle is displayed on a display device installed in the vehicle interior. Patent Document 2 discloses an example of such an electronic mirror technology.
Japanese Patent Application Publication No. 2010-185769; Japanese Patent Application Publication No. 2017-047868
A first object of the present disclosure is to suppress an increase in the load of signal processing required for driving support of a vehicle.
A second object of the present disclosure is to maintain the detection accuracy of a plurality of sensors required for driving support of a vehicle.
A third object of the present disclosure is to provide an electronic mirror technology with further improved rear visibility.
One aspect for achieving the first object is a sensor system mounted on a vehicle, comprising:
at least one sensor unit that detects information outside the vehicle and outputs a signal corresponding to the information; and
a signal processing device that processes the signal to generate data corresponding to the information,
wherein the signal processing device:
acquires reference height information indicating a reference height determined based on the pitch angle of the vehicle; and
generates the data, based on the reference height information, using the signal associated with a region corresponding to the reference height within the detection range of the sensor unit.
When, for example, information outside the vehicle at the reference height is required for driving support, it is preferable that the detection reference direction of the sensor unit in the vertical direction of the vehicle coincides with the reference height. However, since the detection reference direction of the sensor unit changes in the vertical direction of the vehicle according to changes in the pitch angle of the vehicle, the detection range of the sensor unit is generally set so as to have redundancy in the vertical direction of the vehicle.
According to the above configuration, the signal processing device specifies the region using the reference height information that is determined separately based on the pitch angle of the vehicle. Therefore, the region including the information required for driving support can be specified easily and reliably regardless of the pitch angle of the vehicle. Furthermore, since only the signal associated with the region, which is a part of the detection range having redundancy, is subjected to the data generation processing for acquiring external information, an increase in the load of signal processing required for driving support of the vehicle can be suppressed.
The above sensor system can be configured as follows: a leveling adjustment mechanism that adjusts the detection reference direction of the sensor unit in the vertical direction of the vehicle based on the reference height information is provided.
According to such a configuration, a change in the detection reference direction of the sensor unit caused by a change in the pitch angle of the vehicle can be suppressed, so that a change in the position of the region including the information required for driving support can also be suppressed. Therefore, the processing load of the signal processing device for specifying the region can be further reduced.
The above sensor system can be configured as follows:
the at least one sensor unit includes a first sensor unit that detects first information outside the vehicle and outputs a first signal corresponding to the first information, and a second sensor unit that detects second information outside the vehicle and outputs a second signal corresponding to the second information;
the signal processing device processes the first signal to generate first data corresponding to the first information, and processes the second signal to generate second data corresponding to the second information; and
the signal processing device processes, as the first signal, a signal associated with a region corresponding to the reference height in the detection range of the first sensor unit, and processes, as the second signal, a signal associated with a region corresponding to the reference height in the detection range of the second sensor unit, based on the reference height information.
In the above configuration, the first data based on the first signal output from the first sensor unit and the second data based on the second signal output from the second sensor unit are both generated based on the common reference height information. That is, the generated first data and second data both include information associated with the reference height. Therefore, integrated use of both sets of data for driving support becomes easy. In addition, both the first data and the second data are generated using only the signals associated with regions that are parts of the detection ranges of the respective sensor units. Therefore, even when both sets of data are used in an integrated manner, an increase in the processing load of the signal processing device can be suppressed.
The above sensor system can be configured so that the signal processing device associates the data with map information.
According to such a configuration, the data generated by the signal processing device and the map information can be used for driving support in an integrated manner. As described above, an increase in the signal processing load related to the data generation can be suppressed, so that an increase in the processing load of integrated driving support combining the data and the map information can also be suppressed as a whole.
In this case, the above sensor system can be configured so that the data is associated with the information corresponding to the reference height in the map information.
According to such a configuration, the map information can be treated as two-dimensional information, so that the amount of data used for integrated driving support can be reduced. Therefore, an increase in the processing load of integrated driving support can be further suppressed.
 上記のセンサシステムは、以下のように構成されうる。
 ランプユニットを収容する灯室を区画しているランプハウジングを備えており、
 前記センサユニットは、前記灯室内に配置されている。
The above sensor system can be configured as follows.
A lamp housing that partitions a lamp chamber that houses the lamp unit;
The sensor unit is disposed in the lamp chamber.
 ランプユニットは、車両の外部に光を供給するという機能ゆえに、車両における遮蔽物の少ない場所に配置されることが一般的である。このような場所にセンサユニットも配置されることにより、車両の外部の情報を効率的に取得できる。 Since the lamp unit has a function of supplying light to the outside of the vehicle, the lamp unit is generally arranged in a place where there is little shielding in the vehicle. By arranging the sensor unit in such a place, information outside the vehicle can be efficiently acquired.
 また、高さ検出情報を車両のオートレベリングシステムから取得する場合、高さ検出情報をランプユニットと共用しうる。この場合、効率的なシステムの設計が可能である。 Also, when the height detection information is acquired from the vehicle auto leveling system, the height detection information can be shared with the lamp unit. In this case, efficient system design is possible.
 上記のセンサシステムは、以下のように構成されうる。
 前記少なくとも一つのセンサユニットは、LIDARセンサユニット、カメラユニット、およびミリ波センサユニットの少なくとも一つである。
The above sensor system can be configured as follows.
The at least one sensor unit is at least one of a LIDAR sensor unit, a camera unit, and a millimeter wave sensor unit.
 これらのセンサユニットは、車両の外部の情報を取得するために有用である一方、取得される情報に対応するデータ量が非常に多いことが知られている。しかしながら、前述のように、センサユニットの検出範囲の一部である領域に関連付けられた信号のみが外部情報を取得するためのデータ生成処理に供される。よって、これらのセンサユニットを使用しつつも、車両の運転支援に必要とされる信号処理に係る負荷の増大を抑制できる。 While these sensor units are useful for acquiring information outside the vehicle, it is known that the amount of data corresponding to the acquired information is very large. However, as described above, only a signal associated with an area that is a part of the detection range of the sensor unit is subjected to a data generation process for acquiring external information. Therefore, while using these sensor units, it is possible to suppress an increase in load related to signal processing required for driving support of the vehicle.
One aspect for achieving the second object is a sensor system mounted on a vehicle, comprising:
a plurality of sensor units that detect information outside the vehicle and each output a signal corresponding to the information; and
a signal processing device that acquires the signals,
wherein the plurality of sensor units include a first sensor unit that detects first information outside the vehicle and outputs a first signal corresponding to the first information, and a second sensor unit that detects second information outside the vehicle and outputs a second signal corresponding to the second information, and
the signal processing device determines, based on the first signal and the second signal, whether the first information and the second information include the same reference target, and, when it is determined that the first information and the second information include the same reference target, detects a difference between the position of the reference target specified by the first information and the position of the reference target specified by the second information.
A plurality of sensor units mounted on a vehicle may become misaligned due to vibration during traveling or with the passage of time. The occurrence of such misalignment leads to a phenomenon in which, although the same reference target is detected by the plurality of sensor units, the specified position of the reference target differs among the plurality of sensor units. Therefore, by detecting the above difference, it is possible to recognize that a positional deviation has occurred in at least one of the plurality of sensor units.
For example, when the magnitude of the difference exceeds a predetermined value, the user can be notified. The user can then take appropriate measures to correct the misalignment. Therefore, the detection accuracy of the plurality of sensor units required for driving support of the vehicle can be maintained.
The above sensor system can be configured so that the signal processing device acquires the change over time of the position of the reference target specified by the first information and of the position of the reference target specified by the second information, and specifies the sensor unit that needs correction based on the change over time.
What can be determined from the difference in the positions of the reference target obtained by performing the above processing only once is merely the fact that a positional deviation has occurred in at least one of the first sensor unit and the second sensor unit. As described above, by monitoring the change over time of the position of the reference target specified by each sensor unit, the sensor unit that needs correction can be identified. Therefore, the detection accuracy of the plurality of sensor units required for driving support of the vehicle can be maintained more easily.
Alternatively, the above sensor system can be configured as follows:
the plurality of sensor units include a third sensor unit that detects third information outside the vehicle and outputs a third signal corresponding to the third information; and
the signal processing device determines, based on the first signal, the second signal, and the third signal, whether the first information, the second information, and the third information include the same reference target, and, when it is determined that the first information, the second information, and the third information include the same reference target, specifies the sensor unit that needs correction based on the differences among the position of the reference target specified by the first information, the position of the reference target specified by the second information, and the position of the reference target specified by the third information.
According to such a configuration, the sensor unit that needs correction can be specified, for example, based on the principle of majority decision, without repeating the processing of specifying the position of the reference target. Therefore, the detection accuracy of the plurality of sensor units required for driving support of the vehicle can be maintained more easily.
The reference target is preferably one that can provide a positional reference in the height direction. This is because a positional reference in the height direction tends to vary relatively little with the traveling state of the vehicle, making it easier to suppress an increase in the processing load of the signal processing device.
The above sensor system can be configured as follows: a lamp housing that defines a lamp chamber accommodating a lamp unit is provided, and at least one of the plurality of sensor units is disposed in the lamp chamber.
Because of its function of supplying light to the outside of the vehicle, the lamp unit is generally arranged in a place with few obstructions. By also arranging a sensor unit in such a place, information outside the vehicle can be acquired efficiently.
The above sensor system can be configured so that each of the plurality of sensor units is a LiDAR sensor unit, a camera unit, or a millimeter wave sensor unit.
In this specification, a “sensor unit” means a structural unit of a component that has a desired information detection function and that can be distributed by itself as a single unit.
In this specification, a “lamp unit” means a structural unit of a component that has a desired lighting function and that can be distributed by itself as a single unit.
One aspect for achieving the third object is an image data generation device mounted on a vehicle, comprising:
an input interface that receives a signal corresponding to an image acquired by at least one camera that captures the area behind the driver's seat of the vehicle;
a processor that generates, based on the signal, first image data corresponding to a first monitoring image displayed on a display device; and
an output interface that outputs the first image data to the display device,
wherein the processor determines a reference object included in the first monitoring image, determines whether the reference object is included in a predetermined area in the first monitoring image, generates, when it is determined that the reference object is not included in the predetermined area, second image data corresponding to a second monitoring image that includes the reference object in the predetermined area so that the second monitoring image is displayed on the display device, and outputs the second image data to the display device through the output interface.
According to such a configuration, either the first image data or the second image data is generated according to the position of the reference object, so that the first monitoring image or the second monitoring image including the reference object in the predetermined area can continue to be displayed on the display device. Therefore, a situation in which the driver's rearward visibility is impaired depending on the state of the vehicle can be avoided. That is, an electronic mirror technology with further improved rear visibility can be provided.
The above image data generation device can be configured so that the processor generates the first image data based on the signal corresponding to a first portion of the image and generates the second image data based on the signal corresponding to a second portion of the image.
If the entire original image acquired by the camera is displayed on the display device, a wide range can be visually recognized, but it is inevitable that objects displayed in the monitoring image become small. According to the above configuration, the first monitoring image and the second monitoring image are generated by clipping only the minimum necessary area from the original image, so that it is possible to both secure the field of view and prevent a deterioration in visibility.
Alternatively, the above image data generation device can be configured so that the processor generates the first image data based on the image acquired when the angle of view of the camera is a first angle of view, changes the angle of view of the camera to a second angle of view wider than the first angle of view, and generates the second image data based on the image acquired when the angle of view of the camera is the second angle of view.
If a camera with a wide angle of view is used from the beginning, a wide range can be visually recognized, but it is inevitable that objects displayed in the monitoring image on the display device become small. According to the above configuration, the angle of view is widened to generate the second monitoring image only when necessary, so that it is possible to both secure the field of view and prevent a deterioration in visibility.
Alternatively, the above image data generation device can be configured so that the processor generates the first image data based on the image acquired when the optical axis of the camera faces a first direction, changes the direction of the optical axis of the camera to a second direction different from the first direction, and generates the second image data based on the image acquired when the optical axis of the camera faces the second direction.
If a camera with a wide angle of view is used from the beginning, a wide range can be visually recognized, but it is inevitable that objects displayed in the monitoring image on the display device become small. According to the above configuration, a monitoring image in which the reference object is included in the predetermined area can continue to be generated without changing the angle of view. Since the angle of view only needs to be set so that objects displayed in the monitoring image can be visually recognized appropriately, it is possible to both secure the field of view and prevent a deterioration in visibility.
Alternatively, the above image data generation device can be configured so that the at least one camera includes a first camera and a second camera, and the processor generates the first image data based on the signal acquired from the first camera and generates the second image data based on the signal acquired from the second camera.
If a camera with a wide angle of view is used from the beginning, a wide range can be visually recognized, but it is inevitable that objects displayed in the monitoring image on the display device become small. According to the above configuration, a monitoring image in which the reference object is included in the predetermined area can continue to be generated without changing the angle of view. Since the angle of view of each camera only needs to be set so that objects displayed in the monitoring image can be visually recognized appropriately, it is possible to both secure the field of view and prevent a deterioration in visibility.
The above image data generation device can be configured so that the reference object can be designated by the user via the first monitoring image.
According to such a configuration, the flexibility and freedom in setting the reference object used for generating the second monitoring image can be increased.
The reference object may be a part of the rear portion of the vehicle or a part of a vehicle located behind the vehicle.
FIG. 1 illustrates the configuration of a sensor system according to a first embodiment.
FIG. 2 illustrates an operation example of the sensor system of FIG. 1.
FIG. 3 illustrates the position of the sensor system in a vehicle.
FIG. 4 illustrates the configuration of a sensor system according to a second embodiment.
FIG. 5 illustrates processing performed by the sensor system of FIG. 4.
FIG. 6 illustrates a configuration in which a sensor unit in the sensor system of FIG. 4 is disposed in a lamp chamber.
FIG. 7 illustrates the functional configuration of an image data generation device according to a third embodiment.
FIG. 8 shows an example of a vehicle on which the image data generation device is mounted.
FIG. 9 illustrates an operation flow of the image data generation device.
FIG. 10 illustrates operation results of the image data generation device.
FIG. 11 shows a first example of the operation of the image data generation device.
FIG. 12 shows a second example of the operation of the image data generation device.
FIG. 13 shows a third example of the operation of the image data generation device.
FIG. 14 shows a fourth example of the operation of the image data generation device.
Hereinafter, exemplary embodiments will be described in detail with reference to the accompanying drawings. In each drawing used in the following description, the scale is changed as appropriate so that each member has a recognizable size.
In the accompanying drawings, the arrow F indicates the forward direction of the illustrated structure. The arrow B indicates the rearward direction of the illustrated structure. The arrow L indicates the leftward direction of the illustrated structure. The arrow R indicates the rightward direction of the illustrated structure. “Left” and “right” used in the following description indicate the left and right directions as viewed from the driver's seat.
FIG. 1 schematically shows the configuration of a sensor system 1 according to the first embodiment. The sensor system 1 is mounted on a vehicle.
The sensor system 1 includes a LiDAR sensor unit 11. The LiDAR sensor unit 11 has a configuration for emitting invisible light and a configuration for detecting return light resulting from reflection of the invisible light by an object existing at least outside the vehicle. The LiDAR sensor unit 11 may include a scanning mechanism that sweeps the invisible light by changing the emission direction (that is, the detection direction) as necessary. In the present embodiment, infrared light having a wavelength of 905 nm is used as the invisible light.
The LiDAR sensor unit 11 can acquire the distance to the object associated with the return light based on, for example, the time from when the invisible light is emitted in a certain direction until the return light is detected. Further, by accumulating such distance data in association with the detection positions, information related to the shape of the object associated with the return light can be acquired. In addition to or instead of this, information related to attributes such as the material of the object associated with the return light can be acquired based on the difference between the waveforms of the emitted light and the return light.
The LiDAR sensor unit 11 is configured to output a signal S1 corresponding to the information outside the vehicle detected as described above.
The sensor system 1 includes a signal processing device 12. The signal processing device 12 can be realized by a general-purpose microprocessor that operates in cooperation with a general-purpose memory. Examples of the general-purpose microprocessor include a CPU, an MPU, and a GPU. Examples of the general-purpose memory include a ROM and a RAM. In this case, the ROM can store a computer program that realizes the processing described later. The general-purpose microprocessor designates at least a part of the program stored in the ROM, loads it onto the RAM, and executes the processing in cooperation with the RAM. The signal processing device 12 may be realized by a dedicated integrated circuit, such as a microcontroller, an ASIC, or an FPGA, capable of executing a computer program that realizes the processing described later. The signal processing device 12 may be realized by a combination of a general-purpose microprocessor and a dedicated integrated circuit.
Specifically, the signal processing device 12 is configured to process the signal S1 output from the LiDAR sensor unit 11 and generate LiDAR data corresponding to the detected information outside the vehicle. The LiDAR data is used for driving support of the vehicle.
The signal processing device 12 is configured to acquire reference height information H from an auto leveling system mounted on the vehicle. The auto leveling system is a system that adjusts the direction of the optical axis of the headlamp in the vertical direction of the vehicle based on the pitch angle of the vehicle. In the process in which the auto leveling system adjusts the direction of the optical axis of the headlamp, a reference height (for example, the direction of the adjusted optical axis) is determined. The reference height information H indicates this reference height.
Based on the acquired reference height information H, the signal processing device 12 is configured to generate the LiDAR data using the signal S1 associated with a region corresponding to the reference height within the detection range of the LiDAR sensor unit 11. This operation will be described in detail with reference to FIG. 2.
 図2の(A)は、センサシステム1が車両100の前部に搭載されている例を示している。この場合、LiDARセンサユニット11は、車両100の少なくとも前方の情報を検出するように構成される。矢印Dは、車両100の上下方向におけるLiDARセンサユニット11の検出基準方向を示している。 2A shows an example in which the sensor system 1 is mounted on the front portion of the vehicle 100. FIG. In this case, the LiDAR sensor unit 11 is configured to detect information at least ahead of the vehicle 100. An arrow D indicates the detection reference direction of the LiDAR sensor unit 11 in the vertical direction of the vehicle 100.
 図2の(B)は、LiDARセンサユニット11の検出範囲Aを示している。符号hは、基準高さ情報Hにより示される基準高さを表している。LiDARセンサユニット11から出力される信号S1には、検出範囲A内の情報が含まれうる。信号処理装置12は、検出範囲Aの一部である領域Pに関連付けられた信号S1を用い、領域P内で検出された情報に対応するLiDARデータを生成する。領域Pは、基準高さhに対応する領域である。 (B) of FIG. 2 shows the detection range A of the LiDAR sensor unit 11. The symbol h represents the reference height indicated by the reference height information H. The signal S1 output from the LiDAR sensor unit 11 may include information within the detection range A. The signal processing device 12 generates LiDAR data corresponding to information detected in the region P using the signal S1 associated with the region P that is a part of the detection range A. The region P is a region corresponding to the reference height h.
 本明細書において「基準高さhに対応する」という表現の意味は、オートレベリングシステムによって定められた基準高さhと車両100の上下方向における領域Pの高さが一致する場合に限られない。基準高さhと領域Pの高さが所定の対応関係を有していれば、両者が異なっていてもよい。 In this specification, the meaning of the expression “corresponding to the reference height h” is not limited to the case where the reference height h determined by the auto leveling system matches the height of the region P in the vertical direction of the vehicle 100. . As long as the reference height h and the height of the region P have a predetermined correspondence, the two may be different.
 信号処理装置12は、領域Pに対応する信号のみを信号S1として取得し、当該信号S1をデータ生成処理に供してもよいし、検出範囲A全体に対応する信号S1を取得した後に領域Pに対応する信号のみを抽出してデータ生成処理に供してもよい。 The signal processing device 12 may acquire only the signal corresponding to the region P as the signal S1, and may use the signal S1 for data generation processing, or after acquiring the signal S1 corresponding to the entire detection range A, Only the corresponding signal may be extracted and used for data generation processing.
FIG. 2C shows a state in which the front end of the vehicle 100 is tilted upward relative to the rear end. The detection reference direction D of the LiDAR sensor unit 11 points above the reference height h. Therefore, as shown in FIG. 2D, the detection range A of the LiDAR sensor unit 11 shifts upward from the state shown in FIG. 2B. Even in this case, the signal processing device 12 uses, based on the acquired reference height information H, the signal S1 associated with the region P that is part of the detection range A to generate LiDAR data corresponding to the information detected within the region P.
FIG. 2E shows a state in which the front end of the vehicle 100 is tilted downward relative to the rear end. The detection reference direction D of the LiDAR sensor unit 11 points below the reference height h. Therefore, as shown in FIG. 2F, the detection range A of the LiDAR sensor unit 11 shifts downward from the state shown in FIG. 2B. Even in this case, the signal processing device 12 uses, based on the acquired reference height information H, the signal S1 associated with the region P that is part of the detection range A to generate LiDAR data corresponding to the information detected within the region P.
When, for example, information outside the vehicle 100 at the reference height h is required for driving assistance, it is preferable that the detection reference direction D of the LiDAR sensor unit 11 in the vertical direction of the vehicle 100 coincides with the reference height h. However, because the detection reference direction D of the LiDAR sensor unit 11 shifts in the vertical direction of the vehicle 100 as the pitch angle of the vehicle 100 changes, the detection range A of the LiDAR sensor unit 11 is generally set with redundancy in the vertical direction of the vehicle 100.
According to the configuration described above, the signal processing device 12 identifies the region P using the reference height information H that is determined separately by the auto-leveling system based on the pitch angle of the vehicle 100. Therefore, regardless of the pitch angle of the vehicle 100, the region P containing the information required for driving assistance can be identified easily and with high reliability. Furthermore, since only the signal S1 associated with the region P, which is part of the redundant detection range A, is subjected to the data generation processing for acquiring external information, an increase in the signal processing load required for driving assistance of the vehicle 100 can be suppressed.
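The way the region P could be identified within a redundant detection range, regardless of the pitch angle, can be sketched as follows. This is an illustrative assumption only: the sensor is modeled as a set of evenly spaced elevation channels, and the band of elevations corresponding to the reference height is approximated as a fixed angular window around zero; neither assumption comes from the embodiment.

def region_p_channels(num_channels, fov_deg, pitch_deg, band_half_angle_deg=2.0):
    # Elevation of each channel relative to the road, including the pitch of
    # the vehicle. Channel 0 is assumed to look at the top of the field of
    # view and the channels are assumed to be evenly spaced.
    step = fov_deg / (num_channels - 1)
    elevations = [fov_deg / 2 - i * step + pitch_deg for i in range(num_channels)]
    # Region P is taken to be the band of elevations that corresponds to the
    # reference height at the distances of interest (here around 0 degrees).
    return [i for i, e in enumerate(elevations)
            if -band_half_angle_deg <= e <= band_half_angle_deg]

# With no pitch the band sits around the middle channels; with a 3-degree
# nose-up pitch the same band is covered by channels that point further down.
print(region_p_channels(16, 30.0, 0.0))   # [7, 8]
print(region_p_channels(16, 30.0, 3.0))   # [8, 9, 10]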
As shown in FIG. 1, the sensor system 1 may include a leveling adjustment mechanism 13. The leveling adjustment mechanism 13 includes an actuator capable of changing the detection reference direction D of the LiDAR sensor unit 11 in the vertical direction of the vehicle 100. A configuration similar to a known mechanism used in auto-leveling systems to change the direction of the optical axis of a headlamp in the vertical direction of the vehicle can be employed.
The leveling adjustment mechanism 13 can be communicably connected to the signal processing device 12. The leveling adjustment mechanism 13 is configured to adjust the detection reference direction D of the LiDAR sensor unit 11 in the vertical direction of the vehicle based on the reference height information H acquired by the signal processing device 12.
For example, when the detection reference direction D points above the reference height h as shown in FIG. 2C, the signal processing device 12 recognizes this fact based on the reference height information H. The signal processing device 12 then outputs a signal for driving the leveling adjustment mechanism 13 so as to eliminate the deviation of the detection reference direction D from the reference height h. The leveling adjustment mechanism 13 operates based on that signal and directs the detection reference direction D of the LiDAR sensor unit 11 downward.
Conversely, when the detection reference direction D points below the reference height h as shown in FIG. 2E, the signal processing device 12 outputs a signal that operates the leveling adjustment mechanism 13 so that the detection reference direction D of the LiDAR sensor unit 11 is directed upward.
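A minimal sketch of the correction described in the two preceding paragraphs is given below, assuming the deviation of the detection reference direction D from the direction associated with the reference height h is available as an angle; the function name, the sign convention, and the step limit are hypothetical.

def leveling_command(detection_direction_deg, target_direction_deg, max_step_deg=0.5):
    # A positive detection_direction_deg means D points above the reference
    # height (FIG. 2C), so the returned step is negative (drive downward);
    # a negative value means D points below it (FIG. 2E), so the step is
    # positive (drive upward). The step is clamped to the actuator's range.
    error = target_direction_deg - detection_direction_deg
    return max(-max_step_deg, min(max_step_deg, error))

# D points 1.2 degrees above the reference direction: command a downward step.
print(leveling_command(detection_direction_deg=1.2, target_direction_deg=0.0))  # -0.5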
According to such a configuration, the change in the detection reference direction D of the LiDAR sensor unit 11 caused by changes in the pitch angle of the vehicle 100 can be suppressed, and therefore the change in the position of the region P within the detection range A of the LiDAR sensor unit 11 can also be suppressed. That is, the position of the region P identified by the signal processing device 12 can remain at the position shown in FIG. 2B regardless of the pitch angle of the vehicle 100. The processing load on the signal processing device 12 for identifying the region P can thus be reduced further.
As shown in FIG. 1, the sensor system 1 may include a camera unit 14. The camera unit 14 is a device for acquiring image information outside the vehicle. The camera unit 14 is configured to output a signal S2 corresponding to the acquired image information. The camera unit 14 is an example of a sensor unit.
The signal processing device 12 is configured to process the signal S2 output from the camera unit 14 and generate camera data corresponding to the acquired image information outside the vehicle. The camera data is used for driving assistance of the vehicle.
Based on the acquired reference height information H, the signal processing device 12 is configured to generate camera data using the signal S2 associated with a region of the field of view of the camera unit 14 that corresponds to the reference height h. That is, the signal processing device 12 uses the signal S2 associated with a region that is part of the field of view of the camera unit 14 and generates camera data corresponding to the image contained in that region. The field of view of the camera unit 14 is an example of a detection range of a sensor unit.
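As a sketch of how the signal S2 could be restricted to the region corresponding to the reference height h, the following assumes that the reference height projects onto a known image row via the camera calibration; the row mapping and the band size are illustrative assumptions, not part of the embodiment.

def crop_reference_band(image_rows, row_of_reference_height, band_rows=80):
    # Return only the image rows around the row onto which the reference
    # height h projects, so that camera data generation works on a sub-image
    # instead of the full field of view. The mapping from the reference
    # height to row_of_reference_height is assumed to come from the camera
    # calibration and the reference height information H.
    top = max(0, row_of_reference_height - band_rows // 2)
    bottom = min(len(image_rows), row_of_reference_height + band_rows // 2)
    return image_rows[top:bottom]

# A 720-row frame reduced to an 80-row band centred on row 400.
frame = [[0] * 1280 for _ in range(720)]
band = crop_reference_band(frame, row_of_reference_height=400)
print(len(band))  # 80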
In this case, the LiDAR sensor unit 11 is an example of a first sensor unit. The information outside the vehicle 100 detected by the LiDAR sensor unit 11 is an example of first information. The signal S1 output from the LiDAR sensor unit 11 is an example of a first signal. The LiDAR data generated by the signal processing device 12 is an example of first data.
In this case, the camera unit 14 is an example of a second sensor unit. The image outside the vehicle 100 acquired by the camera unit 14 is an example of second information. The signal S2 output from the camera unit 14 is an example of a second signal. The camera data generated by the signal processing device 12 is an example of second data.
In the above configuration, the LiDAR data based on the signal S1 output from the LiDAR sensor unit 11 and the camera data based on the signal S2 output from the camera unit 14 are both generated based on the common reference height information H. That is, the generated LiDAR data and camera data both contain information associated with the reference height h. Integrated use of both sets of data for driving assistance is therefore facilitated. Moreover, both the LiDAR data and the camera data are generated using only the signals associated with regions that are part of the detection ranges of the respective sensor units. Accordingly, even when both sets of data are used in an integrated manner, an increase in the processing load on the signal processing device 12 can be suppressed.
The above description of the leveling adjustment mechanism 13 is also applicable to the camera unit 14.
In the above example, data output from a plurality of sensor units of different types are used for driving assistance in an integrated manner. However, data output from a plurality of sensor units of the same type arranged at relatively distant positions on the vehicle may also be used for driving assistance in an integrated manner.
For example, the sensor system 1 shown in FIG. 1 can be arranged at least at two of the left front corner LF, the right front corner RF, the left rear corner LB, and the right rear corner RB of the vehicle 100 shown in FIG. 3. Here, a case where two sensor systems 1 are arranged at the left front corner LF and the right front corner RF of the vehicle 100 is taken as an example.
In this example, the LiDAR sensor unit 11 arranged at the left front corner LF is an example of a first sensor unit. The information outside the vehicle 100 detected by the LiDAR sensor unit 11 arranged at the left front corner LF is an example of first information. The signal S1 output from the LiDAR sensor unit 11 arranged at the left front corner LF is an example of a first signal. The LiDAR data generated by the signal processing device 12 arranged at the left front corner LF is an example of first data.
In this example, the LiDAR sensor unit 11 arranged at the right front corner RF is an example of a second sensor unit. The information outside the vehicle 100 detected by the LiDAR sensor unit 11 arranged at the right front corner RF is an example of second information. The signal S1 output from the LiDAR sensor unit 11 arranged at the right front corner RF is an example of a second signal. The LiDAR data generated by the signal processing device 12 arranged at the right front corner RF is an example of second data.
The LiDAR data generated by the signal processing device 12 arranged at the left front corner LF and the LiDAR data generated by the signal processing device 12 arranged at the right front corner RF are used for driving assistance by the control device 101. An example of the control device 101 is an ECU. The ECU can be realized by a general-purpose microprocessor operating in cooperation with a general-purpose memory. Examples of the general-purpose microprocessor include a CPU, an MPU, and a GPU. Examples of the general-purpose memory include a ROM and a RAM. The ECU may also be realized by a dedicated integrated circuit such as a microcontroller, an ASIC, or an FPGA, or by a combination of a general-purpose microprocessor and a dedicated integrated circuit.
In the above configuration, the LiDAR data based on the signal S1 output from the LiDAR sensor unit 11 arranged at the left front corner LF and the LiDAR data based on the signal S1 output from the LiDAR sensor unit 11 arranged at the right front corner RF are both generated based on the common reference height information H. That is, the LiDAR data generated at the two locations both contain information associated with the reference height h. Integrated use of both sets of data for driving assistance is therefore facilitated. Moreover, the LiDAR data generated at the two locations are both generated using only the signal S1 associated with the region P that is part of the detection range A of the respective LiDAR sensor unit 11. Accordingly, even when both sets of data are used in an integrated manner, an increase in the processing load on the control device 101 can be suppressed.
As shown in FIG. 1, the signal processing device 12 can be configured to acquire map information M. The map information M can be, for example, information used by the navigation system of the vehicle 100. The map information M may be stored in advance in a storage device mounted on the vehicle 100, or may be downloaded from an external network periodically or as needed.
The signal processing device 12 may be configured to associate the LiDAR data generated based on the signal S1 output from the LiDAR sensor unit 11 with the map information M. For example, when the LiDAR data indicates the presence of an object outside the vehicle 100, whether the object is a structure such as a guardrail or a traffic sign can be determined by checking it against the map information M.
According to such a configuration, the LiDAR data and the map information M can be used for driving assistance in an integrated manner. As described above, an increase in the signal processing load related to generating the LiDAR data can be suppressed, so an increase in the processing load of integrated driving assistance that combines the LiDAR data with the map information M can also be suppressed as a whole.
The above map information M can include three-dimensional information. In this case, the signal processing device 12 can associate the LiDAR data with the part of the map information M that corresponds to the reference height h. That is, two-dimensional map information corresponding to the reference height h can be extracted from the map information M containing three-dimensional information and associated with the LiDAR data.
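A minimal sketch of extracting the two-dimensional slice at the reference height h from three-dimensional map information is shown below; the feature representation and the height tolerance are hypothetical and serve only to illustrate the association described above.

def two_dimensional_slice(map_features, reference_height_h, tolerance=1.0):
    # Extract the map features whose height lies near the reference height h
    # and drop the height coordinate, so that the result can be matched
    # directly against the LiDAR data generated for the region P.
    # map_features is assumed to be a list of dicts with 'x', 'y', 'z' and
    # 'kind' keys; the structure is illustrative only.
    return [{"x": f["x"], "y": f["y"], "kind": f["kind"]}
            for f in map_features
            if abs(f["z"] - reference_height_h) <= tolerance]

features = [
    {"x": 12.0, "y": 3.5, "z": 0.7, "kind": "guardrail"},
    {"x": 30.0, "y": -2.0, "z": 5.5, "kind": "overhead_sign"},
]
# Only the guardrail remains; the overhead sign is outside the height band.
print(two_dimensional_slice(features, reference_height_h=0.5))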
According to such a configuration, the amount of map data used for integrated driving assistance can be reduced, so an increase in the processing load of integrated driving assistance can be suppressed further.
The map information M acquired by the signal processing device 12 may instead be provided as two-dimensional information from the start.
As shown in FIG. 1, the sensor system 1 may include a left front lamp device 15. The left front lamp device 15 may include a lamp housing 51 and a translucent cover 52. The lamp housing 51 defines a lamp chamber 53 together with the translucent cover 52. The left front lamp device 15 is mounted at the left front corner LF of the vehicle 100 shown in FIG. 3.
As shown in FIG. 1, the left front lamp device 15 may include a lamp unit 54. The lamp unit 54 is a device that emits visible light toward the outside of the vehicle 100. The lamp unit 54 is accommodated in the lamp chamber 53. Examples of the lamp unit 54 include a headlamp unit, a vehicle width lamp unit, a direction indicator lamp unit, and a fog lamp unit.
At least one of the LiDAR sensor unit 11 and the camera unit 14 described above can be accommodated in the lamp chamber 53. The translucent cover 52 is therefore formed of a material that transmits not only the visible light emitted from the lamp unit 54 but also light of the wavelengths to which the sensor unit accommodated in the lamp chamber 53 is sensitive.
Because of its function of supplying light to the outside of the vehicle 100, the left front lamp device 15 is generally arranged at a location with few obstructions, such as the left front corner LF described above. By also arranging a sensor unit at such a location, information outside the vehicle 100 can be acquired efficiently.
Further, when the reference height information H is acquired from the auto-leveling system of the vehicle 100, the reference height information H can be shared with the lamp unit 54. In this case, an efficient system design is possible.
Accordingly, a right front lamp device having a configuration laterally symmetrical to the left front lamp device 15 can be mounted at the right front corner RF of the vehicle 100 shown in FIG. 3. A left rear lamp device can be mounted at the left rear corner LB of the vehicle 100. In this case, examples of the lamp unit included in the left rear lamp device include a brake lamp unit, a tail lamp unit, a vehicle width lamp unit, and a reversing lamp unit. A right rear lamp device having a configuration laterally symmetrical to the left rear lamp device can be mounted at the right rear corner RB of the vehicle 100. In any of these lamp devices, the sensor unit described above can be accommodated in the lamp chamber defined by the lamp housing and the translucent cover.
The first embodiment is merely an example for facilitating understanding of the present disclosure. The configuration according to the first embodiment can be changed or improved as appropriate without departing from the gist of the present disclosure.
In the first embodiment, an example in which the sensor system 1 includes at least one of the LiDAR sensor unit 11 and the camera unit 14 has been described. However, the sensor system 1 can be configured to include at least one of a LiDAR sensor unit, a camera unit, and a millimeter wave sensor unit. The camera unit can include a visible light camera unit and an infrared camera unit.
The millimeter wave sensor unit has a configuration for transmitting a millimeter wave and a configuration for receiving the reflected wave produced when the millimeter wave is reflected by an object existing outside the vehicle 100. Examples of millimeter wave frequencies include 24 GHz, 26 GHz, 76 GHz, and 79 GHz. The millimeter wave sensor unit can acquire, for example, the distance to the object associated with a reflected wave based on the time from when the millimeter wave is transmitted in a certain direction until the reflected wave is received. By accumulating such distance data in association with the detection position, information on the motion of the object associated with the reflected wave can also be acquired.
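The distance computation mentioned above follows directly from the round-trip propagation time; a short sketch in Python (the 400 ns example value is illustrative only):

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def range_from_round_trip(delay_seconds):
    # Distance to the reflecting object from the time between transmitting
    # the millimeter wave and receiving its reflection (round trip, hence /2).
    return SPEED_OF_LIGHT * delay_seconds / 2.0

# A reflection received 400 ns after transmission corresponds to about 60 m.
print(round(range_from_round_trip(400e-9), 1))  # 60.0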
While these sensor units are useful for acquiring information outside the vehicle 100, the amount of data corresponding to the acquired information is known to be very large. However, as described above, only the signal associated with a region that is part of the detection range of the sensor unit is subjected to the data generation processing for acquiring external information. Therefore, while using these sensor units, an increase in the signal processing load required for driving assistance of the vehicle 100 can be suppressed.
In the first embodiment, the reference height information H is acquired from the auto-leveling system. However, as long as information indicating the reference height h is obtained, the reference height information H may instead be acquired from a vehicle height sensor or the like.
At least part of the functions of the signal processing device 12 described above can be realized by the control device 101 shown in FIG. 3.
FIG. 4 shows a configuration of a sensor system 2 according to a second embodiment. The sensor system 2 is mounted on the vehicle 100 shown in FIG. 3.
As shown in FIG. 4, the sensor system 2 includes a plurality of sensor units 20. Each of the plurality of sensor units 20 is a device that detects information outside the vehicle 100 and outputs a signal corresponding to that information. The plurality of sensor units 20 includes a first sensor unit 21 and a second sensor unit 22.
The first sensor unit 21 is configured to detect first information outside the vehicle 100 and output a first signal S1 corresponding to the first information. The first sensor unit 21 can be any of a LiDAR sensor unit, a camera unit, and a millimeter wave sensor unit.
The second sensor unit 22 is configured to detect second information outside the vehicle 100 and output a second signal S2 corresponding to the second information. The second sensor unit 22 can be any of a LiDAR sensor unit, a camera unit, and a millimeter wave sensor unit.
The LiDAR sensor unit has a configuration for emitting invisible light and a configuration for detecting the return light produced when the invisible light is reflected by an object existing at least outside the vehicle. The LiDAR sensor unit can include a scanning mechanism that sweeps the invisible light by changing the emission direction (that is, the detection direction) as needed. For example, infrared light having a wavelength of 905 nm can be used as the invisible light.
The camera unit is a device for acquiring an image as information outside the vehicle. The image can include at least one of a still image and a moving image. The camera unit may include a camera sensitive to visible light or a camera sensitive to infrared light.
The millimeter wave sensor unit has a configuration for transmitting a millimeter wave and a configuration for receiving the reflected wave produced when the millimeter wave is reflected by an object existing outside the vehicle 100. Examples of millimeter wave frequencies include 24 GHz, 26 GHz, 76 GHz, and 79 GHz.
The first sensor unit 21 and the second sensor unit 22 can be a plurality of sensor units arranged in different areas of the vehicle 100. For example, the first sensor unit 21 and the second sensor unit 22 can be arranged at the left front corner LF and the right front corner RF of the vehicle 100, respectively. Alternatively, the first sensor unit 21 and the second sensor unit 22 can be arranged at the left front corner LF and the left rear corner LB of the vehicle 100, respectively.
Alternatively, the first sensor unit 21 and the second sensor unit 22 can be a plurality of sensor units arranged in substantially the same area of the vehicle 100. For example, the first sensor unit 21 and the second sensor unit 22 can both be arranged at the left front corner LF of the vehicle 100.
The sensor system 2 includes a signal processing device 30. The signal processing device 30 can be arranged at an arbitrary position in the vehicle 100.
The signal processing device 30 can be realized by a general-purpose microprocessor operating in cooperation with a general-purpose memory. Examples of the general-purpose microprocessor include a CPU, an MPU, and a GPU. Examples of the general-purpose memory include a ROM and a RAM. In this case, the ROM can store a computer program that realizes the processing described later. The general-purpose microprocessor designates at least part of the program stored in the ROM, loads it into the RAM, and executes the processing in cooperation with the RAM. The signal processing device 30 may also be realized by a dedicated integrated circuit such as a microcontroller, an ASIC, or an FPGA capable of executing a computer program that realizes the processing described later, or by a combination of a general-purpose microprocessor and a dedicated integrated circuit.
First, the signal processing device 30 acquires the first signal S1 from the first sensor unit 21. In other words, by receiving the first signal S1, the signal processing device 30 acquires the first information detected by the first sensor unit 21 (STEP21).
Subsequently, the signal processing device 30 acquires the second signal S2 from the second sensor unit 22. In other words, by receiving the second signal S2, the signal processing device 30 acquires the second information detected by the second sensor unit 22 (STEP22).
The order in which the processing of STEP21 and the processing of STEP22 are performed may be reversed. The processing of STEP21 and the processing of STEP22 may also be performed simultaneously.
Next, the signal processing device 30 determines, based on the first signal S1 and the second signal S2, whether the first information and the second information include the same feature (STEP23).
If it is determined that the first information and the second information do not include the same feature (N in STEP23), the processing by the signal processing device 30 ends.
If it is determined that the first information and the second information include the same feature (Y in STEP23), the signal processing device 30 determines whether that feature can serve as a reference target (STEP24).
In this specification, a "reference target" means a feature that can be detected by the sensor units 20 and can provide reference position information. Examples of the reference target include the license plate of a preceding vehicle, a guardrail, a sound insulation wall, a traffic light, a traffic sign, and a center line. That is, a feature whose height from the road surface or distance from the road shoulder is legally prescribed, and whose position can therefore be identified with relatively high accuracy once its presence is detected, can serve as a reference target.
Even if the information detected by the two sensor units 20 includes such a feature, that feature cannot always serve as a reference target. For example, even if the license plate of a preceding vehicle is detected as a feature, the license plate is determined to be unusable as a reference target if its position is not fixed because of changes in the relative position of the preceding vehicle. That is, the condition for determining that a feature can serve as a reference target can be that the length of time over which constant position information can be read from the detected feature exceeds a predetermined value.
If it is determined that the detected feature cannot serve as a reference target (N in STEP24), the processing by the signal processing device 30 ends.
If it is determined that the detected feature can serve as a reference target (Y in STEP24), the signal processing device 30 detects the difference between the position of the reference target identified from the first information and the position of the reference target identified from the second information (STEP25).
The plurality of sensor units 20 mounted on a vehicle may become misaligned due to vibration during traveling or the passage of time. Such a misalignment manifests itself as a phenomenon in which, although the same reference target is detected by the plurality of sensor units 20, the identified position of that reference target differs among the plurality of sensor units 20. Therefore, by detecting the above difference, it can be recognized that a misalignment has occurred in at least one of the plurality of sensor units 20.
For example, when the magnitude of the difference exceeds a predetermined value, the user can be notified. The user can then take appropriate measures to correct the misalignment. Therefore, the detection accuracy of the plurality of sensor units 20 required for driving assistance of the vehicle 100 can be maintained.
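The flow from STEP21 to STEP25 can be sketched as follows, assuming that each sensor unit reports the position of the shared feature over a short observation window; the track representation, the stability criterion, and the thresholds are hypothetical and only illustrate the kind of processing described.

def check_alignment(first_track, second_track, min_stable_seconds=2.0, max_offset=0.1):
    # Each track is a dict with a 'positions' list (height or lateral
    # position of the shared feature over time, in metres) and a 'duration'
    # in seconds. The feature only qualifies as a reference target if its
    # position stays constant for longer than min_stable_seconds.
    def is_reference_target(track):
        positions = track["positions"]
        stable = max(positions) - min(positions) < 0.05
        return stable and track["duration"] >= min_stable_seconds

    if not (is_reference_target(first_track) and is_reference_target(second_track)):
        return None  # the shared feature cannot serve as a reference target

    difference = abs(first_track["positions"][-1] - second_track["positions"][-1])
    return difference > max_offset  # True -> notify the user of a misalignment

left_front = {"positions": [0.81, 0.80, 0.81], "duration": 3.0}
left_rear = {"positions": [0.95, 0.96, 0.95], "duration": 3.0}
print(check_alignment(left_front, left_rear))  # True: offset of about 0.14 m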
As a first specific example of the above processing, consider a case where a left front LiDAR unit serving as the first sensor unit 21 is arranged at the left front corner LF of the vehicle 100 and a left rear LiDAR unit serving as the second sensor unit 22 is arranged at the left rear corner LB of the vehicle 100.
The left front LiDAR unit acquires information on objects existing in an area that includes the left side of the vehicle 100. This information is an example of first information. The left front LiDAR unit outputs the first signal S1 corresponding to the first information. The signal processing device 30 acquires the first signal S1 (STEP21).
The left rear LiDAR unit acquires information on objects existing in an area that includes the left side of the vehicle 100. This information is an example of second information. The left rear LiDAR unit outputs the second signal S2 corresponding to the second information. The signal processing device 30 acquires the second signal S2 (STEP22).
The signal processing device 30 performs information processing based on the first signal S1 and the second signal S2 to determine whether the first information and the second information include the same feature (STEP23). When a guardrail is detected as a feature (Y in STEP23), the signal processing device 30 determines whether the guardrail can serve as a reference target (STEP24).
For example, when the upper edge of a guardrail of constant height is detected over a predetermined period of time, it is determined that the upper edge of the guardrail can serve as a reference target (Y in STEP24). In this case, the signal processing device 30 detects the difference between the position (height) of the upper edge of the guardrail identified through the left front LiDAR unit and the position (height) of the upper edge of the guardrail identified through the left rear LiDAR unit (STEP25).
When the magnitude of the difference exceeds a predetermined value, a notification is issued indicating that a misalignment has occurred in at least one of the left front LiDAR unit and the left rear LiDAR unit.
As a second specific example of the above processing, consider a case where a left front camera unit serving as the first sensor unit 21 is arranged at the left front corner LF of the vehicle 100 and a right front camera unit serving as the second sensor unit 22 is arranged at the right front corner RF of the vehicle 100.
The left front camera unit acquires a first image that includes the area ahead of the vehicle 100. The first image is an example of first information. The left front camera unit outputs the first signal S1 corresponding to the first image. The signal processing device 30 acquires the first signal S1 (STEP21).
The right front camera unit acquires a second image that includes the area ahead of the vehicle 100. The second image is an example of second information. The right front camera unit outputs the second signal S2 corresponding to the second image. The signal processing device 30 acquires the second signal S2 (STEP22).
The signal processing device 30 performs image processing based on the first signal S1 and the second signal S2 to determine whether the first image and the second image include the same feature (STEP23). When a center line is detected as a feature (Y in STEP23), the signal processing device 30 determines whether the center line can serve as a reference target (STEP24).
For example, when a center line whose distance from the road shoulder is constant is detected over a predetermined period of time, it is determined that the center line can serve as a reference target (Y in STEP24). In this case, the signal processing device 30 detects the difference between the position of the center line identified through the left front camera unit and the position of the center line identified through the right front camera unit (STEP25).
When the magnitude of the difference exceeds a predetermined value, a notification is issued indicating that a misalignment has occurred in at least one of the left front camera unit and the right front camera unit.
As a third specific example of the above processing, consider a case where a left front LiDAR unit serving as the first sensor unit 21 and a left front camera unit serving as the second sensor unit 22 are both arranged at the left front corner LF of the vehicle 100.
The left front LiDAR unit acquires information on objects existing in an area that includes the area ahead of the vehicle 100. This information is an example of first information. The left front LiDAR unit outputs the first signal S1 corresponding to the first information. The signal processing device 30 acquires the first signal S1 (STEP21).
The left front camera unit acquires a second image that includes the area ahead of the vehicle 100. The second image is an example of second information. The left front camera unit outputs the second signal S2 corresponding to the second image. The signal processing device 30 acquires the second signal S2 (STEP22).
The signal processing device 30 performs information processing based on the first signal S1 and the second signal S2 to determine whether the first information and the second information include the same feature (STEP23). When a traffic light is detected as a feature (Y in STEP23), the signal processing device 30 determines whether the traffic light can serve as a reference target (STEP24).
For example, when a particular traffic light is detected over a predetermined period of time while the vehicle is stopped at an intersection, it is determined that the traffic light can serve as a reference target (Y in STEP24). In this case, the signal processing device 30 detects the difference between the position of the traffic light identified through the left front LiDAR unit and the position of the traffic light identified through the left front camera unit (STEP25).
When the magnitude of the difference exceeds a predetermined value, a notification is issued indicating that a misalignment has occurred in at least one of the left front LiDAR unit and the left front camera unit.
The reference target is preferably one that can provide a positional reference in the height direction. This is because a positional reference in the height direction tends to vary relatively little with the traveling state of the vehicle 100, which makes it easier to suppress an increase in the processing load on the signal processing device 30.
As shown in FIG. 5, the signal processing device 30 can acquire the change over time in the position of the reference target identified from the first information and in the position of the reference target identified from the second information (STEP26). Specifically, the signal processing device 30 repeats the processing from STEP21 to STEP25 at predetermined timings, and the position of the reference target identified from the first information in the latest run is compared with the position of the reference target identified from the first information in the previous run. Likewise, the position of the reference target identified from the second information in the latest run is compared with the position of the reference target identified from the second information in the previous run. Examples of the predetermined timing include the elapse of a fixed time since the end of the previous run and the input of a processing execution instruction by the user.
In this case, the signal processing device 30 identifies the sensor unit 20 that needs correction based on the acquired change over time (STEP27). For example, when the position of the reference target identified by the first sensor unit 21 has not changed over time while the position of the reference target identified by the second sensor unit 22 has changed over time, it is determined that the cause of the misalignment lies on the second sensor unit 22 side and that correction is required. The identified sensor unit can be reported to the user.
What can be determined from the difference in the identified positions of the reference target obtained by performing the processing from STEP21 to STEP25 only once is merely the fact that a misalignment has occurred in at least one of the first sensor unit 21 and the second sensor unit 22. By monitoring the change over time in the position of the reference target identified by each sensor unit 20 as described above, the sensor unit 20 that needs correction can be identified. Therefore, the detection accuracy of the plurality of sensor units 20 required for driving assistance of the vehicle 100 can be maintained more easily.
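A minimal sketch of STEP26 and STEP27, under the assumption that the position of the reference target identified through each sensor unit is stored for successive runs; the data layout and the drift threshold are hypothetical.

def drifting_sensors(position_history_by_sensor, max_drift=0.05):
    # From the positions of the reference target identified through each
    # sensor unit over repeated runs (oldest first), report the sensor units
    # whose result has changed over time and therefore need correction.
    needs_correction = []
    for name, history in position_history_by_sensor.items():
        if abs(history[-1] - history[0]) > max_drift:
            needs_correction.append(name)
    return needs_correction

history = {
    "first_sensor_unit": [0.80, 0.80, 0.81],   # stable over time
    "second_sensor_unit": [0.80, 0.88, 0.95],  # drifting -> correction needed
}
print(drifting_sensors(history))  # ['second_sensor_unit']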
As shown in FIG. 4, the plurality of sensor units 20 can include a third sensor unit 23. The third sensor unit 23 can be configured to detect third information outside the vehicle 100 and output a third signal S3 corresponding to the third information. The third sensor unit 23 can be any of a LiDAR sensor unit, a camera unit, and a millimeter wave sensor unit.
The third sensor unit 23 can be arranged in an area of the vehicle 100 different from those of the first sensor unit 21 and the second sensor unit 22. For example, when the first sensor unit 21 and the second sensor unit 22 are arranged at the left front corner LF and the right front corner RF of the vehicle 100, respectively, the third sensor unit 23 can be arranged at the left rear corner LB or the right rear corner RB of the vehicle. Alternatively, the third sensor unit 23 can be arranged in substantially the same area of the vehicle 100 as at least one of the first sensor unit 21 and the second sensor unit 22.
In this case, as shown in FIG. 5, the signal processing device 30 acquires the third signal S3 from the third sensor unit 23. In other words, by receiving the third signal S3, the signal processing device 30 acquires the third information detected by the third sensor unit 23 (STEP28). The order of STEP21, STEP22, and STEP28 is arbitrary. STEP28 may be performed simultaneously with at least one of STEP21 and STEP22.
Next, the signal processing device 30 determines, based on the first signal S1, the second signal S2, and the third signal S3, whether the first information, the second information, and the third information include the same feature (STEP23).
If it is determined that the first information, the second information, and the third information do not include the same feature (N in STEP23), the processing by the signal processing device 30 ends.
If it is determined that the first information, the second information, and the third information include the same feature (Y in STEP23), the signal processing device 30 determines whether that feature can serve as a reference target (STEP24).
If it is determined that the detected feature cannot serve as a reference target (N in STEP24), the processing by the signal processing device 30 ends.
If it is determined that the detected feature can serve as a reference target (Y in STEP24), the signal processing device 30 detects the differences among the position of the reference target identified from the first information, the position of the reference target identified from the second information, and the position of the reference target identified from the third information (STEP25).
The signal processing device 30 then identifies the sensor unit 20 that needs correction based on the differences identified in STEP25 (STEP27). For example, when the position of the reference target identified by the first sensor unit 21 and that identified by the second sensor unit 22 are the same and only the position of the reference target identified by the third sensor unit 23 differs, it is highly probable that the misalignment has occurred in the third sensor unit 23. The identified sensor unit 20 can be reported to the user.
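The three-sensor case can be sketched as a simple outlier check, under the assumption that each sensor unit reports a single position for the shared reference target; the names and the tolerance are hypothetical.

def outlier_sensor(positions_by_sensor, tolerance=0.05):
    # When two of the three sensor units agree on the position of the
    # reference target and one does not, the disagreeing unit is the most
    # likely candidate for correction.
    names = list(positions_by_sensor)
    for candidate in names:
        others = [positions_by_sensor[n] for n in names if n != candidate]
        others_agree = abs(others[0] - others[1]) <= tolerance
        candidate_off = all(abs(positions_by_sensor[candidate] - o) > tolerance
                            for o in others)
        if others_agree and candidate_off:
            return candidate
    return None  # no single outlier could be identified

positions = {"first": 0.80, "second": 0.81, "third": 0.95}
print(outlier_sensor(positions))  # 'third'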
According to such a configuration, the sensor unit 20 that needs correction can be identified without repeating the processing for identifying the position of the reference target. Therefore, the detection accuracy of the plurality of sensor units 20 required for driving assistance of the vehicle 100 can be maintained more easily.
However, in the above example the possibility that the first sensor unit 21 and the second sensor unit 22 are both misaligned cannot be ruled out. Therefore, as described with reference to STEP27, combining this with the processing of monitoring the change over time in the position of the reference target identified through each sensor unit 20 can improve the accuracy of determining which sensor unit 20 needs correction.
As shown in FIG. 6, the sensor system 2 can include a left front lamp device 40. The left front lamp device 40 may include a lamp housing 41 and a translucent cover 42. The lamp housing 41 defines a lamp chamber 43 together with the translucent cover 42. The left front lamp device 40 is mounted at the left front corner LF of the vehicle 100 shown in FIG. 3.
As shown in FIG. 6, the left front lamp device 40 may include a lamp unit 44. The lamp unit 44 is a device that emits visible light toward the outside of the vehicle 100. The lamp unit 44 is accommodated in the lamp chamber 43. Examples of the lamp unit 44 include a headlamp unit, a vehicle width lamp unit, a direction indicator lamp unit, and a fog lamp unit.
In this case, at least one sensor unit 20 is arranged in the lamp chamber 43. Because of its function of supplying light to the outside of the vehicle 100, the lamp unit 44 is generally arranged at a location with few obstructions, such as the left front corner LF described above. By also arranging a sensor unit 20 at such a location, information outside the vehicle 100 can be acquired efficiently.
Accordingly, a right front lamp device having a configuration laterally symmetrical to the left front lamp device 40 can be mounted at the right front corner RF of the vehicle 100 shown in FIG. 3. A left rear lamp device can be mounted at the left rear corner LB of the vehicle 100. In this case, examples of the lamp unit included in the left rear lamp device include a brake lamp unit, a tail lamp unit, a vehicle width lamp unit, and a reversing lamp unit. A right rear lamp device having a configuration laterally symmetrical to the left rear lamp device can be mounted at the right rear corner RB of the vehicle 100. In any of these lamp devices, at least one sensor unit 20 can be arranged in the lamp chamber defined by the lamp housing.
The second embodiment is merely an example for facilitating understanding of the present disclosure. The configuration according to the second embodiment can be changed or improved as appropriate without departing from the gist of the present disclosure.
FIG. 7 shows a functional configuration of an image data generation device 301 according to a third embodiment. The image data generation device 301 is mounted on a vehicle.
FIG. 8 shows an example of a vehicle 400 on which the image data generation device 301 is mounted. The vehicle 400 is a towing vehicle having a tractor portion 401 and a trailer portion 402. The tractor portion 401 includes a driver's seat 403.
The vehicle 400 includes a camera 404. The camera 404 is a device for acquiring an image of the area behind the driver's seat 403. The camera 404 is configured to output a camera signal corresponding to the acquired image.
As shown in FIG. 7, the image data generation device 301 includes an input interface 311. The camera signal CS output from the camera 404 is input to the input interface 311.
The image data generation device 301 further includes a processor 312, an output interface 313, and a communication bus 314. The input interface 311, the processor 312, and the output interface 313 can exchange signals and data via the communication bus 314.
The processor 312 is configured to execute the processing shown in FIG. 9.
First, the processor 312 acquires the camera signal CS input to the input interface 311 (STEP31). The expression "acquires the camera signal CS" means bringing the camera signal CS input to the input interface 311, via an appropriate circuit configuration, into a state in which the processing described later can be performed on it.
Next, the processor 312 generates first image data D1 based on the camera signal CS (STEP32). As shown in FIG. 7, the first image data D1 is transmitted via the output interface 313 to a display device 405 mounted on the vehicle 400. The display device 405 may be arranged in the passenger compartment of the vehicle 400 or at the position of a side door mirror.
 第一画像データD1は、表示装置405に表示される第一監視画像I1に対応するデータである。図10の(A)は、第一監視画像I1の一例を示している。 The first image data D1 is data corresponding to the first monitoring image I1 displayed on the display device 405. FIG. 10A shows an example of the first monitoring image I1.
 すなわち、カメラ404によって取得された運転席403よりも後方の画像が、表示装置405に恒常的に表示される。運転者は、表示装置405に表示された第一監視画像I1を通じて運転席403よりも後方の情報を取得する。 That is, an image behind the driver's seat 403 acquired by the camera 404 is constantly displayed on the display device 405. The driver acquires information behind the driver's seat 403 through the first monitoring image I1 displayed on the display device 405.
 図8において、矢印Xは、カメラ404の光軸の向きを示している。自動牽引車においては、同図に二点鎖線で示されるように、トラクタ部分401がトレーラ部分402に対して大きく屈曲する姿勢をとりうる。このとき、カメラ404の光軸がトレーラ部分402の側壁と対向してしまい、運転者による後方の視認が妨げられる場合がある。 8, an arrow X indicates the direction of the optical axis of the camera 404. In the automatic towing vehicle, the tractor portion 401 can take a posture that is largely bent with respect to the trailer portion 402 as indicated by a two-dot chain line in FIG. At this time, the optical axis of the camera 404 may face the side wall of the trailer portion 402, which may hinder the driver's rearward visual recognition.
 このような事態に対処するため、図9に示されるように、プロセッサ312は、第一監視画像I1に含まれる基準物を決定する(STEP33)。「基準物」は、運転者が運転席403よりも後方の情報を継続的に取得するために、第一監視画像I1に常に含まれていることを要し、かつ目標物として画像認識が比較的容易である物として定められる。本例においては、図10の(A)に示されるように、車両400のトレーラ部分402の後端縁402aが基準物とされる。プロセッサ312は、例えばエッジ抽出技術などを利用して後端縁402aを基準物として決定する。 In order to deal with such a situation, as shown in FIG. 9, the processor 312 determines a reference object included in the first monitoring image I1 (STEP 33). The “reference object” needs to be always included in the first monitoring image I1 in order for the driver to continuously acquire information behind the driver's seat 403, and image recognition is compared as a target object. It is defined as a thing that is easy. In this example, as shown in FIG. 10A, the rear end edge 402a of the trailer portion 402 of the vehicle 400 is used as a reference object. The processor 312 determines the trailing edge 402a as a reference object using, for example, an edge extraction technique.
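 As a rough illustration of how the edge-extraction step of STEP 33 could be realized in software, the following Python sketch looks for a long, nearly horizontal edge feature that could serve as a trailing-edge candidate. It assumes OpenCV 4 and NumPy; the function name, thresholds, and the "longest near-horizontal edge" heuristic are illustrative assumptions, not part of the disclosure.

```python
import cv2
import numpy as np

def find_trailing_edge(frame_bgr, search_window=None):
    """Return (x0, y0, x1, y1) of the most plausible trailing-edge candidate, or None.

    Minimal sketch of STEP 33: Canny edge extraction followed by a simple
    geometric filter that keeps the longest, nearly horizontal feature.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    if search_window is not None:                     # optionally restrict the search
        x0, y0, x1, y1 = search_window
        mask = np.zeros_like(edges)
        mask[y0:y1, x0:x1] = 255
        edges = cv2.bitwise_and(edges, mask)

    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    best, best_width = None, 0
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w > 3 * max(h, 1) and w > best_width:      # long and nearly horizontal
            best, best_width = (x, y, x + w, y + h), w
    return best
```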
 The rear end edge 402a of the trailer portion 402 is an example of the rear portion of the vehicle 400. The “rear portion” means a part of the vehicle 400 located behind the driver's seat 403.
 Next, as shown in FIG. 9, the processor 312 determines whether the reference object determined in STEP 33 is included in a predetermined area of the first monitoring image I1 (STEP 34). The predetermined area may be the entire first monitoring image I1 shown in FIG. 10(A), or may be defined as a part of the first monitoring image I1, for example the area to the right of the boundary line BD indicated by the one-dot chain line in the figure.
 If the rear end edge 402a serving as the reference object is included in the predetermined area (Y in STEP 34), the first monitoring image I1 continues to be displayed on the display device 405. In the example shown in FIG. 10(A), if the predetermined area is the entire first monitoring image I1, the rear end edge 402a serving as the reference object is determined to be included in the predetermined area. In the example shown in FIG. 10(A), if the predetermined area is the area to the right of the boundary line BD, the rear end edge 402a serving as the reference object is determined not to be included in the predetermined area (N in STEP 34).
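 The containment check of STEP 34 can be expressed very compactly once the reference object is represented by a bounding box and the predetermined area by a pixel rectangle, for example everything to the right of the boundary line BD. The following sketch, including the full-containment rule and the example image size, reflects assumptions made for illustration only.

```python
def reference_in_region(ref_box, region_box):
    """True if the reference object's bounding box lies entirely inside the predetermined area.

    Both arguments are (x0, y0, x1, y1) pixel rectangles; requiring full containment
    rather than mere overlap is an assumption of this sketch.
    """
    rx0, ry0, rx1, ry1 = ref_box
    ax0, ay0, ax1, ay1 = region_box
    return rx0 >= ax0 and ry0 >= ay0 and rx1 <= ax1 and ry1 <= ay1

# Example: a 1280x720 monitoring image whose predetermined area is the part
# to the right of a vertical boundary line BD placed at x = 400.
REGION_RIGHT_OF_BD = (400, 0, 1280, 720)
```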
 If it is determined that the reference object is not included in the predetermined area of the first monitoring image I1, the processor 312 generates second image data D2, as shown in FIG. 9 (STEP 35). The second image data D2 is data for causing the display device 405 to display a second monitoring image I2 in which the reference object is included in the predetermined area. As shown in FIG. 7, the second image data D2 is transmitted via the output interface 313 to the display device 405 mounted on the vehicle 400. FIG. 10(B) shows an example of the second monitoring image I2.
 In the second monitoring image I2 as well, a predetermined area can be defined in the same manner as in the first monitoring image I1. In the illustrated example, the area to the right of the boundary line BD is the predetermined area. It can be seen that the rear end edge 402a of the trailer portion 402 serving as the reference object is included in the predetermined area. The driver can therefore continue to obtain information about the area behind the driver's seat 403 through the second monitoring image I2 displayed on the display device 405.
 According to such a configuration, either the first image data D1 or the second image data D2 is generated according to the position of the reference object, so that a monitoring image containing the reference object within the predetermined area, namely the first monitoring image I1 or the second monitoring image I2, continues to be displayed on the display device 405. A situation in which the driver's rearward view is obstructed by the state of the vehicle 400 can therefore be avoided. In other words, an electronic mirror technology with further improved rearward visibility can be provided.
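 Putting STEP 31 to STEP 35 together, a per-frame pass might be organized as in the sketch below. The camera, display, and D2-generation interfaces are hypothetical placeholders, and the helper functions are the illustrative ones from the sketches above; this is a scaffold mirroring FIG. 9, not the implementation of the disclosure.

```python
def process_frame(camera, display, region_box, generate_d2):
    """One pass of the FIG. 9 flow using the helpers sketched above.

    'camera.read()', 'display.show()' and 'generate_d2()' are assumed interfaces.
    """
    frame = camera.read()                       # STEP 31: acquire the camera signal CS
    d1 = frame                                  # STEP 32: first image data D1 (identity here)
    ref_box = find_trailing_edge(frame)         # STEP 33: determine the reference object
    if ref_box is not None and reference_in_region(ref_box, region_box):
        display.show(d1)                        # STEP 34 = Y: keep showing I1
    else:
        d2 = generate_d2(frame)                 # STEP 35: second image data D2
        display.show(d2)
```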
 Next, several specific examples of the technique for generating the second image data D2 described above will be given with reference to FIGS. 11 to 14.
 FIG. 11 is a diagram for explaining a first specific example. Reference sign I0 denotes the entire image acquired by the camera 404. The first monitoring image I1 and the second monitoring image I2 can be different portions of this image. That is, the processor 312 generates the first image data D1 based on the camera signal CS corresponding to a first portion of the image acquired by the camera 404. Likewise, the processor 312 generates the second image data D2 based on the camera signal CS corresponding to a second portion of the image acquired by the camera 404.
 If the entire original image acquired by the camera 404 were displayed on the display device 405, a wide range could be viewed, but the objects shown in the monitoring image would inevitably appear small. With the above configuration, the first monitoring image I1 and the second monitoring image I2 are generated by clipping only the minimum necessary area from the original image, so that securing a sufficient field of view and preventing a loss of visibility can both be achieved.
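 Read as array operations, the first specific example is plain cropping: I1 and I2 are different windows cut from the full image I0, with the window chosen according to the reference-object position. The window coordinates below are invented for illustration.

```python
def clip_monitoring_image(full_image, window):
    """Cut a monitoring image out of the full camera image I0.

    'full_image' is a NumPy array (H x W x 3) and 'window' is (x0, y0, x1, y1) in pixels.
    """
    x0, y0, x1, y1 = window
    return full_image[y0:y1, x0:x1]

# Hypothetical windows within a 1920x1080 original image I0:
WINDOW_I1 = (600, 200, 1560, 740)   # default portion for the first monitoring image
WINDOW_I2 = (200, 200, 1160, 740)   # portion shifted toward the bent trailer for I2
```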
 FIG. 12 is a diagram for explaining a second specific example. In this example, the angle of view of the camera 404 can be changed. Specifically, a known mechanism for changing the angle of view is provided in the camera 404. As shown in FIG. 7, the angle of view of the camera 404 can be changed by the processor 312 transmitting a control signal S to the camera 404 through the output interface 313.
 Specifically, the processor 312 generates the first image data D1 based on the image acquired when the angle of view of the camera 404 is a first angle of view θ1. On the other hand, the processor 312 generates the second image data D2 based on the image acquired when the angle of view of the camera 404 is a second angle of view θ2. The second angle of view θ2 is wider than the first angle of view θ1. That is, the second monitoring image I2 is displayed on the display device 405 as a wider-angle image.
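 For a sense of the trade-off discussed below, under a simple pinhole-camera assumption the on-screen size of a given object scales with the focal length, which for a fixed sensor width is proportional to 1 / (2 tan(θ/2)). The angle values in the sketch are illustrative only.

```python
import math

def relative_object_size(theta1_deg, theta2_deg):
    """Ratio of apparent object size at angle of view θ2 versus θ1 (pinhole model)."""
    size = lambda theta_deg: 1.0 / (2.0 * math.tan(math.radians(theta_deg) / 2.0))
    return size(theta2_deg) / size(theta1_deg)

# With assumed values θ1 = 60° and θ2 = 100°, the same object appears roughly half as large:
print(relative_object_size(60, 100))   # ≈ 0.48
```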
 When the camera 404 is at the position indicated by the solid line in FIG. 12, the rear end edge 402a of the trailer portion 402 serving as the reference object lies within the field of view of the first angle of view θ1. Accordingly, the first monitoring image I1 is generated as described above. However, when the camera 404 is at the position indicated by the two-dot chain line in the figure, the rear end edge 402a falls outside the field of view of the first angle of view θ1. In this case, the processor 312 performs control to widen the angle of view of the camera 404 to the second angle of view θ2. As a result, the rear end edge 402a falls within the field of view of the second angle of view θ2, and an appropriate second monitoring image I2 is obtained.
 If a camera 404 with a wide angle of view were used from the outset, a wide range could be viewed, but the objects shown in the monitoring image displayed on the display device 405 would inevitably appear small. With the above configuration, the angle of view is widened to generate the second monitoring image I2 only when necessary, so that securing a sufficient field of view and preventing a loss of visibility can both be achieved.
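 In control terms, the second specific example amounts to issuing a widen-the-view command only when the containment check fails. The camera-control object and its method below are hypothetical; an actual camera would expose angle-of-view control in a device-specific way.

```python
FIRST_ANGLE_DEG = 60     # θ1, assumed value
SECOND_ANGLE_DEG = 100   # θ2 > θ1, assumed value

def select_angle_of_view(ref_box, region_box, camera_ctrl):
    """Widen the angle of view to θ2 only when the reference object has left the area.

    Uses reference_in_region() from the earlier sketch; 'camera_ctrl.set_field_of_view()'
    stands in for sending the control signal S to the camera.
    """
    if ref_box is not None and reference_in_region(ref_box, region_box):
        camera_ctrl.set_field_of_view(FIRST_ANGLE_DEG)    # normal, narrower view for I1
    else:
        camera_ctrl.set_field_of_view(SECOND_ANGLE_DEG)   # widened view for I2
```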
 FIG. 13 is a diagram for explaining a third specific example. In this example, the direction of the optical axis of the camera 404 can be changed. Specifically, a known swivel mechanism that changes the direction of the optical axis is provided in the camera 404. As shown in FIG. 7, the direction of the optical axis of the camera 404 can be changed by the processor 312 transmitting a control signal S to the camera 404 through the output interface 313.
 Specifically, the processor 312 generates the first image data D1 based on the image acquired when the optical axis of the camera 404 is oriented in a first direction X1. On the other hand, the processor 312 generates the second image data D2 based on the image acquired when the optical axis of the camera 404 is oriented in a second direction X2 different from the first direction X1. That is, the first monitoring image I1 and the second monitoring image I2 differ in the subject located at their center.
 When the camera 404 is at the position indicated by the solid line in FIG. 13, the rear end edge 402a of the trailer portion 402 serving as the reference object lies within the field of view obtained when the optical axis is oriented in the first direction X1. Accordingly, the first monitoring image I1 is generated as described above. However, when the camera 404 is at the position indicated by the two-dot chain line in the figure, the rear end edge 402a falls outside that field of view. In this case, the processor 312 performs control to change the direction of the optical axis of the camera 404 from the first direction X1 to the second direction X2. As a result, the rear end edge 402a falls within the field of view obtained when the optical axis is oriented in the second direction X2, and an appropriate second monitoring image I2 is obtained.
 If a camera 404 with a wide angle of view were used from the outset, a wide range could be viewed, but the objects shown in the monitoring image displayed on the display device 405 would inevitably appear small. With the above configuration, a monitoring image in which the reference object is included in the predetermined area can continue to be generated without changing the angle of view. Since the angle of view only needs to be set so that objects shown in the monitoring image can be viewed appropriately, securing a sufficient field of view and preventing a loss of visibility can both be achieved.
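 The third specific example swaps the zoom command for a pan (swivel) command; the decision logic is otherwise identical. The control interface and the angle values are again assumptions for illustration.

```python
FIRST_DIRECTION_DEG = 0.0     # X1: optical axis pointing straight back (assumed convention)
SECOND_DIRECTION_DEG = -25.0  # X2: axis swivelled toward the trailer side (assumed value)

def select_optical_axis(ref_box, region_box, camera_ctrl):
    """Swivel the optical axis from X1 to X2 when the reference object has left the area.

    'camera_ctrl.set_pan_angle()' stands in for the control signal S to the swivel mechanism.
    """
    if ref_box is not None and reference_in_region(ref_box, region_box):
        camera_ctrl.set_pan_angle(FIRST_DIRECTION_DEG)
    else:
        camera_ctrl.set_pan_angle(SECOND_DIRECTION_DEG)
```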
 FIG. 14 is a diagram for explaining a fourth specific example. In this example, a first camera 404a and a second camera 404b are mounted on the vehicle 400 as the camera 404. The direction of the optical axis of the first camera 404a differs from the direction of the optical axis of the second camera 404b.
 Specifically, the processor 312 generates the first image data D1 based on the image acquired by the first camera 404a. On the other hand, the processor 312 generates the second image data D2 based on the image acquired by the second camera 404b. The camera in operation can be switched by the processor 312 transmitting a control signal S through the output interface 313.
 When the first camera 404a is at the position indicated by the solid line in FIG. 14, the rear end edge 402a of the trailer portion 402 serving as the reference object lies within the field of view of the first camera 404a. Accordingly, the first monitoring image I1 is generated as described above. However, when the first camera 404a is at the position indicated by the two-dot chain line in the figure, the rear end edge 402a falls outside that field of view. In this case, the processor 312 performs control to switch the camera used for image acquisition from the first camera 404a to the second camera 404b. As a result, the rear end edge 402a falls within the field of view of the second camera 404b, and an appropriate second monitoring image I2 is obtained.
 If a camera 404 with a wide angle of view were used from the outset, a wide range could be viewed, but the objects shown in the monitoring image displayed on the display device 405 would inevitably appear small. With the above configuration, a monitoring image in which the reference object is included in the predetermined area can continue to be generated without changing the angle of view. Since the angle of view of each camera only needs to be set so that objects shown in the monitoring image can be viewed appropriately, securing a sufficient field of view and preventing a loss of visibility can both be achieved.
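 The fourth specific example can be sketched as source selection between the two cameras. The camera objects are placeholders for whatever capture interface the vehicle actually provides.

```python
def select_source(ref_box, region_box, first_camera, second_camera):
    """Return the camera whose image should feed the monitoring display.

    The first camera 404a is kept while the reference object stays in the
    predetermined area; otherwise the second camera 404b takes over.
    """
    if ref_box is not None and reference_in_region(ref_box, region_box):
        return first_camera
    return second_camera
```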
 As shown in FIG. 7, the input interface 311 of the image data generation device 301 can accept input from a user interface 406. The user interface 406 is provided in the passenger compartment of the vehicle 400 and can accept tactile operation instructions via buttons, a touch panel device, or the like, as well as voice input instructions and gaze input instructions.
 In this case, the reference object used for the determination in STEP 34 of FIG. 9 can be designated by the user via the first monitoring image I1 displayed on the display device 405. For example, when the user interface 406 is a touch panel device provided on the display device 405, the user can designate an appropriate location included in the first monitoring image I1 (such as the rear end edge 402a of the trailer portion 402) as the reference object by touching that location.
 According to such a configuration, the flexibility and freedom in setting the reference object used to generate the second monitoring image I2 can be increased.
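 One way a touch on the first monitoring image could be resolved into a reference object is to pick the edge candidate nearest the touched pixel, as in the sketch below; the nearest-candidate rule and the 50-pixel tolerance are assumptions, not part of the disclosure.

```python
def designate_reference_from_touch(touch_xy, candidate_boxes, max_dist=50.0):
    """Return the candidate bounding box closest to the touched point, or None.

    'candidate_boxes' are (x0, y0, x1, y1) rectangles, e.g. edge features found
    in the first monitoring image I1; 'touch_xy' is the touched pixel coordinate.
    """
    tx, ty = touch_xy
    best, best_dist = None, float("inf")
    for x0, y0, x1, y1 in candidate_boxes:
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        dist = ((cx - tx) ** 2 + (cy - ty) ** 2) ** 0.5
        if dist < best_dist:
            best, best_dist = (x0, y0, x1, y1), dist
    return best if best_dist <= max_dist else None
```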
 The functions of the processor 312 described above can be implemented by a general-purpose microprocessor operating in cooperation with a memory. Examples of the general-purpose microprocessor include a CPU, an MPU, and a GPU. Examples of the general-purpose memory include a ROM and a RAM. In this case, a computer program for executing the above processing can be stored in the ROM. The general-purpose microprocessor can designate at least a part of the program stored in the ROM, load it onto the RAM, and execute the above processing in cooperation with the RAM. The functions of the processor 312 described above may instead be implemented by a dedicated integrated circuit, such as a microcontroller, an ASIC, or an FPGA, capable of executing a computer program that implements the above processing. The functions of the processor 312 described above may also be implemented by a combination of a general-purpose microprocessor and a dedicated integrated circuit.
 The third embodiment is merely an example to facilitate understanding of the present disclosure. The configuration according to the third embodiment may be changed or improved as appropriate without departing from the spirit of the present disclosure.
 In the third embodiment, a part of the rear portion of the vehicle 400 is designated as the reference object. However, a part of a vehicle located behind the vehicle 400 may instead be designated as the reference object, for example during automated platooning.
 The number and positions of the cameras 404 mounted on the vehicle 400 can be determined as appropriate according to the specifications of the vehicle 400.
 As part of the description of the present application, the contents of Japanese Patent Application No. 2018-038879 filed on March 5, 2018, Japanese Patent Application No. 2018-049652 filed on March 16, 2018, and Japanese Patent Application No. 2018-051287 filed on March 19, 2018 are incorporated herein by reference.

Claims (20)

  1.  A sensor system to be mounted on a vehicle, comprising:
      at least one sensor unit that detects information outside the vehicle and outputs a signal corresponding to the information; and
      a signal processing device that processes the signal to generate data corresponding to the information,
      wherein the signal processing device:
      acquires reference height information indicating a reference height determined based on a pitch angle of the vehicle; and
      generates the data, based on the reference height information, using the signal associated with an area corresponding to the reference height within a detection range of the sensor unit.
  2.  The sensor system according to claim 1, further comprising a leveling adjustment mechanism that adjusts a detection reference direction of the sensor unit in a vertical direction of the vehicle based on the reference height information.
  3.  The sensor system according to claim 1 or 2, wherein the at least one sensor unit includes:
      a first sensor unit that detects first information outside the vehicle and outputs a first signal corresponding to the first information; and
      a second sensor unit that detects second information outside the vehicle and outputs a second signal corresponding to the second information,
      wherein the signal processing device processes the first signal to generate first data corresponding to the first information, and processes the second signal to generate second data corresponding to the second information, and
      wherein the signal processing device, based on the reference height information, processes, as the first signal, a signal associated with an area corresponding to the reference height within a detection range of the first sensor unit, and processes, as the second signal, a signal associated with an area corresponding to the reference height within a detection range of the second sensor unit.
  4.  The sensor system according to any one of claims 1 to 3, wherein the signal processing device associates the data with map information.
  5.  The sensor system according to claim 4, wherein the data is associated with information corresponding to the reference height within the map information.
  6.  The sensor system according to any one of claims 1 to 5, further comprising a lamp housing defining a lamp chamber that houses a lamp unit,
      wherein the sensor unit is disposed in the lamp chamber.
  7.  The sensor system according to any one of claims 1 to 6, wherein the at least one sensor unit is at least one of a LIDAR sensor unit, a camera unit, and a millimeter wave sensor unit.
  8.  A sensor system to be mounted on a vehicle, comprising:
      a plurality of sensor units that detect information outside the vehicle and each output a signal corresponding to the information; and
      a signal processing device that acquires the signals,
      wherein the plurality of sensor units include:
      a first sensor unit that detects first information outside the vehicle and outputs a first signal corresponding to the first information; and
      a second sensor unit that detects second information outside the vehicle and outputs a second signal corresponding to the second information, and
      wherein the signal processing device:
      determines, based on the first signal and the second signal, whether the first information and the second information include the same reference target; and
      when it is determined that the first information and the second information include the same reference target, detects a difference between a position of the reference target specified by the first information and a position of the reference target specified by the second information.
  9.  The sensor system according to claim 8, wherein the signal processing device:
      acquires changes over time in the position of the reference target specified by the first information and the position of the reference target specified by the second information; and
      identifies a sensor unit that requires correction based on the changes over time.
  10.  The sensor system according to claim 8, wherein the plurality of sensor units include a third sensor unit that detects third information outside the vehicle and outputs a third signal corresponding to the third information, and
      wherein the signal processing device:
      determines, based on the first signal, the second signal, and the third signal, whether the first information, the second information, and the third information include the same reference target; and
      when it is determined that the first information, the second information, and the third information include the same reference target, identifies a sensor unit that requires correction based on differences among the position of the reference target specified by the first information, the position of the reference target specified by the second information, and the position of the reference target specified by the third information.
  11.  The sensor system according to any one of claims 8 to 10, wherein the reference target provides a positional reference in a height direction.
  12.  The sensor system according to any one of claims 8 to 11, further comprising a lamp housing defining a lamp chamber that houses a lamp unit,
      wherein at least one of the plurality of sensor units is disposed in the lamp chamber.
  13.  The sensor system according to any one of claims 8 to 12, wherein each of the plurality of sensor units is at least one of a LIDAR sensor unit, a camera unit, and a millimeter wave sensor unit.
  14.  An image data generation device to be mounted on a vehicle, comprising:
      an input interface to which a signal corresponding to an image is input, the signal being output from at least one camera that acquires an image of an area behind a driver's seat of the vehicle;
      a processor that generates, based on the signal, first image data corresponding to a first monitoring image to be displayed on a display device; and
      an output interface that outputs the first image data to the display device,
      wherein the processor:
      determines a reference object included in the first monitoring image;
      determines whether the reference object is included in a predetermined area of the first monitoring image;
      when it is determined that the reference object is not included in the predetermined area, generates second image data corresponding to a second monitoring image in which the reference object is included in the predetermined area, so that the second monitoring image is displayed on the display device; and
      outputs the second image data to the display device through the output interface.
  15.  The image data generation device according to claim 14, wherein the processor:
      generates the first image data based on the signal corresponding to a first portion of the image; and
      generates the second image data based on the signal corresponding to a second portion of the image.
  16.  The image data generation device according to claim 14, wherein the processor:
      generates the first image data based on the image acquired when an angle of view of the camera is a first angle of view;
      changes the angle of view of the camera to a second angle of view wider than the first angle of view; and
      generates the second image data based on the image acquired when the angle of view of the camera is the second angle of view.
  17.  The image data generation device according to claim 14, wherein the processor:
      generates the first image data based on the image acquired when an optical axis of the camera is oriented in a first direction;
      changes the direction of the optical axis of the camera to a second direction different from the first direction; and
      generates the second image data based on the image acquired when the optical axis of the camera is oriented in the second direction.
  18.  The image data generation device according to claim 14, wherein the at least one camera includes a first camera and a second camera, and
      wherein the processor:
      generates the first image data based on the signal acquired from the first camera; and
      generates the second image data based on the signal acquired from the second camera.
  19.  The image data generation device according to any one of claims 14 to 18, wherein the reference object can be designated by a user via the first monitoring image.
  20.  The image data generation device according to any one of claims 14 to 19, wherein the reference object is a part of a rear portion of the vehicle or a part of a vehicle located behind the vehicle.
PCT/JP2019/008084 2018-03-05 2019-03-01 Sensor system, and image data generating device WO2019172117A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020504983A JP7288895B2 (en) 2018-03-05 2019-03-01 Sensor system and image data generator

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2018038879 2018-03-05
JP2018-038879 2018-03-05
JP2018049652 2018-03-16
JP2018-049652 2018-03-16
JP2018051287 2018-03-19
JP2018-051287 2018-03-19

Publications (1)

Publication Number Publication Date
WO2019172117A1 true WO2019172117A1 (en) 2019-09-12

Family ID=67847235

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/008084 WO2019172117A1 (en) 2018-03-05 2019-03-01 Sensor system, and image data generating device

Country Status (2)

Country Link
JP (1) JP7288895B2 (en)
WO (1) WO2019172117A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2849127B2 (en) * 1989-09-29 1999-01-20 マツダ株式会社 Mobile vehicle environment recognition device
JP4040620B2 (en) * 2004-11-30 2008-01-30 本田技研工業株式会社 Vehicle periphery monitoring device
JP2008026933A (en) * 2006-07-18 2008-02-07 Toyota Motor Corp Surrounding recognition device
JP6106495B2 (en) * 2013-04-01 2017-03-29 パイオニア株式会社 Detection device, control method, program, and storage medium

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0996525A (en) * 1995-09-29 1997-04-08 Suzuki Motor Corp Distance-measuring apparatus carried on vehicle
JP2001318149A (en) * 2000-03-02 2001-11-16 Denso Corp Front information detecting device for vehicle
JP2001310679A (en) * 2000-04-28 2001-11-06 Isuzu Motors Ltd Rearview mirror device
JP2003066144A (en) * 2001-08-23 2003-03-05 Omron Corp Object detecting apparatus and method therefor
JP2005505074A (en) * 2001-10-05 2005-02-17 ローベルト ボッシュ ゲゼルシャフト ミット ベシュレンクテル ハフツング Object detection device
JP2009065483A (en) * 2007-09-06 2009-03-26 Denso Corp Parking support system
JP2009210485A (en) * 2008-03-05 2009-09-17 Honda Motor Co Ltd Travel safety device for vehicle
JP2010008280A (en) * 2008-06-27 2010-01-14 Toyota Motor Corp Body detector
JP2016107985A (en) * 2014-12-05 2016-06-20 メクラ・ラング・ゲーエムベーハー・ウント・コー・カーゲーMEKRA Lang GmbH & Co. KG Visual system
JP2016113054A (en) * 2014-12-16 2016-06-23 祥忠 川越 Side rear view checking device
JP2016223963A (en) * 2015-06-02 2016-12-28 日立建機株式会社 Working machine for mine

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021056060A (en) * 2019-09-30 2021-04-08 株式会社デンソー Device and method for detecting inclination of rider mounted on vehicle
JP7439434B2 (en) 2019-09-30 2024-02-28 株式会社デンソー Lidar tilt detection device installed on a vehicle and method for detecting lidar tilt installed on a vehicle
JP2021155033A (en) * 2020-03-20 2021-10-07 メクラ・ラング・ゲーエムベーハー・ウント・コー・カーゲーMEKRA Lang GmbH & Co. KG Viewing system for a vehicle and method of switching between image areas displayed by the viewing system
JP7158521B2 (en) 2020-03-20 2022-10-21 メクラ・ラング・ゲーエムベーハー・ウント・コー・カーゲー Viewing system for a vehicle and method for switching between image regions displayed by the viewing system
CN114347916A (en) * 2020-10-12 2022-04-15 丰田自动车株式会社 Sensor mounting structure for vehicle
JP2022063471A (en) * 2020-10-12 2022-04-22 トヨタ自動車株式会社 Vehicular sensor mounting structure

Also Published As

Publication number Publication date
JPWO2019172117A1 (en) 2021-03-11
JP7288895B2 (en) 2023-06-08

Similar Documents

Publication Publication Date Title
US10481271B2 (en) Automotive lighting system for a vehicle
US6888447B2 (en) Obstacle detection device for vehicle and method thereof
US7158015B2 (en) Vision-based method and system for automotive parking aid, reversing aid, and pre-collision sensing application
KR20200047886A (en) Driver assistance system and control method for the same
JP6512164B2 (en) Object detection apparatus, object detection method
US20150154802A1 (en) Augmented reality lane change assistant system using projection unit
WO2019172117A1 (en) Sensor system, and image data generating device
US11345279B2 (en) Device and method for warning a driver of a vehicle
US9251709B2 (en) Lateral vehicle contact warning system
KR102352464B1 (en) Driver assistance system and control method for the same
JP2006318093A (en) Vehicular moving object detection device
US20200031273A1 (en) System for exchanging information between vehicles and control method thereof
US10832438B2 (en) Object distancing system for a vehicle
US20200353919A1 (en) Target detection device for vehicle
JP2008008679A (en) Object detecting apparatus, collision predicting apparatus and vehicle controlling apparatus
JP2010162975A (en) Vehicle control system
WO2016013167A1 (en) Vehicle display control device
US20210316728A1 (en) Smart cruise control system and method of controlling the same
JP6344260B2 (en) Obstacle detection device
JP4284652B2 (en) Radar equipment
CN110497861B (en) Sensor system and inspection method
JP2014106635A (en) Lighting fixture system for vehicle
JP4294450B2 (en) Vehicle driving support device
CN210116465U (en) Sensor system
JP7356451B2 (en) sensor system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19764658

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020504983

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19764658

Country of ref document: EP

Kind code of ref document: A1