WO2019172117A1 - Sensor system and image data generation device - Google Patents

Sensor system and image data generation device

Info

Publication number
WO2019172117A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
signal
vehicle
sensor unit
camera
Prior art date
Application number
PCT/JP2019/008084
Other languages
English (en)
Japanese (ja)
Inventor
重之 渡邉
裕一 綿野
Original Assignee
株式会社小糸製作所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社小糸製作所 filed Critical 株式会社小糸製作所
Priority to JP2020504983A (JP7288895B2)
Publication of WO2019172117A1

Links

Images

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00: Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88: Radar or analogous systems specially adapted for specific applications
    • G01S13/93: Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88: Lidar systems specially adapted for specific applications
    • G01S17/93: Lidar systems specially adapted for specific applications for anti-collision purposes
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/40: Means for monitoring or calibrating
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/16: Anti-collision systems
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules

Definitions

  • the present disclosure relates to a sensor system mounted on a vehicle.
  • the present disclosure also relates to an image data generation device mounted on a vehicle.
  • In the present disclosure, "driving support" means a control process that at least partially performs at least one of the driving operation (steering, acceleration, deceleration), monitoring of the driving environment, and backup of the driving operation.
  • That is, it ranges from partial driving assistance, such as a collision damage mitigation braking function or a lane keeping assist function, to fully automated driving.
  • Patent Document 2 discloses an example of such an electronic mirror technique.
  • the first problem in the present disclosure is to suppress an increase in load related to signal processing required for driving support of a vehicle.
  • the second problem in the present disclosure is to maintain the detection accuracy of a plurality of sensors required for driving support of the vehicle.
  • the third problem in the present disclosure is to provide an electronic mirror technology with improved rear visibility.
  • One aspect for achieving the first problem is a sensor system mounted on a vehicle, comprising: at least one sensor unit that detects information outside the vehicle and outputs a signal corresponding to the information; and a signal processing device that processes the signal to generate data corresponding to the information. The signal processing device acquires reference height information indicating a reference height determined based on the pitch angle of the vehicle, and, based on the reference height information, generates the data using the signal associated with the region corresponding to the reference height in the detection range of the sensor unit.
  • Ideally, the detection reference direction of the sensor unit in the vertical direction of the vehicle matches the reference height.
  • However, since the detection reference direction of the sensor unit changes in the vertical direction of the vehicle according to changes in the pitch angle of the vehicle, the detection range of the sensor unit is generally set to have redundancy in the vertical direction of the vehicle.
  • The signal processing device specifies the region using the reference height information separately determined based on the pitch angle of the vehicle. Therefore, the region containing the information necessary for driving support can be specified easily and with high reliability regardless of the pitch angle of the vehicle. Furthermore, since only the signal associated with the region, which is a part of the detection range having redundancy, is subjected to the data generation processing for acquiring external information, an increase in the load related to the signal processing required for vehicle driving support can be suppressed.
  • the above sensor system can be configured as follows.
  • a leveling adjustment mechanism that adjusts the detection reference direction of the sensor unit in the vertical direction of the vehicle based on the reference height information is provided.
  • the above sensor system can be configured as follows.
  • The at least one sensor unit includes: a first sensor unit that detects first information outside the vehicle and outputs a first signal corresponding to the first information; and a second sensor unit that detects second information outside the vehicle and outputs a second signal corresponding to the second information. The signal processing device processes the first signal to generate first data corresponding to the first information, and processes the second signal to generate second data corresponding to the second information.
  • the signal processing device processes, as the first signal, a signal associated with a region corresponding to the reference height in the detection range of the first sensor unit based on the reference height information.
  • Similarly, a signal associated with a region corresponding to the reference height in the detection range of the second sensor unit is processed as the second signal.
  • According to this configuration, the first data is generated based on the first signal output from the first sensor unit, and the second data is generated based on the second signal output from the second sensor unit. That is, the generated first data and second data both include information associated with the reference height. Therefore, integrated use of both data for driving assistance becomes easy. Both the first data and the second data are generated using only signals associated with a region that is a part of the detection range of each sensor unit. Therefore, even if both data are used in an integrated manner, an increase in the processing load of the signal processing device can be suppressed.
  • the above sensor system can be configured as follows.
  • the signal processing device associates the data with map information.
  • the data and map information generated by the signal processing device can be used for driving support in an integrated manner.
  • Since an increase in the signal processing load related to data generation can be suppressed, an increase in the processing load in integrated driving support combining the data and the map information can also be suppressed as a whole.
  • the above sensor system can be configured as follows.
  • the data is associated with information corresponding to the reference height in the map information.
  • map information can be converted into two-dimensional information, so that the amount of data used for integrated driving support can be reduced. Therefore, it is possible to further suppress an increase in processing load in integrated driving support.
  • the above sensor system can be configured as follows.
  • a lamp unit, and a lamp housing that partitions a lamp chamber housing the lamp unit, may further be provided;
  • the sensor unit is disposed in the lamp chamber.
  • Since the lamp unit has a function of supplying light to the outside of the vehicle, it is generally arranged in a place on the vehicle with little shielding. By arranging the sensor unit in such a place, information outside the vehicle can be acquired efficiently.
  • When the reference height information is acquired from the auto leveling system of the vehicle, that information can be shared with the lamp unit. In this case, an efficient system design is possible.
  • the above sensor system can be configured as follows.
  • the at least one sensor unit is at least one of a LIDAR sensor unit, a camera unit, and a millimeter wave sensor unit.
  • One aspect for achieving the second problem is a sensor system mounted on a vehicle, comprising: a plurality of sensor units that detect information outside the vehicle and each output a signal corresponding to the information; and a signal processing device that acquires the signals. The plurality of sensor units include: a first sensor unit that detects first information outside the vehicle and outputs a first signal corresponding to the first information; and a second sensor unit that detects second information outside the vehicle and outputs a second signal corresponding to the second information. The signal processing device determines, based on the first signal and the second signal, whether the first information and the second information include the same reference target, and, when it is determined that the first information and the second information include the same reference target, detects a difference between the position of the reference target specified by the first information and the position of the reference target specified by the second information.
  • For example, when the magnitude of the difference exceeds a predetermined value, the user can be notified.
  • the user can take appropriate measures to correct the misalignment. Therefore, it is possible to maintain the detection accuracy of the plurality of sensor units required for driving support of the vehicle.
  • the above sensor system can be configured as follows.
  • The signal processing device acquires the change over time of the position of the reference target specified by the first information and of the position of the reference target specified by the second information, and specifies the sensor unit that requires correction based on that change over time.
  • the sensor unit that needs to be corrected can be specified by monitoring the change with time of the position of the reference target specified by each sensor unit. Therefore, it is possible to more easily maintain the detection accuracy of the plurality of sensor units required for vehicle driving assistance.
  • the above sensor system can be configured as follows.
  • the plurality of sensor units include a third sensor unit that detects third information outside the vehicle and outputs a third signal corresponding to the third information.
  • The signal processing device determines, based on the first signal, the second signal, and the third signal, whether the first information, the second information, and the third information include the same reference target. When it is determined that they do, the signal processing device specifies the sensor unit that needs to be corrected based on the differences between the position of the reference target specified by the first information, the position of the reference target specified by the second information, and the position of the reference target specified by the third information.
  • The reference target is preferably one that can provide a positional reference in the height direction. This is because a reference position in the height direction varies relatively little with the running state of the vehicle, which makes it easier to suppress an increase in the processing load of the signal processing device.
  • the above sensor system can be configured as follows.
  • A lamp unit, and a lamp housing that partitions a lamp chamber housing the lamp unit, may further be provided; at least one of the plurality of sensor units is disposed in the lamp chamber.
  • the lamp unit is generally arranged in a place with little shielding because of the function of supplying light to the outside of the vehicle. By arranging the sensor unit in such a place, information outside the vehicle can be efficiently acquired.
  • the above sensor system can be configured as follows.
  • Each of the plurality of sensor units is at least one of a LiDAR sensor unit, a camera unit, and a millimeter wave sensor unit.
  • The "sensor unit" means a constituent unit of a component that has a desired information detection function and that can be distributed by itself as a single unit.
  • The "lamp unit" means a constituent unit of a component that has a desired lighting function and that can be distributed by itself as a single unit.
  • One aspect for achieving the third problem is an image data generation device mounted on a vehicle, comprising: an input interface to which a signal corresponding to an image output from at least one camera that acquires an image behind the driver's seat of the vehicle is input; a processor that generates, based on the signal, first image data corresponding to a first monitoring image displayed on a display device; and an output interface for outputting the first image data to the display device. The processor determines a reference object included in the first monitoring image and determines whether the reference object is included in a predetermined area in the first monitoring image. When it is determined that the reference object is not included in the predetermined area, the processor generates second image data corresponding to a second monitoring image in which the reference object is included in the predetermined area, so that the second monitoring image is displayed on the display device, and outputs the second image data to the display device through the output interface.
  • According to this configuration, either the first image data or the second image data is generated according to the position of the reference object, so that a monitoring image that includes the reference object in the predetermined region, whether the first monitoring image or the second monitoring image, can be continuously displayed on the display device. Therefore, a situation in which the driver is prevented from visually recognizing the rear depending on the state of the vehicle can be avoided. That is, an electronic mirror technology with further improved rear visibility can be provided.
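The selection logic described above can be pictured with the following minimal sketch. It is an illustration only, not the patented implementation; the names Rect, select_monitoring_image, make_first_image, and make_second_image are assumptions introduced for this example.

```python
# Illustrative sketch of the monitoring-image selection logic (not the
# patent's implementation). Names are assumptions introduced for this example.
from dataclasses import dataclass


@dataclass
class Rect:
    left: int
    top: int
    right: int
    bottom: int

    def contains(self, other: "Rect") -> bool:
        # True when 'other' lies entirely inside this rectangle.
        return (self.left <= other.left and self.top <= other.top
                and other.right <= self.right and other.bottom <= self.bottom)


def select_monitoring_image(frame, reference_object_box, predetermined_area,
                            make_first_image, make_second_image):
    """Return the image data to send to the display device.

    make_first_image / make_second_image stand in for whatever produces the
    first and second monitoring images (cropping a different portion of the
    image, widening the angle of view, switching cameras, and so on).
    """
    if predetermined_area.contains(reference_object_box):
        return make_first_image(frame)   # reference object already visible
    return make_second_image(frame)      # regenerate so the object appears
```

In this sketch the two generator functions correspond to the configurations listed below (different image portions, angles of view, optical-axis directions, or cameras).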
  • the above image data generation device can be configured as follows.
  • The processor may generate the first image data based on the signal corresponding to a first portion of the image, and generate the second image data based on the signal corresponding to a second portion of the image.
  • the above image data generation device can be configured as follows.
  • The processor may generate the first image data based on the image when the angle of view of the camera is a first angle of view, change the angle of view of the camera to a second angle of view wider than the first angle of view, and generate the second image data based on the image when the angle of view of the camera is the second angle of view.
  • Since the second monitoring image is generated by widening the angle of view only when necessary, it is possible both to secure the field of view and to avoid a reduction in visibility.
  • the above image data generation device can be configured as follows.
  • The processor may generate the first image data based on the image when the optical axis of the camera is in a first direction, change the direction of the optical axis of the camera to a second direction different from the first direction, and generate the second image data based on the image when the optical axis of the camera is in the second direction.
  • the above image data generation device can be configured as follows.
  • The at least one camera may include a first camera and a second camera.
  • The processor generates the first image data based on the signal acquired from the first camera, and generates the second image data based on the signal acquired from the second camera.
  • the above image data generation device can be configured as follows.
  • the reference object can be designated by the user via the first monitoring image.
  • the reference object may be a part of a rear part of the vehicle or a part of a vehicle located behind the vehicle.
  • FIG. 1 illustrates the configuration of the sensor system according to the first embodiment. FIG. 2 explains an operation example of the sensor system of FIG. 1. FIG. 3 illustrates positions of the sensor system in a vehicle.
  • FIG. 4 illustrates the configuration of the sensor system according to the second embodiment. FIG. 5 illustrates processing performed by the sensor system of FIG. 4. FIG. 6 illustrates a configuration in which a sensor unit of the sensor system of FIG. 4 is disposed in a lamp chamber.
  • FIG. 7 illustrates the functional configuration of the image data generation device according to the third embodiment.
  • An example of a vehicle on which the above-described image data generation device is mounted is shown.
  • An operation flow of the image data generation device is illustrated.
  • An operation result of the image data generation device is illustrated.
  • An operation of the image data generation device is shown.
  • An operation of the image data generation device is shown.
  • An operation of the image data generation device is shown.
  • an arrow F indicates the forward direction of the illustrated structure.
  • Arrow B indicates the backward direction of the illustrated structure.
  • Arrow L indicates the left direction of the illustrated structure.
  • Arrow R indicates the right direction of the illustrated structure.
  • “Left” and “right” used in the following description indicate the left and right directions viewed from the driver's seat.
  • FIG. 1 schematically shows a configuration of a sensor system 1 according to the first embodiment.
  • the sensor system 1 is mounted on a vehicle.
  • the sensor system 1 includes a LiDAR sensor unit 11.
  • the LiDAR sensor unit 11 has a configuration for emitting invisible light and a configuration for detecting return light as a result of reflection of the invisible light at least on an object existing outside the vehicle.
  • the LiDAR sensor unit 11 may include a scanning mechanism that sweeps the invisible light by changing the emission direction (that is, the detection direction) as necessary.
  • infrared light having a wavelength of 905 nm is used as invisible light.
  • the LiDAR sensor unit 11 can acquire the distance to the object associated with the return light based on, for example, the time from when the invisible light is emitted in a certain direction until the return light is detected. Further, by accumulating such distance data in association with the detection position, information related to the shape of the object associated with the return light can be acquired. In addition to or instead of this, information related to attributes such as the material of the object associated with the return light can be acquired based on the difference between the waveforms of the emitted light and the return light.
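As a simple illustration of the time-of-flight relation mentioned above, the distance to the object is roughly half the round-trip time multiplied by the speed of light. The following sketch uses assumed names and is not part of the disclosure.

```python
# Time-of-flight distance estimate: distance = c * t / 2.
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0


def distance_from_return_time(round_trip_time_s: float) -> float:
    """Distance (m) to the object that produced the return light."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2.0


# Example: return light detected 200 ns after emission corresponds to ~30 m.
print(distance_from_return_time(200e-9))  # ~29.98
```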
  • the LiDAR sensor unit 11 is configured to output a signal S1 corresponding to information outside the vehicle detected as described above.
  • the sensor system 1 includes a signal processing device 12.
  • the signal processing device 12 can be realized by a general-purpose microprocessor that operates in cooperation with a general-purpose memory.
  • a general-purpose microprocessor a CPU, an MPU, and a GPU can be exemplified.
  • the general-purpose memory ROM and RAM can be exemplified.
  • the ROM can store a computer program that realizes processing to be described later.
  • The general-purpose microprocessor reads at least a part of a program stored in the ROM, loads it into the RAM, and executes the above-described processing in cooperation with the RAM.
  • the signal processing device 12 may be realized by a dedicated integrated circuit such as a microcontroller, an ASIC, or an FPGA that can execute a computer program that realizes processing to be described later.
  • the signal processing device 12 may be realized by a combination of a general-purpose microprocessor and a dedicated integrated circuit.
  • the signal processing device 12 is configured to process the signal S1 output from the LiDAR sensor unit 11 and generate LiDAR data corresponding to the detected information outside the vehicle. LiDAR data is used for driving support of a vehicle.
  • the signal processing device 12 is configured to acquire the reference height information H from an auto leveling system mounted on the vehicle.
  • the auto leveling system is a system that adjusts the direction of the optical axis of the headlamp in the vertical direction of the vehicle based on the pitch angle of the vehicle.
  • a reference height (for example, the direction of the adjusted optical axis) is determined.
  • the reference height information H indicates this reference height.
  • The signal processing device 12 is configured to generate LiDAR data using the signal S1 associated with the region corresponding to the reference height in the detection range of the LiDAR sensor unit 11. This operation will be described in detail with reference to FIG. 2.
  • FIG. 2A shows an example in which the sensor system 1 is mounted on the front portion of the vehicle 100.
  • the LiDAR sensor unit 11 is configured to detect information at least ahead of the vehicle 100.
  • An arrow D indicates the detection reference direction of the LiDAR sensor unit 11 in the vertical direction of the vehicle 100.
  • FIG. 2 shows the detection range A of the LiDAR sensor unit 11.
  • the symbol h represents the reference height indicated by the reference height information H.
  • the signal S1 output from the LiDAR sensor unit 11 may include information within the detection range A.
  • the signal processing device 12 generates LiDAR data corresponding to information detected in the region P using the signal S1 associated with the region P that is a part of the detection range A.
  • the region P is a region corresponding to the reference height h.
  • The meaning of the expression "corresponding to the reference height h" is not limited to the case where the reference height h determined by the auto leveling system matches the height of the region P in the vertical direction of the vehicle 100. As long as the reference height h and the height of the region P have a predetermined correspondence, the two may be different.
  • The signal processing device 12 may acquire only the signal corresponding to the region P as the signal S1 and use it for the data generation processing, or it may acquire the signal S1 corresponding to the entire detection range A and then extract only the signal corresponding to the region P for the data generation processing.
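A minimal sketch of this region selection is shown below, assuming the signal S1 can be treated as a two-dimensional array of samples indexed by vertical scan angle. The names, units, and angular window are assumptions, not part of the disclosure.

```python
# Sketch: keep only the part of signal S1 that belongs to the region P
# corresponding to the reference height h inside the redundant detection
# range A. Names, units and window size are illustrative assumptions.
import numpy as np


def rows_for_reference_height(reference_height_deg, row_angles_deg,
                              half_window_deg=2.0):
    """Indices of scan rows whose elevation lies near the reference height."""
    return np.flatnonzero(
        np.abs(row_angles_deg - reference_height_deg) <= half_window_deg)


def generate_lidar_data(signal_s1, row_angles_deg, reference_height_deg):
    """Use only the signal associated with region P for data generation."""
    rows = rows_for_reference_height(reference_height_deg, row_angles_deg)
    return signal_s1[rows]


# Usage: a 16-row scan spanning -8 deg..+7 deg, reference height at 0 deg.
angles = np.arange(-8.0, 8.0)
frame = np.random.rand(16, 360)                        # stand-in for signal S1
print(generate_lidar_data(frame, angles, 0.0).shape)   # (5, 360)
```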
  • FIG. 2C shows a state in which the front end of the vehicle 100 is tilted upward from the rear end.
  • In this case, the detection reference direction D of the LiDAR sensor unit 11 is directed upward from the reference height h. Therefore, as shown in FIG. 2D, the detection range A of the LiDAR sensor unit 11 moves upward from the state shown in FIG. 2B. Even in this case, the signal processing device 12, based on the acquired reference height information H, uses the signal S1 associated with the region P that is a part of the detection range A to generate LiDAR data corresponding to the information detected in the region P.
  • FIG. 2E shows a state in which the front end of the vehicle 100 is inclined downward from the rear end.
  • In this case, the detection reference direction D of the LiDAR sensor unit 11 is directed downward from the reference height h. Therefore, as shown in FIG. 2F, the detection range A of the LiDAR sensor unit 11 moves downward from the state shown in FIG. 2B.
  • Even in this case, the signal processing device 12, based on the acquired reference height information H, uses the signal S1 associated with the region P that is a part of the detection range A to generate LiDAR data corresponding to the information detected in the region P.
  • Ideally, the detection reference direction D of the LiDAR sensor unit 11 in the vertical direction of the vehicle 100 matches the reference height h. However, since the detection reference direction D of the LiDAR sensor unit 11 changes in the vertical direction of the vehicle 100 according to changes in the pitch angle of the vehicle 100, the detection range A of the LiDAR sensor unit 11 is generally set to have redundancy in the vertical direction of the vehicle 100.
  • The signal processing device 12 specifies the region P using the reference height information H separately determined by the auto leveling system based on the pitch angle of the vehicle 100. Therefore, regardless of the pitch angle of the vehicle 100, the region P containing the information required for driving assistance can be specified easily and with high reliability. Furthermore, since only the signal S1 associated with the region P, which is a part of the detection range A having redundancy, is used for the data generation processing for acquiring external information, an increase in the load related to the signal processing required for driving support of the vehicle 100 can be suppressed.
  • the sensor system 1 can include a leveling adjustment mechanism 13.
  • the leveling adjustment mechanism 13 is configured to include an actuator that can change the detection reference direction D of the LiDAR sensor unit 11 in the vertical direction of the vehicle 100.
  • As this actuator, a configuration similar to the known mechanism used in the auto leveling system for changing the direction of the optical axis of the headlamp in the vertical direction of the vehicle can be employed.
  • the leveling adjustment mechanism 13 can be communicably connected to the signal processing device 12.
  • the leveling adjustment mechanism 13 is configured to adjust the detection reference direction D of the LiDAR sensor unit 11 in the vertical direction of the vehicle based on the reference height information H acquired by the signal processing device 12.
  • For example, when the detection reference direction D is directed upward from the reference height h, the signal processing device 12 recognizes that fact based on the reference height information H.
  • the signal processing device 12 outputs a signal for driving the leveling adjustment mechanism 13 so as to eliminate the deviation from the reference height h in the detection reference direction D.
  • the leveling adjustment mechanism 13 operates based on the signal, and directs the detection reference direction D of the LiDAR sensor unit 11 downward.
  • Conversely, when the detection reference direction D is directed downward from the reference height h, the signal processing device 12 outputs a signal that operates the leveling adjustment mechanism 13 so that the detection reference direction D of the LiDAR sensor unit 11 is directed upward.
  • Since the change in the detection reference direction D of the LiDAR sensor unit 11 caused by changes in the pitch angle of the vehicle 100 can be suppressed, the change in the position of the region P within the detection range A of the LiDAR sensor unit 11 can also be suppressed. That is, the position of the region P specified by the signal processing device 12 can remain at the position shown in FIG. 2B regardless of the pitch angle of the vehicle 100. Therefore, the processing load of the signal processing device 12 for specifying the region P can be further reduced.
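The feedback performed by the leveling adjustment mechanism 13 can be pictured as in the sketch below: the deviation between the detection reference direction D and the reference height h is computed and fed to an actuator. The actuator interface and angle units are assumptions made for illustration.

```python
# Sketch of the leveling adjustment: cancel the deviation between the
# detection reference direction D and the reference height h so that region P
# stays at a fixed position in the detection range. Illustrative only.
def leveling_correction(detection_reference_deg, reference_height_deg):
    """Angle (deg) the actuator should move to cancel the deviation."""
    return reference_height_deg - detection_reference_deg


class LevelingActuator:     # stand-in for the leveling adjustment mechanism 13
    def __init__(self):
        self.angle_deg = 0.0

    def move_by(self, delta_deg):
        self.angle_deg += delta_deg


actuator = LevelingActuator()
# Vehicle pitched nose-up: D points 1.5 deg above h, so drive D downward.
actuator.move_by(leveling_correction(1.5, 0.0))
print(actuator.angle_deg)   # -1.5
```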
  • the sensor system 1 can include a camera unit 14.
  • the camera unit 14 is a device for acquiring image information outside the vehicle.
  • the camera unit 14 is configured to output a signal S2 corresponding to the acquired image information.
  • the camera unit 14 is an example of a sensor unit.
  • the signal processing device 12 is configured to process the signal S2 output from the camera unit 14 and generate camera data corresponding to the acquired image information outside the vehicle.
  • the camera data is used for driving support of the vehicle.
  • The signal processing device 12 is configured to generate camera data using a signal S2 associated with a region corresponding to the reference height h in the field of view of the camera unit 14. That is, the signal processing device 12 generates camera data corresponding to the image included in that region using the signal S2 associated with the region that is a part of the field of view of the camera unit 14.
  • the field of view of the camera unit 14 is an example of the detection range of the sensor unit.
  • the LiDAR sensor unit 11 is an example of a first sensor unit.
  • the information outside the vehicle 100 detected by the LiDAR sensor unit 11 is an example of first information.
  • the signal S1 output from the LiDAR sensor unit 11 is an example of a first signal.
  • LiDAR data generated by the signal processing device 12 is an example of first data.
  • the camera unit 14 is an example of a second sensor unit.
  • An image outside the vehicle 100 acquired by the camera unit 14 is an example of second information.
  • the signal S2 output from the camera unit 14 is an example of a second signal.
  • the camera data generated by the signal processing device 12 is an example of second data.
  • The leveling adjustment mechanism 13 can also be applied to the camera unit 14.
  • data output from a plurality of sensor units of different types is used for driving support in an integrated manner.
  • data output from a plurality of sensor units of the same type arranged at relatively distant positions in the vehicle may be used for driving support in an integrated manner.
  • For example, the sensor system 1 shown in FIG. 1 can be arranged in at least two of the left front corner LF, the right front corner RF, the left rear corner LB, and the right rear corner RB of the vehicle 100 shown in FIG. 3.
  • a case where two sensor systems 1 are arranged at the left front corner LF and the right front corner RF of the vehicle 100 will be described as an example.
  • the LiDAR sensor unit 11 disposed in the left front corner LF is an example of a first sensor unit.
  • Information outside the vehicle 100 detected by the LiDAR sensor unit 11 disposed in the left front corner LF is an example of first information.
  • the signal S1 output from the LiDAR sensor unit 11 disposed in the left front corner LF is an example of a first signal.
  • the LiDAR data generated by the signal processing device 12 arranged at the left front corner LF is an example of first data.
  • the LiDAR sensor unit 11 disposed in the right front corner RF is an example of the second sensor unit.
  • Information outside the vehicle 100 detected by the LiDAR sensor unit 11 disposed in the right front corner RF is an example of second information.
  • the signal S1 output from the LiDAR sensor unit 11 disposed at the right front corner RF is an example of a second signal.
  • the LiDAR data generated by the signal processing device 12 disposed in the right front corner RF is an example of second data.
  • the LiDAR data generated by the signal processing device 12 arranged in the left front corner LF and the LiDAR data generated by the signal processing device 12 arranged in the right front corner RF are used by the control device 101 for driving support.
  • An ECU is an example of the control device 101.
  • the ECU can be realized by a general-purpose microprocessor that operates in cooperation with a general-purpose memory.
  • a general-purpose microprocessor a CPU, an MPU, and a GPU can be exemplified.
  • As the general-purpose memory ROM and RAM can be exemplified.
  • the ECU may be realized by a dedicated integrated circuit such as a microcontroller, ASIC, or FPGA.
  • the ECU may be realized by a combination of a general-purpose microprocessor and a dedicated integrated circuit.
  • the LiDAR data generated at two locations includes information associated with the reference height h. Therefore, the integrated use of both data for driving assistance becomes easy.
  • the LiDAR data generated at two locations is generated using only the signal S1 associated with the region P that is part of the detection range A of each LiDAR sensor unit 11. Therefore, even when both data are used in an integrated manner, an increase in processing load on the control device 101 can be suppressed.
  • the signal processing device 12 can be configured to acquire map information M.
  • the map information M can be information used for the navigation system of the vehicle 100, for example.
  • the map information M may be stored in advance in a storage device mounted on the vehicle 100, or may be downloaded from an external network periodically or as necessary.
  • the signal processing device 12 may be configured to associate the LiDAR data generated based on the signal S1 output from the LiDAR sensor unit 11 with the map information M. For example, when the LiDAR data indicates the presence of an object outside the vehicle 100, it can be determined by collating with the map information M whether the object is a structure such as a guardrail or a traffic sign.
  • LiDAR data and map information M can be used for driving support in an integrated manner.
  • Since an increase in the signal processing load related to the generation of LiDAR data can be suppressed, an increase in the processing load in integrated driving support combining the LiDAR data and the map information M can also be suppressed as a whole.
  • the above map information M can include three-dimensional information.
  • the signal processing apparatus 12 can associate LiDAR data with information corresponding to the reference height h in the map information M. That is, the two-dimensional map information corresponding to the reference height h can be extracted from the map information M including the three-dimensional information and associated with the LiDAR data.
  • the map information data used for the integrated driving support can be reduced, so that an increase in processing load in the integrated driving support can be further suppressed.
  • map information M acquired by the signal processing device 12 may be provided in advance as two-dimensional information.
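One way to picture the association of LiDAR data with a two-dimensional slice of the map information taken at the reference height h is sketched below. The grid layout, cell size, and function names are assumptions made for the example, not the patent's data format.

```python
# Sketch: extract the 2-D layer of a 3-D map closest to the reference height h
# and look up the map cell under each detected point. Illustrative only.
import numpy as np


def map_slice_at_height(map_3d, layer_heights_m, reference_height_m):
    """Horizontal layer of the 3-D map closest to the reference height."""
    layer = int(np.argmin(np.abs(layer_heights_m - reference_height_m)))
    return map_3d[layer]


def associate(lidar_points_xy, map_2d, cell_size_m=0.5):
    """Map cell value under each detected point (x, y in metres)."""
    cells = (lidar_points_xy / cell_size_m).astype(int)
    return map_2d[cells[:, 0], cells[:, 1]]


# Toy data: five height layers of a 100x100 grid, reference height 0.6 m.
grid = np.zeros((5, 100, 100))
grid[1, 10, 20] = 1                                   # e.g. a guardrail cell
layer = map_slice_at_height(grid, np.array([0.2, 0.6, 1.0, 1.4, 1.8]), 0.6)
print(associate(np.array([[5.0, 10.0]]), layer))      # [1.] -> known structure
```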
  • the sensor system 1 can include a left front lamp device 15.
  • the left front lamp device 15 may include a lamp housing 51 and a translucent cover 52.
  • the lamp housing 51 partitions the lamp chamber 53 together with the translucent cover 52.
  • The left front lamp device 15 is mounted on the left front corner LF of the vehicle 100 shown in FIG. 3.
  • the left front lamp device 15 may include a lamp unit 54.
  • the lamp unit 54 is a device that emits visible light to the outside of the vehicle 100.
  • the lamp unit 54 is accommodated in the lamp chamber 53.
  • Examples of the lamp unit 54 include a headlamp unit, a vehicle width lamp unit, a direction indicator lamp unit, and a fog lamp unit.
  • the translucent cover 52 is formed of a material that transmits not only visible light emitted from the lamp unit 54 but also light having a wavelength with which the sensor unit accommodated in the lamp chamber 53 has sensitivity.
  • Since the left front lamp device 15 has a function of supplying light to the outside of the vehicle 100, it is generally arranged in a place with little shielding, such as the left front corner LF. By arranging the sensor unit in such a place, information outside the vehicle 100 can be acquired efficiently.
  • When the reference height information H is acquired from the auto leveling system of the vehicle 100, that information can be shared with the lamp unit 54. In this case, an efficient system design is possible.
  • A right front lamp device having a configuration symmetrical to the left front lamp device 15 can be mounted on the right front corner RF of the vehicle 100 shown in FIG. 3.
  • a left rear lamp device may be mounted on the left rear corner LB of the vehicle 100.
  • examples of the lamp unit included in the left rear lamp device may include a brake light unit, a taillight unit, a vehicle width light unit, and a reverse light unit.
  • a right rear lamp device having a configuration symmetrical to the left rear lamp device may be mounted on the right rear corner RB of the vehicle 100.
  • the sensor unit described above can be accommodated in a lamp chamber defined by a lamp housing and a light-transmitting cover.
  • the first embodiment is merely an example for facilitating understanding of the present disclosure.
  • the configuration according to the first embodiment can be changed or improved as appropriate without departing from the spirit of the present disclosure.
  • In the first embodiment, the example in which the sensor system 1 includes the LiDAR sensor unit 11 and the camera unit 14 has been described.
  • the sensor system 1 can be configured to include at least one of a LiDAR sensor unit, a camera unit, and a millimeter wave sensor unit.
  • the camera unit can include a visible light camera unit and an infrared camera unit.
  • the millimeter wave sensor unit has a configuration for transmitting a millimeter wave and a configuration for receiving a reflected wave resulting from the reflection of the millimeter wave by an object existing outside the vehicle 100.
  • Examples of the millimeter wave frequency include 24 GHz, 26 GHz, 76 GHz, and 79 GHz.
  • the millimeter wave sensor unit can acquire the distance to the object associated with the reflected wave based on the time from when the millimeter wave is transmitted in a certain direction until the reflected wave is received. Further, by accumulating such distance data in association with the detection position, it is possible to acquire information related to the motion of the object associated with the reflected wave.
  • the reference height information H is acquired from the auto leveling system. However, if information indicating the reference height h is obtained, the reference height information H may be acquired from a vehicle height sensor or the like.
  • At least a part of the functions of the signal processing device 12 described above can be realized by the control device 101 shown in FIG. 3.
  • FIG. 4 shows a configuration of the sensor system 2 according to the second embodiment.
  • The sensor system 2 is mounted on the vehicle 100 shown in FIG. 3.
  • the sensor system 2 includes a plurality of sensor units 20.
  • Each of the plurality of sensor units 20 is a device that detects information outside the vehicle 100 and outputs a signal corresponding to the information.
  • the plurality of sensor units 20 includes a first sensor unit 21 and a second sensor unit 22.
  • the first sensor unit 21 is configured to detect first information outside the vehicle 100 and output a first signal S1 corresponding to the first information.
  • the first sensor unit 21 can be any one of a LiDAR sensor unit, a camera unit, and a millimeter wave sensor unit.
  • the second sensor unit 22 is configured to detect second information outside the vehicle 100 and output a second signal S2 corresponding to the second information.
  • the second sensor unit 22 can be any one of a LiDAR sensor unit, a camera unit, and a millimeter wave sensor unit.
  • the LiDAR sensor unit has a configuration for emitting non-visible light and a configuration for detecting return light as a result of reflection of the non-visible light on at least an object existing outside the vehicle.
  • the LiDAR sensor unit can include a scanning mechanism that sweeps the invisible light by changing the emission direction (that is, the detection direction) as necessary. For example, infrared light having a wavelength of 905 nm can be used as invisible light.
  • the camera unit is a device for acquiring an image as information outside the vehicle.
  • the image can include at least one of a still image and a moving image.
  • the camera unit may include a camera having sensitivity to visible light, or may include a camera having sensitivity to infrared light.
  • the millimeter wave sensor unit has a configuration for transmitting a millimeter wave and a configuration for receiving a reflected wave resulting from the reflection of the millimeter wave by an object existing outside the vehicle 100.
  • Examples of the millimeter wave frequency include 24 GHz, 26 GHz, 76 GHz, and 79 GHz.
  • the first sensor unit 21 and the second sensor unit 22 may be a plurality of sensor units arranged in different areas in the vehicle 100.
  • the first sensor unit 21 and the second sensor unit 22 may be disposed at the left front corner LF and the right front corner RF of the vehicle 100, respectively.
  • the first sensor unit 21 and the second sensor unit 22 may be disposed at the left front corner LF and the left rear corner LB of the vehicle 100, respectively.
  • the first sensor unit 21 and the second sensor unit 22 may be a plurality of sensor units arranged in substantially the same region in the vehicle 100.
  • both the first sensor unit 21 and the second sensor unit 22 can be disposed at the left front corner LF of the vehicle 100.
  • the sensor system 2 includes a signal processing device 30.
  • the signal processing device 30 can be arranged at an arbitrary position in the vehicle 100.
  • the signal processing device 30 can be realized by a general-purpose microprocessor that operates in cooperation with a general-purpose memory.
  • a general-purpose microprocessor a CPU, an MPU, and a GPU can be exemplified.
  • the general-purpose memory ROM and RAM can be exemplified.
  • the ROM can store a computer program that realizes processing to be described later.
  • The general-purpose microprocessor reads at least a part of a program stored in the ROM, loads it into the RAM, and executes the above-described processing in cooperation with the RAM.
  • the signal processing device 30 may be realized by a dedicated integrated circuit such as a microcontroller, ASIC, or FPGA that can execute a computer program that realizes processing to be described later.
  • the signal processing device 30 may be realized by a combination of a general-purpose microprocessor and a dedicated integrated circuit.
  • the signal processing device 30 acquires the first signal S1 from the first sensor unit 21. In other words, the signal processing device 30 acquires the first information detected by the first sensor unit 21 by receiving the first signal S1 (STEP 21).
  • the signal processing device 30 acquires the second signal S2 from the second sensor unit 22.
  • the signal processing device 30 acquires the second information detected by the second sensor unit 22 by receiving the second signal S2 (STEP 22).
  • the order in which the processing in STEP 21 and the processing in STEP 22 are performed may be reversed.
  • the processing of STEP 21 and the processing of STEP 22 may be performed simultaneously.
  • the signal processing device 30 determines whether the first information and the second information include the same feature based on the first signal S1 and the second signal S2 (STEP 23).
  • the signal processing device 30 determines whether the feature can be a reference target (STEP 24).
  • the “reference target” means a feature that can be detected by the sensor unit 20 and can provide reference position information.
  • Examples of the reference target include a license plate of a preceding vehicle, a guardrail, a sound barrier, a traffic light, a traffic sign, and a center line.
  • That is, a feature whose height from the road surface or distance from the road shoulder is legally determined, and whose position can therefore be identified with relatively high accuracy once its presence is detected, can be a reference target.
  • However, such a feature may not always serve as a reference target. For example, even if a license plate of a preceding vehicle is detected as a feature, if its position cannot be determined because of relative movement with respect to the preceding vehicle, it is determined that the license plate cannot be a reference target. That is, the condition may be that a feature is determined to be usable as a reference target when the certainty with which position information can be read from the detected feature exceeds a predetermined value.
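The condition at the end of the preceding item can be expressed as a simple threshold test; the sketch below is an illustration with an assumed certainty score and threshold, not the patent's criterion.

```python
# STEP 24 (illustrative): a detected feature is usable as a reference target
# only when the certainty of reading stable position information from it
# exceeds a predetermined value. Score and threshold are assumptions.
def can_be_reference_target(position_certainty, predetermined_value=0.9):
    """Decide whether the detected feature can serve as a reference target."""
    return position_certainty > predetermined_value


print(can_be_reference_target(0.95))  # True  -> stationary guardrail, etc.
print(can_be_reference_target(0.40))  # False -> e.g. a moving license plate
```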
  • When the feature can be a reference target, the signal processing device 30 detects a difference between the position of the reference target specified by the first information and the position of the reference target specified by the second information (STEP 25).
  • the plurality of sensor units 20 mounted on the vehicle may be displaced due to vibration during traveling or the passage of time.
  • Such a misalignment leads to a phenomenon in which the same reference target is detected by the plurality of sensor units 20, but the specified positions of that reference target differ among the plurality of sensor units 20. Therefore, by detecting the difference, it is possible to recognize that a positional deviation has occurred in at least one of the plurality of sensor units 20.
  • For example, when the magnitude of the difference exceeds a predetermined value, the user can be notified.
  • the user can take appropriate measures to correct the misalignment. Therefore, the detection accuracy of the plurality of sensor units 20 required for driving support of the vehicle 100 can be maintained.
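The comparison of STEP 25 and the notification mentioned above can be sketched as follows; the position representation, distance measure, and threshold are assumptions made for illustration.

```python
# Illustrative sketch of STEP 25: compare the positions of the same reference
# target as specified through two sensor units and notify the user when the
# difference exceeds a predetermined value.
import math


def position_difference(pos_a, pos_b):
    """Distance between the two specified reference-target positions."""
    return math.dist(pos_a, pos_b)


def check_alignment(pos_from_unit1, pos_from_unit2, threshold_m=0.1):
    """Return True when a misalignment notification should be issued."""
    diff = position_difference(pos_from_unit1, pos_from_unit2)
    if diff > threshold_m:
        print(f"Possible misalignment: reference target differs by {diff:.2f} m")
        return True
    return False


# Example: the top edge of a guardrail appears 0.18 m higher to one unit.
check_alignment((12.0, 0.80), (12.0, 0.98))   # prints a notification
```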
  • As an example, consider a case where a left front LiDAR unit as the first sensor unit 21 is arranged at the left front corner LF of the vehicle 100, and a left rear LiDAR unit as the second sensor unit 22 is arranged at the left rear corner LB of the vehicle 100.
  • the front left LiDAR unit acquires information on an object existing in an area including the left side of the vehicle 100.
  • the information is an example of first information.
  • the left front LiDAR unit outputs a first signal S1 corresponding to the first information.
  • the signal processing device 30 acquires the first signal S1 (STEP 21).
  • the left rear LiDAR unit acquires information on an object existing in an area including the left side of the vehicle 100.
  • the information is an example of second information.
  • the left rear LiDAR unit outputs a second signal S2 corresponding to the second information.
  • the signal processing device 30 acquires the second signal S2 (STEP 22).
  • the signal processing device 30 performs information processing based on the first signal S1 and the second signal S2, thereby determining whether the same information is included in the first information and the second information (STEP 23).
  • the signal processing device 30 determines whether the guardrail can be a reference target (STEP24).
  • the signal processing device 30 detects a difference between the position (height) of the upper end of the guard rail identified through the left front LiDAR unit and the position (height) of the upper end of the guard rail identified through the left rear LiDAR unit ( (STEP 25).
  • a notification indicating that a positional deviation has occurred in at least one of the left front LiDAR unit and the left rear LiDAR unit is made.
  • As another example, consider a case where a left front camera unit as the first sensor unit 21 is arranged at the left front corner LF of the vehicle 100, and a right front camera unit as the second sensor unit 22 is arranged at the right front corner RF of the vehicle 100.
  • the left front camera unit acquires the first image including the front of the vehicle 100.
  • the first image is an example of first information.
  • the left front camera unit outputs a first signal S1 corresponding to the first image.
  • the signal processing device 30 acquires the first signal S1 (STEP 21).
  • the front right camera unit acquires a second image including the front of the vehicle 100.
  • the second image is an example of second information.
  • the right front camera unit outputs a second signal S2 corresponding to the second image.
  • the signal processing device 30 acquires the second signal S2 (STEP 22).
  • the signal processing device 30 performs image processing based on the first signal S1 and the second signal S2, thereby determining whether or not the same feature is included in the first image and the second image (STEP 23).
  • the signal processing device 30 determines whether the center line can be a reference target (STEP 24).
  • the signal processing device 30 detects a difference between the position of the center line specified through the left front camera unit and the position of the center line specified through the right front camera unit (STEP 25).
  • a notification indicating that a positional deviation has occurred in at least one of the left front camera unit and the right front camera unit is made.
  • As yet another example, a left front LiDAR unit as the first sensor unit 21 and a left front camera unit as the second sensor unit 22 may both be arranged at the left front corner LF of the vehicle 100. The left front LiDAR unit acquires information on an object existing in an area including the front of the vehicle 100.
  • the information is an example of first information.
  • the left front LiDAR unit outputs a first signal S1 corresponding to the first information.
  • the signal processing device 30 acquires the first signal S1 (STEP 21).
  • the left front camera unit acquires a second image including the front of the vehicle 100.
  • the second image is an example of second information.
  • the left front camera unit outputs a second signal S2 corresponding to the second image.
  • the signal processing device 30 acquires the second signal S2 (STEP 22).
  • the signal processing device 30 performs information processing based on the first signal S1 and the second signal S2, thereby determining whether the same information is included in the first information and the second information (STEP 23).
  • the signal processing device 30 determines whether the traffic signal can be a reference target (STEP 24).
  • the signal processing device 30 detects the difference between the position of the traffic light identified through the left front LiDAR unit and the position of the traffic light identified through the left front camera unit (STEP 25).
  • a notification indicating that a positional deviation has occurred in at least one of the left front LiDAR unit and the left front camera unit is made.
  • As described above, the reference target is preferably one that can provide a positional reference in the height direction. This is because a reference position in the height direction varies relatively little with the traveling state of the vehicle 100, which makes it easy to suppress an increase in the processing load of the signal processing device 30.
  • In addition, the signal processing device 30 can acquire the change over time of the position of the reference target specified based on the first information and of the position of the reference target specified based on the second information (STEP 26). Specifically, the signal processing device 30 repeats the processing from STEP 21 to STEP 25 at a predetermined timing, and compares the position of the reference target specified based on the first information in the latest processing with the position of the reference target specified based on the first information in the previous processing. Similarly, the position of the reference target specified based on the second information in the latest processing is compared with the position of the reference target specified based on the second information in the previous processing. Examples of the predetermined timing include the elapse of a predetermined time from the end of the previous processing and the time when the user inputs a processing execution instruction.
  • the signal processing device 30 specifies the sensor unit 20 that needs to be corrected based on the acquired temporal change (STEP 27). For example, when the position of the reference target specified by the first sensor unit 21 does not change over time and the position of the reference target specified by the second sensor unit 22 changes over time, the second sensor unit 22 It is determined that there is a cause of misalignment on the side and correction is necessary. The identified sensor unit can be notified to the user.
  • the sensor unit 20 that needs to be corrected can be specified by monitoring the change with time of the position of the reference target specified by each sensor unit 20. Therefore, the detection accuracy of the plurality of sensor units 20 required for driving support of the vehicle 100 can be more easily maintained.
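STEP 26 and STEP 27 can be pictured as tracking each unit's estimate over repeated measurements and flagging the unit whose estimate drifts, as in the sketch below. The data layout and drift threshold are assumptions made for illustration.

```python
# Illustrative sketch of STEPs 26-27: the unit whose reference-target position
# changes over time is the one that likely needs correction.
def drifting_units(position_history, drift_threshold_m=0.05):
    """Names of units whose estimated position changed beyond the threshold."""
    flagged = []
    for unit, positions in position_history.items():
        if abs(positions[-1] - positions[0]) > drift_threshold_m:
            flagged.append(unit)
    return flagged


history = {
    "first_sensor_unit": [0.80, 0.80, 0.81],    # stable estimate
    "second_sensor_unit": [0.80, 0.88, 0.95],   # drifting -> needs correction
}
print(drifting_units(history))                  # ['second_sensor_unit']
```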
  • the plurality of sensor units 20 may include a third sensor unit 23.
  • the third sensor unit 23 may be configured to detect third information outside the vehicle 100 and output a third signal S3 corresponding to the third information.
  • the third sensor unit 23 can be any of a LiDAR sensor unit, a camera unit, and a millimeter wave sensor unit.
  • the third sensor unit 23 can be arranged in a different area in the vehicle 100 from the first sensor unit 21 and the second sensor unit 22.
  • For example, the third sensor unit 23 can be arranged at the right rear corner RB of the vehicle 100.
  • the third sensor unit 23 can be disposed in substantially the same region of the vehicle 100 as at least one of the first sensor unit 21 and the second sensor unit 22.
  • the signal processing device 30 acquires the third signal S3 from the third sensor unit 23.
  • the signal processing device 30 acquires the third information detected by the third sensor unit 23 by receiving the third signal S3 (STEP 28).
  • the order of STEP 21, STEP 22, and STEP 28 is arbitrary.
  • STEP28 may be performed simultaneously with at least one of STEP21 and STEP22.
  • the signal processing device 30 determines whether the first information, the second information, and the third information include the same feature based on the first signal S1, the second signal S2, and the third signal S3. Determine (STEP 23).
  • When it is determined that the first information, the second information, and the third information do not include the same feature, the processing by the signal processing device 30 ends.
  • When it is determined that the same feature is included, the signal processing device 30 determines whether the feature can be a reference target (STEP 24).
  • the signal processing device 30 determines the position of the reference target specified by the first information and the reference specified by the second information. A difference between the position of the target and the position of the reference target specified by the third information is detected (STEP 25).
  • the signal processing device 30 identifies the sensor unit 20 that needs to be corrected based on the difference identified in STEP 25 (STEP 27). For example, when the position of the reference target specified by the first sensor unit 21 and the second sensor unit 22 is the same and only the position of the reference target specified by the third sensor unit 23 is different, There is a high probability that the sensor unit 23 is displaced. The identified sensor unit 20 can be left to the user.
  • the sensor unit 20 that needs to be corrected can be specified without repeating the process of specifying the position of the reference target. Therefore, the detection accuracy of the plurality of sensor units 20 required for driving support of the vehicle 100 can be more easily maintained.
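  • The three-unit comparison of STEP 25 and STEP 27 can be pictured as a simple outlier check; the function name, the agreement tolerance, and the scalar positions below are assumptions for illustration only and are not part of the original disclosure.

```python
# Sketch: the unit whose reference-target position disagrees with the two agreeing
# units is flagged as probably displaced.
TOLERANCE = 0.05  # metres; assumed agreement tolerance

def identify_outlier(p1, p2, p3):
    """Return which of the three readings disagrees with the other two, if any."""
    positions = {"first_sensor_unit_21": p1,
                 "second_sensor_unit_22": p2,
                 "third_sensor_unit_23": p3}
    names = list(positions)
    for candidate in names:
        others = [positions[n] for n in names if n != candidate]
        others_agree = abs(others[0] - others[1]) <= TOLERANCE
        candidate_differs = all(abs(positions[candidate] - o) > TOLERANCE for o in others)
        if others_agree and candidate_differs:
            return candidate  # high probability that this unit is displaced
    return None  # all three agree, or there is no single outlier

print(identify_outlier(1.20, 1.21, 1.45))  # -> third_sensor_unit_23
```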
  • the sensor system 2 can include a left front lamp device 40.
  • the left front lamp device 40 may include a lamp housing 41 and a translucent cover 42.
  • the lamp housing 41 partitions the lamp chamber 43 together with the translucent cover 42.
  • the left front lamp device 40 is mounted on the left front corner LF of the vehicle 100 shown in FIG.
  • the left front lamp device 40 may include a lamp unit 44.
  • the lamp unit 44 is a device that emits visible light to the outside of the vehicle 100.
  • the lamp unit 44 is accommodated in the lamp chamber 43. Examples of the lamp unit 44 include a headlamp unit, a vehicle width lamp unit, a direction indicator lamp unit, and a fog lamp unit.
  • At least one sensor unit 20 is arranged in the lamp chamber 43. Since the lamp unit 44 has the function of supplying light to the outside of the vehicle 100, it is generally disposed in a place with little shielding, such as the left front corner LF. By arranging the sensor unit 20 in such a place, information outside the vehicle 100 can be acquired efficiently.
  • a right front lamp device having a symmetrical configuration with the left front lamp device 40 can be mounted on the right front corner RF of the vehicle 100 shown in FIG.
  • a left rear lamp device may be mounted on the left rear corner LB of the vehicle 100.
  • examples of the lamp unit included in the left rear lamp device may include a brake light unit, a taillight unit, a vehicle width light unit, and a reverse light unit.
  • a right rear lamp device having a configuration symmetrical to the left rear lamp device may be mounted on the right rear corner RB of the vehicle 100.
  • at least one sensor unit 20 can be disposed in a lamp chamber defined by a lamp housing.
  • the second embodiment is merely an example for facilitating understanding of the present disclosure.
  • the configuration according to the second embodiment can be changed or improved as appropriate without departing from the spirit of the present disclosure.
  • FIG. 7 shows a functional configuration of the image data generation apparatus 301 according to the third embodiment.
  • the image data generation device 301 is mounted on a vehicle.
  • FIG. 8 shows an example of a vehicle 400 on which the image data generation device 301 is mounted.
  • the vehicle 400 is a towing vehicle having a tractor portion 401 and a trailer portion 402.
  • the tractor portion 401 includes a driver seat 403.
  • the vehicle 400 includes a camera 404.
  • the camera 404 is a device for acquiring an image behind the driver seat 403.
  • the camera 404 is configured to output a camera signal corresponding to the acquired image.
  • the image data generation apparatus 301 includes an input interface 311.
  • a camera signal CS output from the camera 404 is input to the input interface 311.
  • the image data generation device 301 further includes a processor 312, an output interface 313, and a communication bus 314.
  • the input interface 311, the processor 312, and the output interface 313 can exchange signals and data via the communication bus 314.
  • the processor 312 is configured to execute the processing shown in FIG.
  • the processor 312 acquires the camera signal CS input to the input interface 311 (STEP 31).
  • the expression “acquire the camera signal CS” means that the camera signal CS input to the input interface 311 is placed, via an appropriate circuit configuration, in a state in which the processing described later can be performed on it.
  • the processor 312 generates the first image data D1 based on the camera signal CS (STEP 32). As shown in FIG. 7, the first image data D1 is transmitted to the display device 405 mounted on the vehicle 400 via the output interface 313.
  • the display device 405 may be disposed in the passenger compartment of the vehicle 400 or may be disposed at the position of the side door mirror.
  • the first image data D1 is data corresponding to the first monitoring image I1 displayed on the display device 405.
  • FIG. 10A shows an example of the first monitoring image I1.
  • an image behind the driver's seat 403 acquired by the camera 404 is constantly displayed on the display device 405.
  • the driver acquires information behind the driver's seat 403 through the first monitoring image I1 displayed on the display device 405.
  • an arrow X indicates the direction of the optical axis of the camera 404.
  • the tractor portion 401 can take a posture that is greatly bent relative to the trailer portion 402, as indicated by the two-dot chain line in FIG.
  • the optical axis of the camera 404 may face the side wall of the trailer portion 402, which may hinder the driver's rearward visual recognition.
  • the processor 312 determines a reference object included in the first monitoring image I1 (STEP 33).
  • the “reference object” is defined as an object that needs to be always included in the first monitoring image I1 so that the driver can continuously acquire information behind the driver's seat 403, and that is comparatively easy to recognize in the image.
  • the rear end edge 402a of the trailer portion 402 of the vehicle 400 is used as a reference object.
  • the processor 312 determines the trailing edge 402a as a reference object using, for example, an edge extraction technique.
  • the rear end edge 402a of the trailer portion 402 is an example of a rear portion of the vehicle 400.
  • “Rear part” means a part of the vehicle 400 that is located behind the driver's seat 403.
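  • As a rough illustration of the edge-extraction approach mentioned above, the following sketch assumes OpenCV is available; the thresholds and the choice of the strongest vertical edge as a stand-in for the rear end edge 402a are assumptions, not the method fixed by this disclosure.

```python
import cv2
import numpy as np

def find_reference_edge(frame_bgr):
    """Return the image column with the strongest edge response, used here as a
    stand-in for the rear end edge 402a of the trailer portion 402."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)        # binary edge map
    column_strength = edges.sum(axis=0)      # edge pixels per column
    return int(np.argmax(column_strength))

# Synthetic example: a bright region on the right half produces a strong vertical edge.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[:, 400:] = 255
print(find_reference_edge(frame))  # prints a column index near 400
```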
  • the processor 312 determines whether the reference object determined in STEP 33 is included in a predetermined area in the first monitoring image I1 (STEP 34).
  • the predetermined area may be the entire first monitoring image I1 shown in FIG. 10A.
  • the predetermined area may be defined as the part of the first monitoring image I1 on the right side of the boundary line BD indicated by the one-dot chain line in the drawing.
  • the first monitoring image I1 is continuously displayed on the display device 405.
  • when the predetermined area is the entire first monitoring image I1, it is determined that the rear end edge 402a as the reference object is included in the predetermined area.
  • when the predetermined area is the region on the right side of the boundary line BD, it is determined that the rear end edge 402a as the reference object is not included in the predetermined area (N in STEP 34).
  • if it is determined that the reference object is not included in the predetermined area in the first monitoring image I1, the processor 312 generates the second image data D2, as shown in FIG. 9 (STEP 35).
  • the second image data D2 is data for causing the display device 405 to display the second monitoring image I2 in which the reference object is included in a predetermined area. As illustrated in FIG. 7, the second image data D2 is transmitted to the display device 405 mounted on the vehicle 400 via the output interface 313.
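  • The branching between the first image data D1 and the second image data D2 (STEP 33 to STEP 35) can be summarized as follows; the stub helper and the rectangular representation of the predetermined area are assumptions used only to show the decision made by the processor 312.

```python
def find_reference_object(image):
    # Stub: would locate the rear end edge 402a, e.g. by edge extraction (see above).
    return (120, 300)  # assumed (x, y) pixel position of the reference object

def choose_image_data(image, predetermined_area):
    """Return 'D1' when the reference object lies inside the predetermined area, else 'D2'."""
    ref_x, ref_y = find_reference_object(image)          # STEP 33
    x0, y0, x1, y1 = predetermined_area
    if x0 <= ref_x <= x1 and y0 <= ref_y <= y1:           # STEP 34
        return "D1"   # keep the first monitoring image I1 on the display device 405
    return "D2"       # generate the second monitoring image I2 instead (STEP 35)

# Example: a predetermined area covering the right half of a 640x480 image.
print(choose_image_data(image=None, predetermined_area=(320, 0, 640, 480)))  # -> D2
```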
  • FIG. 10B shows an example of the second monitoring image I2.
  • a predetermined area can be defined similarly to the first monitoring image I1.
  • in FIG. 10B, the region on the right side of the boundary line BD is likewise the predetermined region, and it can be seen that the rear end edge 402a of the trailer portion 402 as the reference object is included in that region. The driver can therefore continue to acquire information behind the driver's seat 403 through the second monitoring image I2 displayed on the display device 405.
  • either the first image data D1 or the second image data D2 is generated according to the position of the reference object, so that the first monitoring image I1 or the second monitoring image I2, whichever includes the reference object in the predetermined region, can continue to be displayed on the display device 405. Therefore, a situation in which the driver's rearward visual recognition is hindered by the state of the vehicle 400 can be avoided. That is, it is possible to provide an electronic mirror technology with further improved rearward visibility.
  • FIG. 11 is a diagram for explaining the first specific example.
  • a symbol I0 indicates the entire image acquired by the camera 404.
  • the first monitoring image I1 and the second monitoring image I2 can be different portions of the image I0. That is, the processor 312 generates the first image data D1 based on the camera signal CS corresponding to a first portion of the image acquired by the camera 404. Similarly, the processor 312 generates the second image data D2 based on the camera signal CS corresponding to a second portion of the image acquired by the camera 404.
  • since the first monitoring image I1 and the second monitoring image I2 are generated by clipping the minimum necessary area from the original image, it is possible both to secure the field of view and to prevent deterioration of visibility.
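  • A minimal sketch of this first specific example, assuming the clipping is done on a decoded image array; the crop rectangles are illustrative values, and numpy slicing stands in for the actual processing of the camera signal CS.

```python
import numpy as np

image_I0 = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder for the whole image I0

first_portion  = image_I0[:, 160:640]   # portion used for the first monitoring image I1
second_portion = image_I0[:, 0:480]     # portion used for the second monitoring image I2

print(first_portion.shape, second_portion.shape)  # each is a minimum-necessary region of I0
```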
  • FIG. 12 is a diagram for explaining a second specific example.
  • the angle of view of the camera 404 can be changed.
  • a known mechanism for changing the angle of view is provided in the camera 404.
  • the angle of view of the camera 404 can be changed by the processor 312 sending a control signal S to the camera 404 through the output interface 313 as shown in FIG.
  • the processor 312 generates the first image data D1 based on the image acquired when the angle of view of the camera 404 is the first angle of view θ1.
  • the processor 312 generates the second image data D2 based on the image acquired when the angle of view of the camera 404 is the second angle of view θ2.
  • the second angle of view θ2 is wider than the first angle of view θ1. That is, the second monitoring image I2 is displayed on the display device 405 as a wider-angle image.
  • the rear end edge 402a of the trailer portion 402 serving as the reference object exists within the field of view of the first angle of view θ1. Accordingly, the first monitoring image I1 is generated.
  • when the posture of the vehicle 400 changes, however, the rear end edge 402a may move out of the field of view of the first angle of view θ1.
  • in that case, the processor 312 performs control to widen the angle of view of the camera 404 to the second angle of view θ2. As a result, the rear end edge 402a falls within the field of view of the second angle of view θ2, and an appropriate second monitoring image I2 is obtained.
  • if a camera 404 with a wide angle of view were used from the beginning, a wide range could be visually recognized, but objects in the monitoring image displayed on the display device 405 would inevitably appear small.
  • since the second monitoring image I2 is generated by widening the angle of view only when necessary, it is possible both to secure the field of view and to prevent deterioration of visibility.
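  • A minimal sketch of this second specific example, assuming the angle of view is commanded through a control value; the helper name and the numeric angles are illustrative assumptions.

```python
FIRST_ANGLE_DEG = 60    # assumed value for the first angle of view θ1
SECOND_ANGLE_DEG = 100  # assumed value for the wider second angle of view θ2

def send_control_signal(angle_deg):
    # Stands in for the processor 312 sending the control signal S via the output interface 313.
    print(f"set camera 404 angle of view to {angle_deg} degrees")

def update_angle_of_view(reference_in_view: bool):
    if reference_in_view:
        send_control_signal(FIRST_ANGLE_DEG)   # narrow view: objects stay large and easy to see
    else:
        send_control_signal(SECOND_ANGLE_DEG)  # widen only when needed to keep the edge 402a visible

update_angle_of_view(reference_in_view=False)
```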
  • FIG. 13 is a diagram for explaining a third specific example.
  • the direction of the optical axis of the camera 404 can be changed.
  • a known swivel mechanism that changes the direction of the optical axis is provided in the camera 404.
  • the direction of the optical axis of the camera 404 can be changed by the processor 312 sending a control signal S to the camera 404 via the output interface 313 as shown in FIG.
  • the processor 312 generates the first image data D1 based on the image acquired when the optical axis of the camera 404 is oriented in the first direction X1.
  • the processor 312 generates the second image data D2 based on the image acquired when the direction of the optical axis of the camera 404 is in the second direction X2 different from the first direction X1. That is, the imaging target located in the center differs between the first monitoring image I1 and the second monitoring image I2.
  • the rear end edge 402a of the trailer portion 402 serving as the reference object exists within the field of view in which the direction of the optical axis is the first direction X1. Accordingly, the first monitoring image I1 is generated.
  • when the rear end edge 402a moves out of that field of view, the processor 312 performs control to change the direction of the optical axis of the camera 404 from the first direction X1 to the second direction X2. As a result, the rear end edge 402a falls within the field of view in which the direction of the optical axis is the second direction X2, and an appropriate second monitoring image I2 is obtained.
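  • A minimal sketch of this third specific example, representing the two optical-axis directions as assumed pan angles; the helper name and the values are illustrative only.

```python
DIRECTION_X1_DEG = 0     # assumed pan angle for the first direction X1
DIRECTION_X2_DEG = -25   # assumed pan angle for the second direction X2

def set_optical_axis(pan_deg):
    # Stands in for the control signal S driving the swivel mechanism of the camera 404.
    print(f"pan camera 404 optical axis to {pan_deg} degrees")

def update_optical_axis(reference_in_view: bool):
    set_optical_axis(DIRECTION_X1_DEG if reference_in_view else DIRECTION_X2_DEG)

update_optical_axis(reference_in_view=False)
```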
  • FIG. 14 is a diagram for explaining a fourth specific example.
  • a first camera 404a and a second camera 404b are mounted on the vehicle 400 as the camera 404.
  • the direction of the optical axis of the first camera 404a is different from the direction of the optical axis of the second camera 404b.
  • the processor 312 generates the first image data D1 based on the image acquired by the first camera 404a.
  • the processor 312 generates the second image data D2 based on the image acquired by the second camera 404b. Switching of the operating camera can be performed by the processor 312 transmitting the control signal S through the output interface 313.
  • the rear end edge 402a of the trailer portion 402 serving as the reference object exists within the field of view of the first camera 404a. Accordingly, the first monitoring image I1 is generated.
  • when the rear end edge 402a moves out of the field of view of the first camera 404a, the processor 312 performs control to switch the camera used for image acquisition from the first camera 404a to the second camera 404b. As a result, the rear end edge 402a falls within the field of view of the second camera 404b, and an appropriate second monitoring image I2 is obtained.
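  • A minimal sketch of this fourth specific example; the dictionary of cameras and the selection helper are assumptions used only to show how the active camera could be switched.

```python
cameras = {"404a": "first camera (optical axis toward the first direction)",
           "404b": "second camera (optical axis in a different direction)"}

def select_active_camera(reference_in_view_of_404a: bool) -> str:
    return "404a" if reference_in_view_of_404a else "404b"

active = select_active_camera(reference_in_view_of_404a=False)
print(f"generate image data from camera {active}: {cameras[active]}")
```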
  • the input interface 311 of the image data generation apparatus 301 can accept an input from the user interface 406.
  • the user interface 406 is provided in the passenger compartment of the vehicle 400 and can accept tactile operation instructions (via buttons, a touch panel device, and the like), voice input instructions, line-of-sight input instructions, and so on.
  • the reference object used for the determination in STEP 34 in FIG. 9 can be designated by the user via the first monitoring image I1 displayed on the display device 405.
  • when the user interface 406 is a touch panel device provided on the display device 405, the user can touch an appropriate location included in the first monitoring image I1 to designate that location as the reference object.
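  • A minimal sketch of this designation by touch, assuming the touch panel reports pixel coordinates through a callback; the callback name and the stored point are illustrative only.

```python
reference_point = None  # (x, y) in image coordinates, set by the user

def on_touch(x: int, y: int):
    """Called by the touch panel device (user interface 406) with the touched pixel."""
    global reference_point
    reference_point = (x, y)
    print(f"reference object designated at {reference_point}")

on_touch(412, 233)  # example: a touch near the trailer's rear end edge in I1
```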
  • the functions of the processor 312 described above can be realized by a general-purpose microprocessor that operates in cooperation with a memory.
  • examples of the general-purpose microprocessor include a CPU, an MPU, and a GPU.
  • Examples of general-purpose memory include ROM and RAM.
  • the ROM can store a computer program for executing the above processing.
  • the general-purpose microprocessor can designate at least a part of the program stored in the ROM, load it into the RAM, and execute the above-described processing in cooperation with the RAM.
  • the functions of the processor 312 described above may be realized by a dedicated integrated circuit, such as a microcontroller, an ASIC, or an FPGA, capable of executing a computer program that implements the above-described processing.
  • the functions of the processor 312 described above may be realized by a combination of a general-purpose microprocessor and a dedicated integrated circuit.
  • the third embodiment is merely an example for facilitating understanding of the present disclosure.
  • the configuration according to the third embodiment may be changed or improved as appropriate without departing from the spirit of the present disclosure.
  • a part of the rear portion of the vehicle 400 is designated as the reference object.
  • when automatic platooning is performed, a part of another vehicle traveling behind the vehicle 400 may be designated as the reference object.
  • the number and position of the cameras 404 mounted on the vehicle 400 can be appropriately determined according to the specifications of the vehicle 400.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Mechanical Engineering (AREA)
  • Signal Processing (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Studio Devices (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention relates to a LiDAR sensor unit (11) that detects information relating to the exterior of a vehicle and outputs a signal (S1) corresponding to said information. A signal processing device (12) processes the signal (S1) to generate LiDAR data corresponding to the information. The signal processing device (12) acquires reference height information (H) indicating a reference height determined on the basis of a pitch angle of the vehicle. On the basis of the reference height information (H), the signal processing device (12) generates the LiDAR data using the signal (S1) associated with a region corresponding to the reference height within a detection area of the LiDAR sensor unit (11).
PCT/JP2019/008084 2018-03-05 2019-03-01 Système de capteur et dispositif de génération de données d'image WO2019172117A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020504983A JP7288895B2 (ja) 2018-03-05 2019-03-01 センサシステム、および画像データ生成装置

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2018-038879 2018-03-05
JP2018038879 2018-03-05
JP2018-049652 2018-03-16
JP2018049652 2018-03-16
JP2018-051287 2018-03-19
JP2018051287 2018-03-19

Publications (1)

Publication Number Publication Date
WO2019172117A1 true WO2019172117A1 (fr) 2019-09-12

Family

ID=67847235

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/008084 WO2019172117A1 (fr) 2018-03-05 2019-03-01 Système de capteur et dispositif de génération de données d'image

Country Status (2)

Country Link
JP (1) JP7288895B2 (fr)
WO (1) WO2019172117A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021056060A (ja) * 2019-09-30 2021-04-08 株式会社デンソー 車両に搭載されているライダーの傾きの検出装置、および車両に搭載されているライダーの傾きの検出方法
JP2021155033A (ja) * 2020-03-20 2021-10-07 メクラ・ラング・ゲーエムベーハー・ウント・コー・カーゲーMEKRA Lang GmbH & Co. KG 車両用のビューイングシステム、およびビューイングシステムによって表示される画像領域間で切り換える方法
CN114347916A (zh) * 2020-10-12 2022-04-15 丰田自动车株式会社 车辆用传感器安装构造

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2849127B2 (ja) * 1989-09-29 1999-01-20 マツダ株式会社 移動車の環境認識装置
JP4040620B2 (ja) 2004-11-30 2008-01-30 本田技研工業株式会社 車両周辺監視装置
JP2008026933A (ja) 2006-07-18 2008-02-07 Toyota Motor Corp 周辺環境認識装置
JP6106495B2 (ja) 2013-04-01 2017-03-29 パイオニア株式会社 検出装置、制御方法、プログラム及び記憶媒体

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0996525A (ja) * 1995-09-29 1997-04-08 Suzuki Motor Corp 車載用距離測定装置
JP2001318149A (ja) * 2000-03-02 2001-11-16 Denso Corp 車両用前方情報検出装置
JP2001310679A (ja) * 2000-04-28 2001-11-06 Isuzu Motors Ltd バックミラー装置
JP2003066144A (ja) * 2001-08-23 2003-03-05 Omron Corp 対象物検出装置および方法
JP2005505074A (ja) * 2001-10-05 2005-02-17 ローベルト ボッシュ ゲゼルシャフト ミット ベシュレンクテル ハフツング 対象検出装置
JP2009065483A (ja) * 2007-09-06 2009-03-26 Denso Corp 駐車支援装置
JP2009210485A (ja) * 2008-03-05 2009-09-17 Honda Motor Co Ltd 車両用走行安全装置
JP2010008280A (ja) * 2008-06-27 2010-01-14 Toyota Motor Corp 物体検出装置
JP2016107985A (ja) * 2014-12-05 2016-06-20 メクラ・ラング・ゲーエムベーハー・ウント・コー・カーゲーMEKRA Lang GmbH & Co. KG ビジュアルシステム
JP2016113054A (ja) * 2014-12-16 2016-06-23 祥忠 川越 サイドリアビュー確認装置
JP2016223963A (ja) * 2015-06-02 2016-12-28 日立建機株式会社 鉱山用作業機械

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021056060A (ja) * 2019-09-30 2021-04-08 株式会社デンソー 車両に搭載されているライダーの傾きの検出装置、および車両に搭載されているライダーの傾きの検出方法
JP7439434B2 (ja) 2019-09-30 2024-02-28 株式会社デンソー 車両に搭載されているライダーの傾きの検出装置、および車両に搭載されているライダーの傾きの検出方法
JP2021155033A (ja) * 2020-03-20 2021-10-07 メクラ・ラング・ゲーエムベーハー・ウント・コー・カーゲーMEKRA Lang GmbH & Co. KG 車両用のビューイングシステム、およびビューイングシステムによって表示される画像領域間で切り換える方法
JP7158521B2 (ja) 2020-03-20 2022-10-21 メクラ・ラング・ゲーエムベーハー・ウント・コー・カーゲー 車両用のビューイングシステム、およびビューイングシステムによって表示される画像領域間で切り換える方法
CN114347916A (zh) * 2020-10-12 2022-04-15 丰田自动车株式会社 车辆用传感器安装构造
JP2022063471A (ja) * 2020-10-12 2022-04-22 トヨタ自動車株式会社 車両用センサ取り付け構造

Also Published As

Publication number Publication date
JPWO2019172117A1 (ja) 2021-03-11
JP7288895B2 (ja) 2023-06-08

Similar Documents

Publication Publication Date Title
US10481271B2 (en) Automotive lighting system for a vehicle
US6888447B2 (en) Obstacle detection device for vehicle and method thereof
US7158015B2 (en) Vision-based method and system for automotive parking aid, reversing aid, and pre-collision sensing application
KR20200047886A (ko) 운전자 보조 시스템 및 그 제어방법
JP6512164B2 (ja) 物体検出装置、物体検出方法
US20150154802A1 (en) Augmented reality lane change assistant system using projection unit
WO2019172117A1 (fr) Système de capteur et dispositif de génération de données d'image
US11345279B2 (en) Device and method for warning a driver of a vehicle
US9251709B2 (en) Lateral vehicle contact warning system
KR102352464B1 (ko) 운전자 보조 시스템 및 그 제어 방법
JP2006318093A (ja) 車両用移動物体検出装置
US20200031273A1 (en) System for exchanging information between vehicles and control method thereof
US10832438B2 (en) Object distancing system for a vehicle
US20200353919A1 (en) Target detection device for vehicle
JP2008008679A (ja) 物体検出装置、衝突予測装置、及び車両制御装置
JP2010162975A (ja) 車両制御システム
WO2016013167A1 (fr) Dispositif de commande d'affichage de véhicule
US11794740B2 (en) Smart cruise control system and method of controlling the same
JP6344260B2 (ja) 障害物検出装置
JP4284652B2 (ja) レーダ装置
CN110497861B (zh) 传感器系统及检查方法
JP2014106635A (ja) 車両用灯具システム
JP4294450B2 (ja) 車両用運転支援装置
CN210116465U (zh) 传感器系统
JP7356451B2 (ja) センサシステム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19764658

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020504983

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19764658

Country of ref document: EP

Kind code of ref document: A1