WO2024024148A1 - In-vehicle monitoring device, information processing device, and in-vehicle monitoring system - Google Patents

In-vehicle monitoring device, information processing device, and in-vehicle monitoring system

Info

Publication number
WO2024024148A1
Authority
WO
WIPO (PCT)
Prior art keywords
imaging
vehicle
unit
image data
power consumption
Prior art date
Application number
PCT/JP2023/007277
Other languages
English (en)
Japanese (ja)
Inventor
一騎 原
順一 坂本
慎弥 角倉
卓義 小曽根
Original Assignee
Sony Semiconductor Solutions Corporation
Priority date
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corporation
Publication of WO2024024148A1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R25/00Fittings or systems for preventing or indicating unauthorised use or theft of vehicles
    • B60R25/30Detection related to theft or to other events relevant to anti-theft systems
    • B60R25/31Detection related to theft or to other events relevant to anti-theft systems of human presence inside or outside the vehicle
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B15/00Special procedures for taking photographs; Apparatus therefor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/65Control of camera operation in relation to power supply
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/667Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Definitions

  • the present disclosure relates to an in-vehicle monitoring device, an information processing device, and an in-vehicle monitoring system.
  • An in-vehicle monitoring system uses an image sensor to constantly monitor the surrounding situation of a vehicle parked in a parking lot or the like.
  • In-vehicle monitoring systems, for example, use multiple image sensors to constantly capture images in the front, rear, left, and right directions of a vehicle, and monitor the surroundings of the vehicle by analyzing or recording the captured images through image processing under the control of an ECU (Electronic Control Unit) mounted on the vehicle.
  • Patent Document 1 discloses a technique in which cameras mounted as electronic mirrors provided on the left and right sides of the vehicle are used to capture images in the left-right direction of the vehicle.
  • In such systems, the image sensor and ECU operate constantly while the vehicle is parked, in the same way as when the vehicle is in operation. This consumes a large amount of power and drains the battery, making it difficult to monitor parking for long periods of time.
  • An object of the present disclosure is to provide an in-vehicle monitoring device, an information processing device, and an in-vehicle monitoring system that can suppress power consumption when monitoring the surrounding situation of a parked vehicle.
  • An in-vehicle monitoring device is provided in a vehicle including a side mirror housing, and includes an imaging section that generates image data in response to imaging, and a control section that controls the imaging operation and imaging direction of the imaging section.
  • The control unit controls the imaging direction of the imaging unit according to the state of the side mirror housing, sets the imaging operation of the imaging unit to a first power consumption mode in response to a determination that the ignition of the vehicle has been turned off, and sets the imaging operation to a second power consumption mode, which consumes more power than the first power consumption mode, when movement of a peripheral object is detected based on first image data generated in the first power consumption mode.
  • FIG. 1 is a block diagram schematically showing the configuration of an example of an on-vehicle monitoring system.
  • FIG. 2 is a schematic diagram for explaining the operation of the in-vehicle monitoring system.
  • FIG. 1 is a block diagram illustrating a configuration example of a vehicle control system, which is an example of a mobile device control system applicable to each embodiment of the present disclosure.
  • FIG. 3 is a diagram illustrating an example of a sensing area by an external recognition sensor mounted on a vehicle, which is applicable to each embodiment of the present disclosure.
  • FIG. 1 is a block diagram schematically showing the configuration of an example of an on-vehicle monitoring device according to a first embodiment.
  • FIG. 3 is a schematic diagram for explaining low resolution, medium resolution, and high resolution image data according to the first embodiment.
  • FIG. 3 is a schematic diagram for explaining the operation of the in-vehicle monitoring device according to the first embodiment.
  • FIG. 3 is a schematic diagram for explaining each operation mode and its transition in the in-vehicle monitoring device according to the first embodiment.
  • FIG. 1 is a block diagram showing the configuration of an example of an image sensor according to the first embodiment.
  • FIG. 1 is a perspective view schematically showing the structure of an example of an image sensor according to the first embodiment.
  • FIG. 2 is a block diagram showing in more detail the configuration of an example of an imaging unit applicable to the first embodiment.
  • FIG. 2 is a schematic diagram showing an example of a Bayer array.
  • It is a flowchart of an example showing the parking monitoring process according to the first embodiment.
  • FIG. 3 is a flowchart of an example of image data recording processing according to the first embodiment.
  • FIG. 1 is a schematic diagram schematically showing a side mirror camera according to an existing technique.
  • FIG. 2 is a schematic diagram illustrating an example of an imaging range of a side mirror camera and a captured image when a side mirror is deployed according to an existing technique.
  • FIG. 2 is a schematic diagram showing an example of an imaging range of a side mirror camera and a captured image when a side mirror is retracted according to an existing technique.
  • FIG. 1 is a schematic diagram schematically showing a side mirror camera according to a first embodiment.
  • FIG. 3 is a schematic diagram for explaining control of the imaging direction of the side mirror camera according to the first embodiment.
  • FIG. 2 is a block diagram showing an example of a configuration of a camera fixing jig driving section that drives the camera fixing jig according to the first embodiment.
  • FIG. 7 is a flowchart of an example showing control of the imaging direction of the side mirror camera according to the first embodiment.
  • FIG. 7 is a block diagram showing an example of a configuration of a camera fixing jig driving section that drives a camera fixing jig according to a first modification of the first embodiment.
  • FIG. 12 is a flowchart of an example showing control of the imaging direction of the side mirror camera according to the first modification of the first embodiment.
  • FIG. 7 is a schematic diagram showing the principle of a detection method using IR light according to a second modification of the first embodiment.
  • FIG. 2 is a schematic diagram showing an example of the arrangement of color filters including an IR filter in a color filter section.
  • FIG. 2 is a schematic diagram showing an example of spectral characteristics of a sensor in which color filters that transmit light in each wavelength region of each RGB color and an IR filter that transmits light in an IR wavelength region are arranged.
  • FIG. 3 is a schematic diagram showing an example of spectral characteristics of a dual bandpass filter.
  • FIG. 3 is a block diagram showing the configuration of an example of a vehicle-mounted monitoring device according to a third modification of the first embodiment.
  • It is a block diagram showing the configuration of an example of the vehicle-mounted monitoring device according to a fourth modification of the first embodiment.
  • FIG. 7 is a schematic diagram for explaining control of the imaging direction according to the second embodiment.
  • FIG. 7 is a schematic diagram for explaining control of the imaging direction according to the third embodiment.
  • the present disclosure relates to an in-vehicle monitoring system that implements a parking monitoring function that monitors the surroundings of a parked vehicle based on image data captured by a camera mounted on the vehicle.
  • In the parking monitoring mode, which realizes the parking monitoring function based on image data captured by a surround camera mounted on the vehicle, the in-vehicle monitoring system controls the imaging operation of the surround camera in stages, from an imaging operation mode with low power consumption to an imaging operation mode with higher power consumption.
  • the in-vehicle monitoring system first sets the imaging operation by the surround camera to the first power consumption mode in which power consumption is low.
  • the in-vehicle monitoring system performs moving object detection based on image data captured by a surround camera in the first power consumption mode, and detects movement of peripheral objects that are objects around the vehicle.
  • the vehicle-mounted monitoring system sets the imaging operation by the surround camera to the second power consumption mode, which consumes more power than the first power consumption mode, when the movement of a surrounding object is detected by the moving object detection.
  • the vehicle-mounted monitoring system performs human detection based on image data captured by a surround camera in the second power consumption mode, and detects people around the vehicle.
  • When a person around the vehicle is detected by the human detection, the imaging operation by the surround camera is set to a third power consumption mode that consumes more power than the second power consumption mode.
  • the vehicle-mounted monitoring system records image data captured by the surround camera on a recording medium in the third power consumption mode.
  • In this way, the in-vehicle monitoring system controls the imaging operation mode of the surround camera in stages, from a low power consumption mode to higher power consumption modes, based on the image data. This makes it possible to suppress the power consumption of the device and to realize parking monitoring over a longer period.
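  • As a rough illustration of this staged control, the following sketch models the three power consumption modes as a simple state machine driven by the detection results. The class and function names (ParkingMonitor, detect_motion, detect_person, record_frame) are hypothetical and not taken from the disclosure; the detectors are assumed to be supplied as callables operating on the frames captured in the respective mode.

```python
from enum import Enum


class PowerMode(Enum):
    LOW = 1     # first power consumption mode: moving object detection
    MEDIUM = 2  # second power consumption mode: human detection
    HIGH = 3    # third power consumption mode: recording


class ParkingMonitor:
    """Hypothetical sketch of the staged mode control described above."""

    def __init__(self, detect_motion, detect_person, record_frame):
        self.mode = PowerMode.LOW
        self.detect_motion = detect_motion  # runs on low-resolution frames
        self.detect_person = detect_person  # runs on medium-resolution frames
        self.record_frame = record_frame    # stores high-resolution frames

    def on_frame(self, frame):
        if self.mode is PowerMode.LOW:
            if self.detect_motion(frame):
                self.mode = PowerMode.MEDIUM   # movement seen: step up
        elif self.mode is PowerMode.MEDIUM:
            if self.detect_person(frame):
                self.mode = PowerMode.HIGH     # person seen: step up again
            # a real system would also step back down after a timeout
        else:  # PowerMode.HIGH
            self.record_frame(frame)           # keep recording while in this mode
```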
  • In addition, the in-vehicle monitoring system controls the imaging direction of the side mirror camera, which is mounted on a side mirror among the surround cameras, so that it remains constant regardless of the state of the side mirror. More specifically, the in-vehicle monitoring system controls the side mirror camera so that its imaging direction when the side mirror is folded and stored matches its imaging direction when the side mirror is deployed, which is the normally used state.
  • By controlling the imaging direction of the side mirror camera to be constant regardless of the state of the side mirror, the in-vehicle monitoring system can suppress blind spots of the side mirror camera while the vehicle is parked. Therefore, according to the in-vehicle monitoring system of each embodiment of the present disclosure, 360° parking monitoring around the vehicle can be realized even if the side mirrors are retracted while the vehicle is parked.
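  • One straightforward way to keep the imaging direction constant is to counter-rotate the camera within the mirror housing by the fold angle of the housing. The sketch below assumes the fold angle is available (for example from the body system control) and that a pan actuator accepts an absolute angle command; both are illustrative assumptions, not details of the disclosure.

```python
def camera_pan_command(mirror_fold_angle_deg: float,
                       deployed_pan_deg: float = 0.0) -> float:
    """Pan angle (relative to the mirror housing) that keeps the camera
    pointing in the same world-frame direction as in the deployed state.

    mirror_fold_angle_deg is assumed to be 0 when the mirror is deployed and
    to grow as the housing folds toward the vehicle body."""
    # Counter-rotate by the fold angle so the housing rotation cancels out.
    return deployed_pan_deg - mirror_fold_angle_deg


# Example: with the housing folded 60 degrees toward the body, the camera is
# panned -60 degrees within the housing, so its world-frame imaging direction
# matches the deployed state.
assert camera_pan_command(60.0) == -60.0
```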
  • FIG. 1 is a block diagram schematically showing the configuration of an example of an on-vehicle monitoring system.
  • an on-vehicle monitoring system 3000 includes an image sensor 3100 and some functions of an ECU (Electronic Control Unit) 3200, which are each mounted on a vehicle.
  • The image sensor 3100 includes an imaging device and a drive circuit that drives the imaging device, and outputs image data acquired by imaging.
  • ECU 3200 may control the entire vehicle (center), or may control a part of the vehicle (zone).
  • ECU 3200 may include a versatile processor such as a CPU (Central Processing Unit) or a processor specialized for image processing such as an ISP (Image Signal Processor).
  • Imaging is performed by the image sensor 3100, and the captured image data is passed to the ECU 3200.
  • ECU 3200 executes recognition processing based on the image data passed from image sensor 3100, and recognizes moving objects and people included in the image data.
  • ECU 3200 may further perform segmentation analysis of the image data, face recognition processing, and the like as the recognition processing.
  • ECU 3200 executes a determination process on the result of the recognition process and performs predetermined control according to the determination result. For example, if the ECU 3200 determines that an unregistered person is approaching the vehicle while it is parked, the ECU 3200 may transmit an alarm to the terminal device of the user of the vehicle.
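  • The determination step can be as simple as checking a recognized identity against a registry of known users and notifying the user's terminal otherwise. The following sketch is purely illustrative; the field names and the callable for delivering the notification are assumptions, not part of the disclosure.

```python
def handle_recognition(result, registered_ids, send_alarm):
    """result: hypothetical output of the recognition process,
    e.g. {"person": True, "face_id": "abc"}.
    send_alarm: callable that delivers a notification to the user's terminal."""
    if not result.get("person"):
        return
    face_id = result.get("face_id")
    if face_id not in registered_ids:
        # An unregistered person is near the parked vehicle: notify the user.
        send_alarm(f"Unregistered person detected near the vehicle (id={face_id})")
```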
  • FIG. 2 is a schematic diagram for explaining the operation of the in-vehicle monitoring system 3000.
  • the in-vehicle monitoring system 3000 captures an image using the image sensor 3100 in the parking monitoring mode.
  • the image sensor 3100 outputs image data 500 acquired by imaging in units of frames.
  • the ECU 3200 detects a moving object in image data 500 det output at time t det .
  • From the image data 500 of the frame following the image data 500 det in which the moving object was detected, the ECU 3200 may perform recognition processing or the like focusing on the area 501 in which the moving object was detected.
  • the vehicle-mounted monitoring system 3000 operates the image sensor 3100 at high resolution to obtain high-resolution image data as a captured image.
  • the high-resolution operation is, for example, an operation in which image data 500 is formed from each pixel data acquired by all pixels in an effective pixel area in an image sensor including a plurality of pixels.
  • section (b) schematically shows an example of power consumption of the image sensor 3100 and the ECU 3200.
  • the horizontal axis indicates time corresponding to section (a) in the figure, and the vertical axis indicates power consumption.
  • the relationship in power consumption between ECU 3200 and image sensor 3100 is not limited to this example.
  • ECU 3200 and image sensor 3100 are each kept in an active state at all times during the parking monitoring mode, and as shown by characteristic lines 502 and 503, power consumption is substantially constant during the parking monitoring mode.
  • FIG. 3 is a block diagram showing a configuration example of a vehicle control system 10011, which is an example of a mobile device control system applicable to each embodiment of the present disclosure.
  • the vehicle control system 10011 is provided in the vehicle 10000 and performs processing related to driving support, automatic driving, and parking monitoring of the vehicle 10000.
  • The vehicle control system 10011 includes a vehicle control ECU 10021, a communication section 10022, a map information storage section 10023, a position information acquisition section 10024, an external recognition sensor 10025, an in-vehicle sensor 10026, a vehicle sensor 10027, a storage section 10028, a driving support/automatic driving control section 10029, a DMS (Driver Monitoring System) 10030, an HMI (Human Machine Interface) 10031, and a vehicle control unit 10032.
  • The vehicle control ECU 10021, communication unit 10022, map information storage unit 10023, location information acquisition unit 10024, external recognition sensor 10025, interior sensor 10026, vehicle sensor 10027, storage unit 10028, driving support/automatic driving control unit 10029, driver monitoring system (DMS) 10030, human machine interface (HMI) 10031, and vehicle control unit 10032 are connected so as to be able to communicate with each other via a communication network 10041.
  • The communication network 10041 consists of, for example, an in-vehicle communication network, a bus, or the like compliant with a digital two-way communication standard such as CAN (Controller Area Network), LIN (Local Interconnect Network), LAN (Local Area Network), FlexRay (registered trademark), or Ethernet (registered trademark).
  • Different communication networks 10041 may be used depending on the type of data to be transmitted; for example, CAN may be applied to data related to vehicle control, and Ethernet may be applied to large-capacity data.
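  • As a toy illustration of selecting an in-vehicle network per data type, the mapping below routes control-related messages to CAN and bulk data such as camera frames to Ethernet. The message categories and the default choice are placeholders assumed for the example.

```python
def select_bus(data_kind: str) -> str:
    """Pick an in-vehicle network for a message category (illustrative only)."""
    control_kinds = {"steering", "braking", "ignition", "mirror_fold"}
    bulk_kinds = {"camera_frame", "map_tile", "software_update"}
    if data_kind in control_kinds:
        return "CAN"       # low-latency, small payloads
    if data_kind in bulk_kinds:
        return "Ethernet"  # large-capacity data
    return "CAN"           # conservative default for unknown categories


assert select_bus("camera_frame") == "Ethernet"
assert select_bus("braking") == "CAN"
```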
  • Each part of the vehicle control system 10011 may also be connected directly, without going through the communication network 10041, using wireless communication intended for relatively short-distance communication, such as near field communication (NFC) or Bluetooth (registered trademark).
  • the vehicle control ECU 10021 is composed of various processors such as a CPU (Central Processing Unit) and an MPU (Micro Processing Unit). Vehicle control ECU 10021 may include an ISP. Vehicle control ECU 10021 controls the entire or part of the functions of vehicle control system 10011. Further, the vehicle control system 10011 may include a plurality of ECUs.
  • the communication unit 10022 communicates with various devices inside and outside the vehicle, other vehicles, servers, base stations, etc., and sends and receives various data. At this time, the communication unit 10022 can perform communication using a plurality of communication methods.
  • The communication unit 10022 communicates with a server on an external network (hereinafter referred to as an external server) via a base station or an access point, using a wireless communication method such as 5G (fifth generation mobile communication system), LTE (Long Term Evolution), or DSRC (Dedicated Short Range Communications).
  • the external network with which the communication unit 10022 communicates is, for example, the Internet, a cloud network, or a network unique to the operator.
  • the communication method that the communication unit 10022 performs with the external network is not particularly limited as long as it is a wireless communication method that allows digital two-way communication at a communication speed of a predetermined rate or higher and over a predetermined distance or longer.
  • the communication unit 10022 can communicate with a terminal located near the own vehicle using P2P (Peer To Peer) technology.
  • Terminals that exist near the own vehicle include, for example, terminals worn by moving objects that move at relatively low speeds, such as pedestrians and bicycles, terminals installed at fixed positions in stores and the like, and MTC (Machine Type Communication) terminals.
  • the communication unit 10022 can also perform V2X communication.
  • V2X communication refers to communication between the own vehicle and others, such as vehicle-to-vehicle communication with other vehicles, vehicle-to-infrastructure communication with roadside equipment, vehicle-to-home communication, and vehicle-to-pedestrian communication with terminals carried by pedestrians.
  • the communication unit 10022 can receive, for example, a program for updating software that controls the operation of the vehicle control system 10011 from the outside (over the air).
  • the communication unit 10022 can further receive map information, traffic information, information around the vehicle 10000, etc. from the outside. Further, for example, the communication unit 10022 can transmit information regarding the vehicle 10000, information around the vehicle 10000, etc. to the outside.
  • the information regarding the vehicle 10000 that the communication unit 10022 transmits to the outside includes, for example, data indicating the state of the vehicle 10000, recognition results by the recognition unit 10073, and the like. Further, for example, the communication unit 10022 performs communication compatible with a vehicle emergency notification system such as e-call.
  • the communication unit 10022 receives electromagnetic waves transmitted by a road traffic information and communication system (VICS (Vehicle Information and Communication System) (registered trademark)) such as a radio beacon, an optical beacon, and FM multiplex broadcasting.
  • the communication unit 10022 can communicate with each device in the vehicle using, for example, wireless communication.
  • the communication unit 10022 performs wireless communication with devices in the vehicle using a communication method such as wireless LAN, Bluetooth, NFC, or WUSB (Wireless USB) that allows digital two-way communication at a communication speed higher than a predetermined communication speed. Can be done.
  • the communication unit 10022 is not limited to this, and can also communicate with each device in the vehicle using wired communication.
  • the communication unit 10022 can communicate with each device in the vehicle through wired communication via a cable connected to a connection terminal (not shown).
  • The communication unit 10022 can communicate with each device in the car using a wired communication method that allows digital two-way communication at a predetermined communication speed or higher, such as USB (Universal Serial Bus), HDMI (High-Definition Multimedia Interface) (registered trademark), or MHL (Mobile High-definition Link).
  • in-vehicle equipment refers to, for example, equipment that is not connected to the communication network 10041 inside the car.
  • in-vehicle devices include mobile devices and wearable devices owned by passengers such as drivers, information devices brought into the vehicle and temporarily installed, and the like.
  • the map information storage unit 10023 stores one or both of a map acquired from the outside and a map created by the vehicle 10000.
  • The map information storage unit 10023 stores, for example, a three-dimensional high-precision map and a global map that is less accurate than the high-precision map but covers a wide area.
  • Examples of high-precision maps include dynamic maps, point cloud maps, vector maps, etc.
  • the dynamic map is, for example, a map consisting of four layers of dynamic information, semi-dynamic information, semi-static information, and static information, and is provided to the vehicle 10000 from an external server or the like.
  • a point cloud map is a map composed of point clouds (point cloud data).
  • a vector map is a map that is compatible with ADAS (Advanced Driver Assistance System) and AD (Autonomous Driving) by associating traffic information such as lanes and traffic light positions with a point cloud map.
  • the point cloud map and vector map may be provided, for example, from an external server, or may be provided as a local map (described later) based on sensing results from a camera 10051, radar 10052, LiDAR (Laser Imaging Detection and Ranging) 10053, etc.
  • the map may be created by the vehicle 10000 and stored in the map information storage unit 10023 as a map for performing matching.
  • In order to reduce the communication capacity, map data of, for example, several hundred meters square regarding the planned route that the vehicle 10000 will travel is acquired from the external server or the like.
  • the position information acquisition unit 10024 receives a GNSS signal from a GNSS (Global Navigation Satellite System) satellite and acquires the position information of the vehicle 10000.
  • the acquired position information is supplied to the driving support/automatic driving control unit 10029.
  • the location information acquisition unit 10024 is not limited to a method using a GNSS signal, and may acquire location information using a beacon, for example.
  • the external recognition sensor 10025 includes various sensors used to recognize the external situation of the vehicle 10000, and supplies sensor data from each sensor to each part of the vehicle control system 10011.
  • the type and number of sensors included in the external recognition sensor 10025 are arbitrary.
  • the external recognition sensor 10025 includes a camera 10051, a radar 10052, a LiDAR 10053, and an ultrasonic sensor 10054.
  • the configuration is not limited to this, and the external recognition sensor 10025 may include one or more types of sensors among a camera 10051, a radar 10052, a LiDAR 10053, and an ultrasonic sensor 10054.
  • the number of cameras 10051, radar 10052, LiDAR 10053, and ultrasonic sensors 10054 is not particularly limited as long as it can be realistically installed in vehicle 10000.
  • the types of sensors included in the external recognition sensor 10025 are not limited to this example, and the external recognition sensor 10025 may include other types of sensors. Examples of sensing areas of each sensor included in the external recognition sensor 10025 will be described later.
  • the shooting method of the camera 10051 is not particularly limited.
  • cameras with various imaging methods such as a ToF (Time Of Flight) camera, a stereo camera, a monocular camera, and an infrared camera that can perform distance measurement can be applied to the camera 10051 as necessary.
  • the camera 10051 is not limited to this, and the camera 10051 may be used simply to obtain a captured image, regardless of distance measurement.
  • a plurality of cameras 10051 are mounted on the vehicle 10000.
  • at least one camera 10051 may be provided at the front, rear, left side, and right side of the vehicle 10000 so that images of 360° around the vehicle 10000 can be obtained.
  • cameras 10051 may be provided on the left and right side mirrors on each side.
  • the external recognition sensor 10025 can include an environment sensor for detecting the environment for the vehicle 10000.
  • The environmental sensor is a sensor for detecting the environment, such as the weather, meteorological conditions, and brightness, and can include various sensors such as a raindrop sensor, a fog sensor, a sunlight sensor, a snow sensor, and an illuminance sensor.
  • the external recognition sensor 10025 includes a microphone used to detect sounds around the vehicle 10000 and the position of a sound source.
  • the in-vehicle sensor 10026 includes various sensors for detecting information inside the vehicle, and supplies sensor data from each sensor to each part of the vehicle control system 10011.
  • the types and number of various sensors included in the in-vehicle sensor 10026 are not particularly limited as long as they can be realistically installed in the vehicle 10000.
  • the in-vehicle sensor 10026 can include one or more types of sensors among a camera, radar, seating sensor, steering wheel sensor, microphone, and biological sensor.
  • As the camera included in the in-vehicle sensor 10026, it is possible to use cameras of various shooting methods capable of distance measurement, such as a ToF camera, a stereo camera, a monocular camera, and an infrared camera.
  • the present invention is not limited to this, and the camera included in the in-vehicle sensor 10026 may simply be used to acquire captured images, regardless of distance measurement.
  • the camera included in the in-vehicle sensor 10026 will be appropriately referred to as an in-vehicle camera.
  • the in-vehicle camera may be configured such that its imaging direction can be changed under the control of the vehicle control ECU 10021, for example.
  • a biosensor included in the in-vehicle sensor 10026 is provided, for example, on a seat or a steering wheel, and detects various biometric information of a passenger such as a driver.
  • the vehicle sensor 10027 includes various sensors for detecting the state of the vehicle 10000, and supplies sensor data from each sensor to each part of the vehicle control system 10011.
  • the types and number of various sensors included in the vehicle sensor 10027 are not particularly limited as long as they can be realistically installed in the vehicle 10000.
  • the vehicle sensor 10027 includes a speed sensor, an acceleration sensor, an angular velocity sensor (gyro sensor), and an inertial measurement unit (IMU) that integrates these.
  • the vehicle sensor 10027 includes a steering angle sensor that detects the steering angle of the steering wheel, a yaw rate sensor, an accelerator sensor that detects the amount of operation of the accelerator pedal, and a brake sensor that detects the amount of operation of the brake pedal.
  • The vehicle sensor 10027 includes a rotation sensor that detects the rotation speed of the engine or motor, an air pressure sensor that detects tire air pressure, a slip rate sensor that detects the tire slip rate, and a wheel speed sensor that detects the rotation speed of the wheels.
  • the vehicle sensor 10027 includes a battery sensor that detects the remaining battery power and temperature, and an impact sensor that detects an external impact.
  • the storage unit 10028 includes at least one of a nonvolatile storage medium and a volatile storage medium, and stores data and programs.
  • As the storage unit 10028, for example, an EEPROM (Electrically Erasable Programmable Read Only Memory) and a RAM (Random Access Memory) can be used, and as the storage medium, a magnetic storage device such as an HDD (Hard Disc Drive), a semiconductor storage device such as a flash memory, an optical storage device, or a magneto-optical storage device can be applied.
  • the storage unit 10028 stores various programs and data used by each unit of the vehicle control system 10011.
  • the storage unit 10028 includes an EDR (Event Data Recorder) and a DSSAD (Data Storage System for Automated Driving), and stores information about the vehicle 10000 before and after an event such as an accident and information acquired by the in-vehicle sensor 10026.
  • the driving support/automatic driving control unit 10029 controls driving support and automatic driving of the vehicle 10000.
  • the driving support/automatic driving control unit 10029 includes an analysis unit 10061, an action planning unit 10062, and an operation control unit 10063.
  • the analysis unit 10061 performs analysis processing of the vehicle 10000 and the surrounding situation.
  • the analysis section 10061 includes a self-position estimation section 10071, a sensor fusion section 10072, and a recognition section 10073.
  • the self-position estimation unit 10071 estimates the self-position of the vehicle 10000 based on the sensor data from the external recognition sensor 10025 and the high-precision map stored in the map information storage unit 10023. For example, the self-position estimating unit 10071 estimates the self-position of the vehicle 10000 by generating a local map based on sensor data from the external recognition sensor 10025 and matching the local map with a high-precision map.
  • The position of the vehicle 10000 is based on, for example, the center of the rear wheel axle.
  • the local map is, for example, a three-dimensional high-precision map created using a technology such as SLAM (Simultaneous Localization And Mapping), an occupancy grid map, or the like.
  • the three-dimensional high-precision map is, for example, the point cloud map described above.
  • The occupancy grid map is a map that divides the three-dimensional or two-dimensional space around the vehicle 10000 into grid cells of a predetermined size and shows the occupancy state of objects in each cell.
  • the occupancy state of an object is indicated by, for example, the presence or absence of the object and the probability of its existence.
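  • A two-dimensional occupancy grid of the kind described here can be represented as an array of per-cell occupancy probabilities. The grid size, cell size, and update rule in the sketch below are illustrative assumptions, not values from the disclosure.

```python
import numpy as np


class OccupancyGrid:
    """Minimal 2-D occupancy grid: each cell holds the probability that an
    object occupies it (0.5 means unknown)."""

    def __init__(self, size_m: float = 40.0, cell_m: float = 0.5):
        n = int(size_m / cell_m)
        self.cell_m = cell_m
        self.origin_m = size_m / 2.0        # vehicle at the grid centre
        self.prob = np.full((n, n), 0.5)    # prior: unknown everywhere

    def _index(self, x_m: float, y_m: float):
        return (int((x_m + self.origin_m) / self.cell_m),
                int((y_m + self.origin_m) / self.cell_m))

    def mark_occupied(self, x_m: float, y_m: float, hit_weight: float = 0.2):
        """Nudge the cell containing (x_m, y_m) toward 'occupied'."""
        i, j = self._index(x_m, y_m)
        self.prob[i, j] = min(1.0, self.prob[i, j] + hit_weight)


grid = OccupancyGrid()
grid.mark_occupied(3.2, -1.5)  # e.g. a LiDAR return 3.2 m ahead, 1.5 m to the right
```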
  • the local map is also used, for example, in detection processing and recognition processing of the external situation of the vehicle 10000 by the recognition unit 10073.
  • the self-position estimation unit 10071 may estimate the self-position of the vehicle 10000 based on the position information acquired by the position information acquisition unit 10024 and sensor data from the vehicle sensor 10027.
  • The sensor fusion unit 10072 performs sensor fusion processing to obtain new information by combining multiple different types of sensor data (for example, image data supplied from the camera 10051 and sensor data supplied from the radar 10052).
  • Methods for combining different types of sensor data include integration, fusion, and federation.
  • the recognition unit 10073 executes a detection process for detecting a situation outside the vehicle 10000 and a recognition process for recognizing a situation outside the vehicle 10000.
  • The recognition unit 10073 performs detection processing and recognition processing of the external situation of the vehicle 10000 based on information from the external recognition sensor 10025, information from the self-position estimation unit 10071, information from the sensor fusion unit 10072, etc.
  • the recognition unit 10073 performs detection processing and recognition processing of objects around the vehicle 10000.
  • the object detection process is, for example, a process of detecting the presence, size, shape, position, movement, etc. of an object.
  • the object recognition process is, for example, a process of recognizing attributes such as the type of an object or identifying a specific object.
  • detection processing and recognition processing are not necessarily clearly separated, and may overlap.
  • For example, the recognition unit 10073 detects objects around the vehicle 10000 by performing clustering that classifies point clouds based on sensor data from the radar 10052, the LiDAR 10053, and the like into blocks of points. As a result, the presence, size, shape, and position of objects around the vehicle 10000 are detected.
  • the recognition unit 10073 detects the movement of objects around the vehicle 10000 by performing tracking that follows the movement of a group of points classified by clustering. As a result, the speed and traveling direction (movement vector) of objects around the vehicle 10000 are detected.
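  • A minimal version of this clustering-and-tracking step can cluster the points of each frame and estimate a movement vector from the displacement of each cluster centroid between frames. The use of DBSCAN, the parameter values, and the frame period below are assumptions made for the illustration, not the method of the disclosure.

```python
import numpy as np
from sklearn.cluster import DBSCAN


def cluster_centroids(points, eps=0.7, min_samples=5):
    """points: (N, 2) array of x/y returns from radar or LiDAR.
    Returns a list of cluster centroids; noise points are ignored."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit(points).labels_
    return [points[labels == k].mean(axis=0) for k in set(labels) if k != -1]


def movement_vectors(prev_centroids, curr_centroids, dt=0.1):
    """Associate each current centroid with its nearest previous centroid and
    return (speed, heading) pairs; dt is the assumed frame period in seconds."""
    results = []
    for c in curr_centroids:
        if not prev_centroids:
            break
        p = min(prev_centroids, key=lambda q: np.linalg.norm(c - q))
        v = (c - p) / dt
        results.append((float(np.linalg.norm(v)), float(np.arctan2(v[1], v[0]))))
    return results
```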
  • the recognition unit 10073 detects or recognizes vehicles, people, bicycles, obstacles, structures, roads, traffic lights, traffic signs, road markings, etc. based on the image data supplied from the camera 10051. Further, the recognition unit 10073 may recognize the types of objects around the vehicle 10000 by performing recognition processing such as semantic segmentation.
  • The recognition unit 10073 can perform recognition processing of traffic rules around the vehicle 10000 using the map stored in the map information storage unit 10023, the self-position estimation result by the self-position estimation unit 10071, and the recognition result of objects around the vehicle 10000 by the recognition unit 10073. Through this processing, the recognition unit 10073 can recognize the positions and states of traffic lights, the contents of traffic signs and road markings, the contents of traffic regulations, and the lanes in which the vehicle can travel.
  • the recognition unit 10073 can perform recognition processing of the environment around the vehicle 10000.
  • the surrounding environment to be recognized by the recognition unit 10073 includes weather, temperature, humidity, brightness, road surface conditions, and the like.
  • the action planning unit 10062 creates an action plan for the vehicle 10000.
  • the action planning unit 10062 creates an action plan by performing route planning and route following processing.
  • Global path planning is a process of planning a rough route from the start to the goal. This route planning also includes trajectory planning (local path planning), which generates, on the planned route, a trajectory along which the vehicle 10000 can proceed safely and smoothly in its vicinity, taking the motion characteristics of the vehicle 10000 into consideration.
  • Route following is a process of planning actions to safely and accurately travel the route planned by route planning within the planned time.
  • the action planning unit 10062 can calculate the target speed and target angular velocity of the vehicle 10000, for example, based on the result of this route following process.
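  • One common way to turn the followed route into a target speed and a target angular velocity is a simple proportional heading controller toward the next waypoint. The sketch below is only an illustration of the kind of output described here, under assumed gains and units, and is not the planner of the disclosure.

```python
import math


def follow_waypoint(pose, waypoint, cruise_speed=5.0, k_heading=1.5):
    """pose: (x, y, yaw) of the vehicle; waypoint: (x, y) on the planned route.
    Returns (target_speed [m/s], target_angular_velocity [rad/s])."""
    dx, dy = waypoint[0] - pose[0], waypoint[1] - pose[1]
    heading_error = math.atan2(dy, dx) - pose[2]
    # Wrap the error to [-pi, pi] so the controller turns the short way round.
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    # Slow down when the waypoint lies far off the current heading.
    target_speed = cruise_speed * max(0.2, math.cos(heading_error))
    return target_speed, k_heading * heading_error


speed, omega = follow_waypoint((0.0, 0.0, 0.0), (10.0, 2.0))
```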
  • the motion control unit 10063 controls the motion of the vehicle 10000 in order to realize the action plan created by the action planning unit 10062.
  • For example, the operation control unit 10063 controls a steering control unit 10081, a brake control unit 10082, and a drive control unit 10083 included in the vehicle control unit 10032, which will be described later, and performs acceleration/deceleration control and direction control so that the vehicle 10000 travels along the trajectory calculated by the trajectory planning.
  • the operation control unit 10063 performs cooperative control aimed at realizing ADAS functions such as collision avoidance or shock mitigation, follow-up driving, vehicle speed maintenance driving, self-vehicle collision warning, and self-vehicle lane deviation warning.
  • the operation control unit 10063 performs cooperative control for the purpose of automatic driving in which the vehicle autonomously travels without depending on the driver's operation.
  • the DMS 10030 performs driver authentication processing, recognition processing of the driver's state, etc. based on sensor data from the in-vehicle sensor 10026 and input data input to the HMI 10031 (described later).
  • the driver's condition to be recognized includes, for example, physical condition, alertness level, concentration level, fatigue level, gaze direction, drunkenness level, driving operation, and posture.
  • the DMS 10030 may perform authentication processing for passengers other than the driver and recognition processing for the state of the passenger. Further, for example, the DMS 10030 may perform recognition processing of the situation inside the vehicle based on sensor data from the in-vehicle sensor 10026.
  • the conditions inside the car that are subject to recognition include, for example, temperature, humidity, brightness, and odor.
  • the HMI 10031 inputs various data and instructions, and presents various data to the driver and the like.
  • HMI 10031 includes an input device for a person to input data.
  • the HMI 10031 generates input signals based on data, instructions, etc. input by an input device, and supplies them to each part of the vehicle control system 10011.
  • the HMI 10031 includes operators such as a touch panel, buttons, switches, and levers as input devices.
  • the present invention is not limited to this, and the HMI 10031 may further include an input device capable of inputting information by a method other than manual operation using voice, gesture, or the like.
  • the HMI 10031 may use, as an input device, an externally connected device such as a remote control device using infrared rays or radio waves, or a mobile device or wearable device compatible with the operation of the vehicle control system 10011.
  • the HMI 10031 generates visual information, auditory information, and tactile information for the passenger or the outside of the vehicle. Furthermore, the HMI 10031 performs output control to control the output, output content, output timing, output method, etc. of each generated information.
  • the HMI 10031 generates and outputs, as visual information, information shown by images and light, such as an operation screen, a status display of the vehicle 10000, a warning display, and a monitor image showing the surrounding situation of the vehicle 10000.
  • the HMI 10031 generates and outputs, as auditory information, information indicated by sounds such as audio guidance, warning sounds, and warning messages.
  • the HMI 10031 generates and outputs, as tactile information, information given to the passenger's tactile sense by, for example, force, vibration, movement, or the like.
  • As an output device for the HMI 10031 to output visual information, for example, a display device that presents visual information by displaying an image or a projector device that presents visual information by projecting an image can be applied.
  • Examples of display devices that display visual information within the passenger's field of vision include a head-up display, a transparent display, and a wearable device with an AR (Augmented Reality) function.
  • the HMI 10031 can also use a display device included in a navigation device, an instrument panel, a CMS (Camera Monitoring System), an electronic mirror, a lamp, etc. provided in the vehicle 10000 as an output device that outputs visual information.
  • As an output device through which the HMI 10031 outputs auditory information, for example, an audio speaker, headphones, or earphones can be applied.
  • a haptics element using haptics technology can be applied as an output device from which the HMI 10031 outputs tactile information.
  • the haptic element is provided at a portion of the vehicle 10000 that comes into contact with a passenger, such as a steering wheel or a seat, for example.
  • the vehicle control unit 10032 controls each part of the vehicle 10000.
  • the vehicle control section 10032 includes a steering control section 10081, a brake control section 10082, a drive control section 10083, a body system control section 10084, a light control section 10085, and a horn control section 10086.
  • the steering control unit 10081 detects and controls the state of the steering system of the vehicle 10000.
  • the steering system includes, for example, a steering mechanism including a steering wheel, an electric power steering, and the like.
  • the steering control unit 10081 includes, for example, a steering ECU that controls the steering system, an actuator that drives the steering system, and the like.
  • the brake control unit 10082 detects and controls the state of the brake system of the vehicle 10000.
  • the brake system includes, for example, a brake mechanism including a brake pedal, an ABS (Antilock Brake System), a regenerative brake mechanism, and the like.
  • the brake control unit 10082 includes, for example, a brake ECU that controls the brake system, an actuator that drives the brake system, and the like.
  • the drive control unit 10083 detects and controls the state of the drive system of the vehicle 10000.
  • the drive system includes, for example, an accelerator pedal, a drive force generation device such as an internal combustion engine or a drive motor, and a drive force transmission mechanism for transmitting the drive force to the wheels.
  • the drive control unit 10083 includes, for example, a drive ECU that controls the drive system, an actuator that drives the drive system, and the like.
  • the body system control unit 10084 detects and controls the state of the body system of the vehicle 10000.
  • the body system includes, for example, a keyless entry system, a smart key system, a power window device, a power seat, an air conditioner, an air bag, a seat belt, a shift lever, and the like.
  • the body system control unit 10084 includes, for example, a body system ECU that controls the body system, an actuator that drives the body system, and the like.
  • the body system control unit 10084 may, for example, control the ignition (ignition on/off) in response to a user operation. Further, the body system control unit 10084 may control opening/closing (expanding, retracting) operations of side mirrors.
  • the light control unit 10085 detects and controls the states of various lights on the vehicle 10000. Examples of lights to be controlled include headlights, backlights, fog lights, turn signals, brake lights, projections, and bumper displays.
  • the light control unit 10085 includes a light ECU that controls lights, an actuator that drives lights, and the like.
  • the horn control unit 10086 detects and controls the state of the car horn of the vehicle 10000.
  • the horn control unit 10086 includes, for example, a horn ECU that controls the car horn, an actuator that drives the car horn, and the like.
  • FIG. 4 is a diagram showing an example of a sensing area by the camera 10051, radar 10052, LiDAR 10053, ultrasonic sensor 10054, etc. of the external recognition sensor 10025 in FIG. 3.
  • A state in which the vehicle 10000 is viewed from above is schematically shown; in the figure, the lower end side is the front end (front) side of the vehicle 10000, and the upper end side is the rear end (rear) side of the vehicle 10000.
  • the sensing region 10101F and the sensing region 10101B are examples of sensing regions of the ultrasonic sensor 10054.
  • Sensing region 10101F covers the vicinity of the front end of vehicle 10000 by a plurality of ultrasonic sensors 10054.
  • Sensing region 10101B covers the vicinity of the rear end of vehicle 10000 by a plurality of ultrasonic sensors 10054.
  • the sensing results in the sensing region 10101F and the sensing region 10101B are used, for example, for parking assistance for the vehicle 10000.
  • Sensing region 10102F to sensing region 10102B are examples of sensing regions of short-range or medium-range radar 10052.
  • Sensing area 10102F covers the front of vehicle 10000 to a position farther than sensing area 10101F.
  • Sensing area 10102B covers the rear of vehicle 10000 to a position farther than sensing area 10101B.
  • Sensing region 10102L covers the rear periphery of the left side of vehicle 10000.
  • Sensing region 10102R covers the rear periphery of the right side of vehicle 10000.
  • the sensing results in the sensing region 10102F are used, for example, to detect vehicles, pedestrians, etc. that are present in front of the vehicle 10000.
  • the sensing results in the sensing region 10102B are used, for example, for a rear collision prevention function of the vehicle 10000.
  • the sensing results in sensing region 10102L and sensing region 10102R are used, for example, to detect an object in a blind spot on the side of vehicle 10000.
  • Sensing area 10103F to sensing area 10103B are examples of sensing areas by camera 10051.
  • Sensing area 10103F corresponds to the imaging range of camera 10051 (appropriately referred to as front camera) whose imaging direction is directed toward the front of vehicle 10000, and covers the front of vehicle 10000 to a position farther than sensing area 10102F.
  • Sensing area 10103B corresponds to the imaging range of camera 10051 (appropriately referred to as a rear camera) whose imaging direction is directed toward the rear of vehicle 10000, and covers the rear of vehicle 10000 to a position farther than sensing area 10102B.
  • the sensing region 10103L covers the periphery of the left side of the vehicle 10000.
  • The sensing region 10103L is, for example, a region corresponding to the imaging range of a camera 10051 (hereinafter referred to as a left camera) provided on the side mirror 60L on the left side of the vehicle 10000. That is, the side mirror 60L is provided with the camera 10051 so as to image the space on the left side of the vehicle 10000 (sensing area 10103L) when the side mirror 60L is deployed, in other words, when the side mirror 60L is in the normally used state.
  • the sensing region 10103R covers the periphery of the right side of the vehicle 10000.
  • Sensing region 10103R like sensing region 10103L described above, is a region corresponding to the imaging range of camera 10051 (hereinafter referred to as right camera) provided, for example, on side mirror 60R on the right side of vehicle 10000.
  • the side mirror 60R is provided with a camera 10051 so as to image the space on the right side of the vehicle 10000 (sensing area 10103R) when the side mirror 60R is expanded.
  • the sensing areas of the front camera, rear camera, left camera, and right camera can cover approximately 360° around the vehicle 10000.
  • the front camera, rear camera, left camera, and right camera constitute a surround camera.
  • the imaging unit in the claims of the present disclosure corresponds to a plurality of cameras such as a front camera, a rear camera, a left camera, and a right camera.
  • the sensing results in the sensing region 10103F can be used, for example, for recognition of traffic lights and traffic signs, lane departure prevention support systems, and automatic headlight control systems.
  • Sensing results in sensing region 10103B can be used, for example, in parking assistance, parking monitoring, and surround view systems.
  • the sensing results in the sensing region 10103L and the sensing region 10103R can be used, for example, in parking monitoring and surround view systems.
  • the sensing region 10104 shows an example of the sensing region of the LiDAR 10053. Sensing area 10104 covers the front of vehicle 10000 to a position farther than sensing area 10103F. On the other hand, the sensing region 10104 has a narrower range in the left-right direction than the sensing region 10103F.
  • the sensing results in the sensing area 10104 are used, for example, to detect objects such as surrounding vehicles.
  • the sensing area 10105 shows an example of the sensing area of the long-distance radar 10052. Sensing area 10105 covers a position farther forward than sensing area 10104 in front of vehicle 10000. On the other hand, the sensing region 10105 has a narrower range in the left-right direction than the sensing region 10104. Sensing results in the sensing area 10105 are used, for example, for ACC (Adaptive Cruise Control), emergency braking, collision avoidance, and the like.
  • ACC Adaptive Cruise Control
  • the sensing areas of the cameras 10051, radar 10052, LiDAR 10053, and ultrasonic sensors 10054 included in the external recognition sensor 10025 may have various configurations other than those shown in FIG. 4.
  • the ultrasonic sensor 10054 may also sense the side of the vehicle 10000, or the LiDAR 10053 may sense the rear of the vehicle 10000.
  • the installation position of each sensor is not limited to each example mentioned above. Further, the number of each sensor may be one or more than one.
  • The in-vehicle monitoring system according to the first embodiment gradually increases the resolution of the image data subjected to the detection processing, depending on the result of the detection processing performed on the image data captured by the image sensor. Thereby, power consumption in the parking monitoring system can be suppressed.
  • FIG. 5 is a block diagram schematically showing the configuration of an example of the vehicle-mounted monitoring device 1 according to the first embodiment.
  • the in-vehicle monitoring device 1 according to the first embodiment includes an image sensor 10 and an ECU 20, each of which is mounted on a vehicle.
  • the image sensor 10 includes an imaging section 100 and a detection section 101.
  • the imaging unit 100 includes an imaging device and a drive circuit that drives the imaging device, and outputs image data generated by imaging with the imaging device.
  • the detection unit 101 performs moving object detection and human detection based on the image data output from the imaging unit 100.
  • the image sensor 10 may transmit image data output from the imaging section 100 to the ECU 20.
  • the ECU 20 includes a recognition section 200, a determination section 201, and a control section 202.
  • the recognition unit 200 executes recognition processing based on the image data transmitted from the image sensor 10.
  • the determination unit 201 may determine whether or not to record the image data transmitted from the image sensor 10. Note that the image data may be recorded for a certain period of time regardless of the determination by the determination unit 201.
  • the control unit 202 controls recording of the image data in the storage device according to the determination result of the determination unit 201. Further, the control unit 202 sets the operation mode of the image sensor 10. Further, the control section 202 can communicate with each section of the vehicle control system 10011 via the communication network 10041.
  • the storage device is a nonvolatile recording medium such as a hard disk drive or flash memory, and is built into the in-vehicle monitoring device 1.
  • the storage device is not limited to this, and the storage device may be connected to the vehicle-mounted monitoring device 1 via a predetermined cable or communication means as an external device for the vehicle-mounted monitoring device 1.
  • the ECU 20 includes a versatile processor such as a CPU and a processor specialized for image processing such as an ISP.
  • the ECU 20 may be one that controls the entire vehicle (center), like the vehicle control ECU 10021 in FIG. 3, or may be one that controls a part of the vehicle (zone). The configuration is not limited to this, and the ECU 20 may be included in the vehicle-mounted monitoring device 1.
  • The vehicle-mounted monitoring device 1 is described as a single device including the image sensor 10 and the ECU 20, but the configuration is not limited to this example.
  • the in-vehicle monitoring device 1 may be configured as an in-vehicle monitoring system in which the image sensor 10 and the ECU 20 are separate devices, and the image sensor 10 and the ECU 20 communicate remotely.
  • Imaging is performed by the imaging unit 100 in the image sensor 10.
  • the detection unit 101 executes detection processing based on captured image data.
  • the image sensor 10 has a moving object detection mode, a person detection mode, and a recording mode as operating modes in the parking monitoring operation.
  • the moving object detection mode is an operation mode in which a moving object is detected using low resolution image data (hereinafter referred to as a low resolution image as appropriate).
  • the human detection mode is an operation mode in which a human is detected using medium resolution image data (hereinafter referred to as a medium resolution image as appropriate) having a higher resolution than a low resolution image.
  • the recording mode is an operation mode in which high-resolution image data (hereinafter referred to as a high-resolution image) having a higher resolution than a medium-resolution image is acquired in order to record the image data on a recording medium.
  • the operation mode of the image sensor 10 is set to the moving object detection mode in response to a trigger from the ECU 20.
  • the image sensor 10 sets the operation mode to the human detection mode, for example, depending on the detection result of the moving body detection mode. Further, the image sensor 10 sets the operation mode to the recording mode according to the detection result in the human detection mode.
  • FIG. 6 is a schematic diagram for explaining a low resolution image, a medium resolution image, and a high resolution image according to the first embodiment.
  • the imaging device included in the image sensor 10 includes 2000 x 2000 pixels Pix arranged in a matrix in its effective pixel area.
  • section (a) shows an example of a high resolution image 30a.
  • the high-resolution image 30a is processed at a resolution of 2000 pixels x 2000 pixels per frame in units of pixels Pix.
  • the high-resolution image 30a may be, for example, image data of the maximum resolution output by the imaging unit 100.
  • section (b) shows an example of a medium resolution image 30b.
  • the medium resolution image 30b is processed in blocks 31 of 2 pixels x 2 pixels. If the block 31 is regarded as one pixel, the medium resolution image 30b will be processed at a resolution of 100 pixels x 100 pixels in a frame of the same size as the high resolution image 30a.
  • the medium resolution image 30b may have a resolution that allows the detection unit 101 to detect an image that looks like a person (such as a silhouette of a person), for example.
  • section (c) shows an example of a low resolution image 30c.
  • the low resolution image 30c is processed in units of blocks 32 of 4 pixels x 4 pixels. If the block 32 is regarded as one pixel, the low-resolution image 30c will be processed with a resolution of 50 pixels x 50 pixels in a frame of the same size as the high-resolution image 30a.
  • the low-resolution image 30c may have a resolution that allows the detection unit 101 to detect a moving object, for example.
  • each block 32 may use, as the pixel value when the block 32 is regarded as one pixel, a representative value of the pixel values of the pixels Pix included in the block 32.
  • as the representative value, for example, the area average within the block 32 may be applied.
  • the present invention is not limited to this, and each block 32 may be configured by thinning out the pixels Pix within the effective pixel area according to the resolution. These techniques can be similarly applied to the generation of the medium resolution image 30b.
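  • as an illustration, the following minimal sketch shows how a lower-resolution frame could be derived from a full-resolution frame by area averaging per block or by thinning, as described above; the array sizes and function names are hypothetical and not part of the present disclosure.

```python
import numpy as np

def block_average(frame: np.ndarray, block: int) -> np.ndarray:
    """Downscale by taking the area average of each block x block region
    (the representative-value approach described above)."""
    h, w = frame.shape[:2]
    h, w = h - h % block, w - w % block          # drop edge pixels that do not fill a block
    view = frame[:h, :w].reshape(h // block, block, w // block, block)
    return view.mean(axis=(1, 3))

def thin_out(frame: np.ndarray, step: int) -> np.ndarray:
    """Downscale by keeping only every step-th pixel (thinning readout)."""
    return frame[::step, ::step]

# Hypothetical full-resolution frame standing in for the high resolution image 30a.
high_res = np.random.randint(0, 4096, (2000, 2000), dtype=np.uint16)
medium_res = block_average(high_res, 2)   # 2 x 2 blocks, as for the medium resolution image 30b
low_res = thin_out(high_res, 4)           # 4 x 4 thinning, as for the low resolution image 30c
```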
  • FIG. 7 is a schematic diagram for explaining the operation of the in-vehicle monitoring device 1 according to the first embodiment.
  • the image sensor 10 in the in-vehicle monitoring device 1 first executes moving object detection processing based on the low resolution image 30c in the moving object detection mode using the detection unit 101. It is assumed that the detection unit 101 detects a moving object in the low resolution image 30c_det1 acquired at time t_det1.
  • the detection unit 101 executes human detection processing using the medium resolution image 30b, for example, starting from the frame following the low resolution image 30c_det1 in which the moving object was detected.
  • the operation of the ECU 20 may be stopped or the ECU 20 may be transitioned to the power saving mode.
  • the detection unit 101 detects a person in the medium resolution image 30b_det2 acquired at time t_det2.
  • the ECU 20 returns to the normal operation mode when the detection unit 101 detects a person as a trigger.
  • the ECU 20 may, for example, use the high resolution image 30a from the frame following the medium resolution image 30b_det2 in which the person was detected, and perform recognition processing and the like with the recognition unit 200 focusing on the region 33 where the person was detected.
  • section (b) schematically shows an example of power consumption of the image sensor 10 and the ECU 20 in the first embodiment.
  • the horizontal axis indicates the time corresponding to section (a) in the figure, and the vertical axis indicates power consumption. Note that in the figure, the relationship in power consumption between the ECU 20 and the image sensor 10 is not limited to this example.
  • the image sensor 10 operates with low power consumption in the moving object detection mode, and when a moving object is detected at time t_det1 and the operation mode changes to the human detection mode, it operates with moderate power consumption. Further, when a person is detected at time t_det2 and the operation mode changes to the recording mode, the image sensor 10 operates with higher power consumption than in the human detection mode.
  • comparing the characteristic line 50 with the characteristic line 51 indicating the power consumption in the recording mode, the power consumption of the image sensor 10 in the moving object detection mode is about 1/100 of that in the recording mode, and in the human detection mode it is about 1/10 of that in the recording mode.
  • in the moving object detection mode and the human detection mode, the ECU 20 activates only some of its functions and deactivates the rest, thereby keeping its power consumption extremely low.
  • in the recording mode, the entire ECU 20 is activated and, as shown by the characteristic line 52, it consumes high power.
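  • purely as an illustration of the effect of the staged modes, the following back-of-the-envelope sketch uses the approximate ratios above (about 1/100 and 1/10 of the recording-mode power) together with assumed dwell-time fractions; the dwell fractions are hypothetical and not taken from the disclosure.

```python
# Normalized, illustrative figures only; the dwell fractions are assumptions.
P_rec = 1.0                  # recording-mode power of the image sensor (normalized)
P_motion = P_rec / 100       # moving object detection mode (about 1/100, per the text)
P_human = P_rec / 10         # human detection mode (about 1/10, per the text)

dwell = {"motion": 0.98, "human": 0.015, "record": 0.005}   # assumed time shares
avg = (dwell["motion"] * P_motion
       + dwell["human"] * P_human
       + dwell["record"] * P_rec)
print(f"average sensor power ~ {avg:.3f} x recording-mode power")   # ~ 0.016
```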
  • FIG. 8 is a schematic diagram for explaining each operation mode and its transition in the in-vehicle monitoring device 1 according to the first embodiment.
  • the in-vehicle monitoring device 1 detects movement in the low resolution image 30c acquired by the image sensor 10 in the moving object detection mode (step S11).
  • the in-vehicle monitoring device 1 acquires the low resolution image 30c in, for example, 50 x 50 processing units (see FIG. 6) and executes the moving object detection processing.
  • the in-vehicle monitoring device 1 can realize low power consumption by operating only the minimum necessary processing blocks for detecting a moving object in the chip that constitutes the image sensor 10 (first power consumption mode).
  • at this time, the subsequent chip (for example, the ECU 20) may operate in a processing-stopped state or in a power saving mode.
  • the in-vehicle monitoring device 1 may perform preprocessing such as geometric transformation and brightness scaling on the image data before it is input to the moving object detection processing.
  • when a moving object is detected in the moving object detection mode, the in-vehicle monitoring device 1 changes the operation mode to the human detection mode (step S12).
  • the in-vehicle monitoring device 1 detects a person included in the medium resolution image 30b acquired by the image sensor 10 in the person detection mode (step S21).
  • in the human detection mode, the in-vehicle monitoring device 1 acquires, for example, the medium resolution image 30b (see FIG. 6) in 100 x 100 processing units and executes the human detection processing.
  • in the human detection mode, as in the moving object detection mode, low power consumption can be achieved by operating only the minimum processing blocks necessary for human detection in the chip that constitutes the image sensor 10 (second power consumption mode). Since the human detection mode performs human detection processing using the medium resolution image 30b, which has a higher resolution than the image used in the moving object detection mode, its power consumption is higher than that of the moving object detection mode.
  • the image sensor 10 side takes charge of the detection processing, so the subsequent chip (for example, the ECU 20) may operate in a processing stopped state or in a power saving mode.
  • the in-vehicle monitoring device 1 may perform preprocessing such as geometric transformation and brightness scaling on the image data before it is input to the human detection processing.
  • when a person is detected in the human detection mode, the in-vehicle monitoring device 1 changes the operation mode to the recording mode (step S22). On the other hand, if no person is detected for a certain period of time in the person detection mode, the in-vehicle monitoring device 1 may change the operation mode of the image sensor 10 to the moving object detection mode (step S23).
  • the in-vehicle monitoring device 1 records the image data acquired by the image sensor 10 on the recording medium in the recording mode (step S31).
  • the vehicle-mounted monitoring device 1 acquires a high-resolution image 30a (see FIG. 6) and records the acquired high-resolution image 30a.
  • when recording ends, the in-vehicle monitoring device 1 changes the operation mode to the moving object detection mode (step S32).
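  • the mode transitions of steps S11 to S32 can be summarized as a small state machine; the sketch below is illustrative only, with hypothetical flags standing in for the detection results and the end-of-recording condition.

```python
from enum import Enum, auto

class Mode(Enum):
    MOTION_DETECT = auto()   # first power consumption mode, low resolution image 30c
    HUMAN_DETECT = auto()    # second power consumption mode, medium resolution image 30b
    RECORD = auto()          # recording of the high resolution image 30a

def next_mode(mode: Mode, moving_object: bool, person: bool,
              human_timeout: bool, recording_done: bool) -> Mode:
    """Transition rules corresponding to steps S11-S32 in FIG. 8."""
    if mode is Mode.MOTION_DETECT:
        return Mode.HUMAN_DETECT if moving_object else Mode.MOTION_DETECT   # S11 -> S12
    if mode is Mode.HUMAN_DETECT:
        if person:
            return Mode.RECORD                                               # S21 -> S22
        return Mode.MOTION_DETECT if human_timeout else Mode.HUMAN_DETECT    # S23
    return Mode.MOTION_DETECT if recording_done else Mode.RECORD             # S31 -> S32
```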
  • FIG. 9 is a block diagram showing the configuration of an example of the image sensor 10 according to the first embodiment.
  • the image sensor 10 includes an imaging block 110 that performs imaging processing, and a signal processing block 120 that performs processing according to each of the operation modes described above on the image data acquired by the imaging block 110.
  • the imaging block 110 and the signal processing block 120 are electrically connected by connection lines CL1, CL2, and CL3, which are internal buses, respectively.
  • the imaging block 110 includes an imaging unit 111, an imaging processing section 112, an output control section 113, an output I/F (interface) 114, and an imaging control section 115, and images a subject to obtain a captured image.
  • the imaging unit 111 includes a pixel array in which a plurality of pixels, each of which is a light receiving element that outputs a signal according to light received through photoelectric conversion, are arranged in a matrix.
  • the imaging unit 111 is driven by the imaging processing section 112 and performs imaging of a subject.
  • the imaging unit 111 receives incident light from an optical system in each pixel included in the pixel array, performs photoelectric conversion, and outputs an analog image signal corresponding to the incident light.
  • the size of the image based on the image signal output by the imaging unit 111 can be selected from a plurality of sizes, such as (width x height) 3968 pixels x 2976 pixels, 1920 pixels x 1080 pixels, and 640 pixels x 480 pixels.
  • the image size that the imaging unit 111 can output is not limited to this example.
  • an imaging unit 111 repeatedly acquires information on pixels arranged in a matrix at a predetermined rate (frame rate) in time series.
  • the image sensor 10 collectively outputs the acquired information for each frame.
  • under the control of the imaging control section 115, the imaging processing section 112 performs imaging processing related to the capture of an image by the imaging unit 111, such as driving the imaging unit 111, AD (Analog to Digital) conversion of the analog image signal output from the imaging unit 111, and imaging signal processing.
  • the imaging signal processing performed by the imaging processing section 112 includes, for example, processing that calculates the average pixel value for each predetermined small region of the image output by the imaging unit 111 to obtain the brightness of each small region, AGC (Auto Gain Control) processing, noise removal processing, and the like.
  • the imaging processing section 112 outputs, as image data, a digital image signal obtained by AD conversion or the like of the analog image signal output by the imaging unit 111.
  • the image data output from the imaging processing unit 112 may be image data (RAW data) of a RAW image that is not subjected to processing such as development.
  • the image data output by the imaging processing section 112 is supplied to the output control section 113 and also to the image compression section 125 of the signal processing block 120 via the connection line CL2.
  • the output control unit 113 is supplied with image data from the imaging processing unit 112, and is also supplied with signal processing results of signal processing using image data and the like from the signal processing block 120 via the connection line CL3.
  • the output control unit 113 performs output control to selectively output the image data from the imaging processing unit 112 and the signal processing result from the signal processing block 120 to the outside (for example, the ECU 20 or a recording medium external to the image sensor 10) through the single output I/F 114 connected to the outside. That is, the output control unit 113 selects either the image data from the imaging processing unit 112 or the signal processing result from the signal processing block 120, and supplies the selection to the output I/F 114.
  • the output I/F 114 is an interface that outputs the image data and signal processing results supplied from the output control unit 113 to the outside.
  • as the output I/F 114, for example, a relatively high-speed parallel I/F such as MIPI (Mobile Industry Processor Interface) can be adopted.
  • the output I/F 114 outputs the image data from the imaging processing section 112 or the signal processing result from the signal processing block 120 to the outside according to the output control of the output control section 113. Therefore, for example, if only the signal processing result from the signal processing block 120 is required externally and image data based on RAW data is not required, only the signal processing result can be output from the output I/F 114. The amount of data output to the outside can be reduced.
  • the signal processing block 120 performs signal processing to obtain a signal processing result required externally and outputs the signal processing result from the output I/F 114, thereby eliminating the need for signal processing outside the image sensor 10 and reducing the load on external blocks.
  • the imaging control unit 115 has a communication I/F 116 and a register group 117.
  • the communication I/F 116 is a first communication I/F, for example a serial communication I/F such as I2C (Inter-Integrated Circuit), and exchanges necessary information, such as information to be read from or written to the register group 117, with the outside (for example, the ECU 20).
  • the register group 117 has a plurality of registers, and stores imaging information related to the imaging of an image by the imaging unit 111 and other various information.
  • the register group 117 stores imaging information received from the outside through the communication I/F 116 and results of imaging signal processing by the imaging processing unit 112 (for example, the brightness of each small region of the captured image).
  • the imaging information stored in the register group 117 includes, for example, (information representing) the ISO (International Organization for Standardization) sensitivity (analog gain during AD conversion in the imaging processing unit 112), exposure time (shutter speed), frame rate, focus, shooting mode, cropping range, and the like.
  • Photography modes include, for example, a manual mode in which exposure time, frame rate, etc. are manually set, and an automatic mode in which they are automatically set according to the scene.
  • the automatic mode includes, for example, modes corresponding to various shooting scenes such as night scenes and human faces.
  • the cropping range refers to a range to be cropped from the image output by the image capturing unit 111 when the image capturing processing unit 112 cuts out a part of the image output by the image capturing unit 111 and outputs it as image data.
  • by specifying the cropping range, it becomes possible, for example, to crop only the range in which a person is shown from the image output by the imaging unit 111.
  • the imaging control section 115 controls the imaging processing section 112 according to the imaging information stored in the register group 117, thereby controlling the imaging of the image by the imaging unit 111.
  • the register group 117 can store not only imaging information and the results of imaging signal processing in the imaging processing unit 112 but also output control information related to output control in the output control unit 113.
  • the output control unit 113 can perform output control to selectively output the captured image and the signal processing result according to the output control information stored in the register group 117.
  • the imaging control unit 115 and the sensor control unit 121 of the signal processing block 120 are connected via the connection line CL1, and the sensor control unit 121 can read and write information to the register group 117 via the connection line CL1. That is, in the image sensor 10, information can be read from and written to the register group 117 not only from the communication I/F 116 but also from the sensor control unit 121.
  • the signal processing block 120 includes a sensor control section 121, a signal processing section 122, a memory 123, a communication I/F 124, an image compression section 125, and an input I/F 126, and performs predetermined signal processing on the captured image and other data obtained by the imaging block 110.
  • the sensor control unit 121 may be a processor such as a CPU or an MPU (Micro Processor Unit), or an MCU (Micro Controller Unit).
  • the sensor control unit 121, signal processing unit 122, memory 123, communication I/F 124, and input I/F 126 that constitute the signal processing block 120 are connected to each other via a bus and can exchange information as necessary.
  • by executing a program stored in the memory 123, the sensor control unit 121 controls the signal processing block 120, reads and writes information to the register group 117 of the imaging control unit 115 via the connection line CL1, and performs various other processing.
  • the sensor control unit 121 functions as an imaging information calculation unit that calculates imaging information using the signal processing result obtained by the signal processing in the signal processing unit 122, and the new imaging information calculated using the signal processing result is fed back to the register group 117 of the imaging control unit 115 via the connection line CL1 and stored there.
  • as a result, the sensor control unit 121 can control the imaging by the imaging unit 111 and the imaging signal processing in the imaging processing section 112 according to the signal processing result of the captured image.
  • the imaging information stored in the register group 117 by the sensor control unit 121 can be provided (output) to the outside from the communication I/F 116.
  • focus information among the imaging information stored in the register group 117 can be provided from the communication I/F 116 to a focus driver (not shown) that controls focus.
  • the signal processing unit 122 performs image processing on the image data supplied from the imaging block 110 to the signal processing block 120 via the connection line CL2, and signal processing using information received by the input I/F 126 from the outside.
  • the signal processing unit 122 may, for example, perform development processing on the image data, which is RAW data supplied from the imaging processing unit 112, to generate RGB data in which each pixel has color data of R (red), G (green), and B (blue).
  • the signal processing unit 122 is not limited to this, and may perform defect correction and AWB (Auto White Balance) processing on the image data supplied from the imaging processing unit 112, as well as HDR conversion processing that converts the image data into an HDR (High Dynamic Range) image, and the like.
  • the signal processing unit 122 may further include a function as an ISP.
  • the signal processing unit 122 executes the moving body detection process in the above-described moving body detection mode and the human detection process in the human detection mode.
  • the signal processing unit 122 may execute the above-described moving body detection processing and human detection processing using machine learning models for moving body detection and for human detection, respectively.
  • the signal processing unit 122 may detect a moving object using motion vector detection based on the difference in image data between frames, or may detect a person using pattern matching or the like.
  • the signal processing unit 122 controls the transition of the operation modes shown in FIG. 8 based on the detection results of these moving object detection processes and human detection processes. Further, the signal processing unit 122 may include the detection results of the moving object detection processing and the human detection processing in the signal processing results and output them.
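  • as a simple illustration of the inter-frame difference approach mentioned above, a moving object decision on low resolution frames might look like the following sketch; the thresholds are hypothetical, and a machine learning model or motion vector search could be substituted.

```python
import numpy as np

def detect_motion(prev: np.ndarray, curr: np.ndarray,
                  pixel_thresh: int = 10, area_thresh: int = 20) -> bool:
    """Count pixels whose inter-frame difference exceeds a threshold and report
    a moving object when enough pixels have changed."""
    diff = np.abs(curr.astype(np.int32) - prev.astype(np.int32))
    return int((diff > pixel_thresh).sum()) >= area_thresh

# Usage with two consecutive low resolution frames (hypothetical 50 x 50 arrays).
prev = np.zeros((50, 50), dtype=np.uint8)
curr = prev.copy()
curr[10:20, 10:20] = 200          # a bright patch appears in the second frame
assert detect_motion(prev, curr)  # reported as a moving object
```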
  • the memory 123 is composed of an SRAM (Static Random Access Memory), a DRAM (Dynamic RAM), etc., and stores data necessary for processing by the signal processing block 120.
  • the memory 123 stores programs received from the outside by the communication I/F 124, captured images that have been compressed by the image compression unit 125 and are used in signal processing by the signal processing unit 122, signal processing results obtained by the signal processing unit 122, information received by the input I/F 126, and the like.
  • the communication I/F 124 is a second communication I/F, for example a serial communication I/F such as SPI (Serial Peripheral Interface), and exchanges necessary information, such as programs to be executed by the sensor control unit 121 and the signal processing unit 122, with the outside (for example, the ECU 20).
  • the communication I/F 124 downloads a program to be executed by the sensor control unit 121 and the signal processing unit 122 from the outside, supplies it to the memory 123, and stores it. Therefore, depending on the program downloaded by the communication I/F 124, the sensor control unit 121 and the signal processing unit 122 can perform various processes.
  • the communication I/F 124 can exchange arbitrary data in addition to programs with the outside.
  • the communication I/F 124 can output a signal processing result obtained by signal processing in the signal processing unit 122 to the outside.
  • the communication I/F 124 outputs information according to instructions from the sensor control unit 121 to an external device, thereby making it possible to control the external device according to instructions from the sensor control unit 121.
  • the signal processing results obtained by signal processing in the signal processing unit 122 can be output to the outside from the communication I/F 124 and can also be written to the register group 117 of the imaging control unit 115 by the sensor control unit 121.
  • the signal processing results written in the register group 117 can be output from the communication I/F 116 to the outside. The same applies to the processing results of the processing performed by the sensor control unit 121.
  • a captured image is supplied to the image compression unit 125 from the imaging processing unit 112 via the connection line CL2.
  • the image compression unit 125 performs compression processing to compress image data, and generates compressed image data having a smaller amount of data than the image data.
  • the compressed image data generated by the image compression unit 125 is supplied to the memory 123 via the bus and stored therein.
  • signal processing in the signal processing unit 122 can be performed using not only the image data itself but also compressed image data generated from the image data by the image compression unit 125. Since compressed image data has a smaller amount of data than the image data supplied from the imaging processing section 112, the signal processing load on the signal processing section 122 can be reduced and the storage capacity of the memory 123 that stores the compressed image data can be saved.
  • as the compression processing in the image compression unit 125, for example, scaling down can be performed to convert a captured image of 3968 pixels x 2976 pixels into an image of 640 pixels x 480 pixels.
  • the compression processing may also include YUV conversion, which converts the RGB data into, for example, YUV image data.
  • image compression unit 125 can be realized by software or by dedicated hardware.
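  • a minimal sketch of the compression processing described above (scaling down and RGB-to-YUV conversion) is shown below; nearest-neighbour scaling and BT.601 coefficients are assumptions, since the disclosure does not specify the algorithms.

```python
import numpy as np

def scale_down(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbour scaling down, e.g. from 2976 x 3968 to 480 x 640."""
    h, w = img.shape[:2]
    ys = np.arange(out_h) * h // out_h
    xs = np.arange(out_w) * w // out_w
    return img[ys][:, xs]

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """RGB -> YUV conversion using BT.601 coefficients (assumed here)."""
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.169, -0.331,  0.500],
                  [ 0.500, -0.419, -0.081]], dtype=np.float32)
    return rgb.astype(np.float32) @ m.T

captured = np.random.randint(0, 256, (2976, 3968, 3), dtype=np.uint8)
compressed = rgb_to_yuv(scale_down(captured, 480, 640))   # smaller data for the memory 123
```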
  • the input I/F 126 is an I/F that receives information from the outside.
  • the input I/F 126 receives, for example, an output from an external sensor (external sensor output) and supplies the received output to the memory 123 via the bus for storage.
  • a parallel I/F such as MIPI can be used as the input I/F 126.
  • as the external sensor, for example, a distance sensor that senses information regarding distance can be adopted. Furthermore, as the external sensor, an image sensor that senses light and outputs an image corresponding to the light, that is, an image sensor different from the image sensor 10, can also be adopted.
  • the signal processing unit 122 can perform signal processing using the captured image, or the compressed image generated from the captured image, together with the external sensor output received by the input I/F 126 from the above-mentioned external sensor and stored in the memory 123.
  • the signal processing unit 122 performs signal processing using image data obtained by imaging with the imaging unit 111 or compressed image data generated from the image data.
  • the signal processing results and image data are selectively output from the output I/F 114. Therefore, the image sensor 10 that outputs information required by the user can be configured to be small.
  • note that image data can also be output without performing signal processing in the signal processing unit 122. That is, in this case, the image sensor 10 is configured simply to capture and output an image, and can be configured with only the imaging block 110, omitting the output control section 113. In this case, the moving body detection processing and the human detection processing may be executed at the output destination of the output I/F 114 (such as the ECU 20).
  • FIG. 10 is a perspective view schematically showing the structure of an example of the image sensor 10 according to the first embodiment described using FIG. 9.
  • the image sensor 10 can be configured as a one-chip semiconductor device having a stacked structure in which a plurality of dies are stacked.
  • the image sensor 10 is configured as a one-chip semiconductor device in which two dies 130 and 131 are stacked.
  • a die refers to a small piece of silicon with an electronic circuit built into it, and an individual product in which one or more dies are sealed is called a chip.
  • an imaging unit 111 is mounted on the upper die 130. Furthermore, the lower die 131 is equipped with an imaging processing section 112, an output control section 113, an output I/F 114, and an imaging control section 115. In this way, in the example of FIG. 10, the imaging unit 111 of the imaging block 110 is mounted on the die 130, and the parts other than the imaging unit 111 are mounted on the die 131.
  • a signal processing block 120 including a sensor control section 121, a signal processing section 122, a memory 123, a communication I/F 124, an image compression section 125, and an input I/F 126 is further mounted on the die 131.
  • the image sensor 10 has the imaging section 100 and the detection section 101 integrated into one chip.
  • the upper die 130 and the lower die 131 are electrically connected, for example, by forming a through hole that penetrates the die 130 and reaches the die 131.
  • the connection is not limited to this; the dies 130 and 131 may be electrically connected by metal-to-metal bonding or the like, such as a Cu-Cu bond that directly joins metal wiring such as Cu exposed on the lower surface side of the die 130 and metal wiring such as Cu exposed on the upper surface side of the die 131.
  • as a method for AD converting the image signal output by the imaging unit 111 in the imaging processing section 112, for example, a column-parallel AD method or an area AD method can be adopted.
  • in the column-parallel AD method, an ADC (Analog to Digital Converter) is provided, for example, for each column of pixels constituting the imaging unit 111, and the ADC of each column is responsible for AD conversion of the pixel signals of the pixels in that column, so that AD conversion of the image signals of the pixels in each column of one row is performed in parallel. When the column-parallel AD method is adopted, a part of the imaging processing unit 112 that performs the AD conversion may be mounted on the upper die 130.
  • in the area AD method, the pixels making up the imaging unit 111 are divided into a plurality of blocks, and an ADC is provided for each block. The ADC of each block takes charge of AD conversion of the pixel signals of the pixels of that block, so that AD conversion of the image signals of the pixels of the plurality of blocks is performed in parallel. In the area AD method, AD conversion (reading and AD conversion) of image signals can be performed only for necessary pixels among the pixels constituting the imaging unit 111, using a block as the minimum unit.
  • note that the image sensor 10 can also be configured with one die, or the one-chip image sensor 10 can be configured by stacking three or more dies. For example, when three dies are stacked to form the one-chip image sensor 10, the memory 123 mounted on the die 131 in FIG. 10 can be mounted on a die different from the dies 130 and 131.
  • in an image sensor (hereinafter also referred to as a bump-connected sensor) in which a sensor chip, a memory chip, and a DSP chip are connected in parallel with each other through a plurality of bumps, the thickness increases significantly and the device becomes larger compared to the one-chip image sensor 10 configured with a stacked structure.
  • with the image sensor 10 having a stacked structure, it is possible to prevent the device from increasing in size as described above and to avoid the situation in which a sufficient rate cannot be secured between the imaging processing section 112 and the output control section 113. Therefore, according to the image sensor 10 having a stacked structure, the configuration for outputting the information required by the processing subsequent to the image sensor 10 can be made compact.
  • when the information required by the user is image data, the image sensor 10 can output that image data (RAW data, RGB data, or the like).
  • when the information required by the user can be obtained by signal processing in the signal processing unit 122, the image sensor 10 can obtain the required information by performing that signal processing in the signal processing unit 122 and output the signal processing result.
  • as the signal processing performed by the image sensor 10, that is, the signal processing by the signal processing unit 122, for example, detection processing for detecting a moving object or a person from the image data, or recognition processing (such as face recognition) based on the image data (RAW data) acquired by the imaging unit 111, may be employed.
  • the image sensor 10 can receive, at the input I/F 126, the output of a distance sensor such as a ToF (Time of Flight) sensor that is arranged in a predetermined positional relationship with the image sensor 10.
  • in this case, as the signal processing of the signal processing unit 122, it is possible to employ, for example, processing that removes noise in the distance image obtained from the output of the distance sensor received by the input I/F 126 by using the captured image, or fusion processing that integrates the output of the distance sensor and the captured image to obtain a highly accurate distance.
  • the image sensor 10 can receive, at the input I/F 126, an image output by an image sensor arranged in a predetermined positional relationship with the image sensor 10.
  • in this case, as the signal processing of the signal processing unit 122, for example, self-position estimation processing (SLAM: Simultaneous Localization and Mapping) using the image received by the input I/F 126 and the captured image as a stereo image can be adopted.
  • FIG. 11 is a block diagram showing in more detail the configuration of an example of the imaging unit 111 applicable to the first embodiment.
  • the imaging unit 111 includes a pixel array section 1011, a vertical scanning section 1012, an AD (Analog to Digital) conversion section 1013, pixel signal lines 1016, vertical signal lines 1017, and an imaging operation control section 1019, and is connected to the imaging processing section 112.
  • the pixel array section 1011 includes a plurality of pixels Pix each having a photoelectric conversion element that performs photoelectric conversion on received light.
  • a photodiode can be used as the photoelectric conversion element.
  • a plurality of pixels Pix are arranged in a two-dimensional grid in the horizontal direction (row direction) and vertical direction (column direction).
  • the arrangement of pixels Pix in the row direction is called a line.
  • One frame of image (image data) is formed by pixel signals read out from a predetermined number of lines in this pixel array section 1011. For example, when one frame image is formed with 3000 pixels x 2000 lines, the pixel array section 1011 includes at least 2000 lines including at least 3000 pixels Pix.
  • a rectangular area formed by pixels Pix that output pixel signals effective for forming image data is referred to as an effective pixel area.
  • One frame of image is formed based on pixel signals of pixels Pix within the effective pixel area.
  • a pixel signal line 1016 is connected to each row of pixels Pix, and a vertical signal line 1017 is connected to each column.
  • the end of the pixel signal line 1016 that is not connected to the pixel array section 1011 is connected to the vertical scanning section 1012.
  • the vertical scanning unit 1012 transmits control signals such as drive pulses for reading out pixel signals from the pixels Pix to the pixel array unit 1011 via the pixel signal line 1016 under the control of the imaging operation control unit 1019 described later.
  • An end of the vertical signal line 1017 that is not connected to the pixel array section 1011 is connected to the AD conversion section 1013.
  • the pixel signal read from the pixel is transmitted to the AD converter 1013 via the vertical signal line 1017.
  • a pixel signal is read out from a pixel by transferring charges accumulated in a photoelectric conversion element by exposure to light to a floating diffusion layer (FD), and converting the transferred charges in the floating diffusion layer into a voltage.
  • a voltage resulting from charge conversion in the floating diffusion layer is output to the vertical signal line 1017 via an amplifier.
  • the floating diffusion layer and vertical signal line 1017 are connected in accordance with a selection signal supplied via pixel signal line 1016. Further, in response to a reset pulse supplied via the pixel signal line 1016, the floating diffusion layer is connected to the power supply voltage VDD or the black level voltage supply line for a short period of time to reset the floating diffusion layer. A reset level voltage (referred to as voltage P) of the floating diffusion layer is output to the vertical signal line 1017.
  • next, a transfer pulse supplied via the pixel signal line 1016 turns on (closes) the connection between the photoelectric conversion element and the floating diffusion layer, and the charges accumulated in the photoelectric conversion element are transferred to the floating diffusion layer.
  • a voltage (referred to as voltage Q) corresponding to the amount of charge in the floating diffusion layer is output to the vertical signal line 1017.
  • the AD conversion unit 1013 includes an AD converter 1300 provided for each vertical signal line 1017, a reference signal generation unit 1014, and a horizontal scanning unit 1015.
  • the AD converter 1300 is a column AD converter that performs AD conversion processing on each column of the pixel array section 1011.
  • the AD converter 1300 performs AD conversion processing on the pixel signal supplied from the pixel Pix via the vertical signal line 1017, and generates two digital values (values corresponding to the voltage P and the voltage Q, respectively) for correlated double sampling (CDS) processing to reduce noise.
  • the AD converter 1300 supplies the two generated digital values to the imaging processing section 112.
  • the imaging processing unit 112 performs CDS processing based on the two digital values supplied from the AD converter 1300, and generates a pixel signal (pixel data) as a digital signal.
  • the pixel data generated by the imaging processing section 112 is output to the outside of the imaging unit 111.
  • One frame worth of pixel data output from the imaging processing section 112 is supplied as image data to, for example, the output control section 113 and the image compression section 125.
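  • the CDS processing described above can be illustrated with the following minimal sketch, in which the difference of the two digital values removes noise components common to both samples; the sample values are hypothetical, and the sign convention depends on the implementation.

```python
import numpy as np

def cds(reset_level: np.ndarray, signal_level: np.ndarray) -> np.ndarray:
    """Correlated double sampling: subtract the reset-level digital value
    (voltage P) from the signal-level digital value (voltage Q) per pixel."""
    return signal_level.astype(np.int32) - reset_level.astype(np.int32)

# One row of hypothetical digital values produced by the column AD converters 1300.
p = np.array([105,  98, 110], dtype=np.uint16)   # reset level (voltage P)
q = np.array([905, 612, 143], dtype=np.uint16)   # signal level (voltage Q)
pixel_data = cds(p, q)                            # -> [800, 514, 33]
```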
  • the reference signal generation unit 1014 generates a ramp signal RAMP used by each AD converter 1300 to convert a pixel signal into two digital values, based on the ADC control signal input from the imaging operation control unit 1019.
  • the ramp signal RAMP is a signal whose level (voltage value) decreases at a constant slope over time, or a signal whose level decreases stepwise.
  • Reference signal generation section 1014 supplies the generated ramp signal RAMP to each AD converter 1300.
  • the reference signal generation unit 1014 is configured using, for example, a DA (Digital to Analog) conversion circuit.
  • under the control of the imaging operation control unit 1019, the horizontal scanning unit 1015 performs selective scanning that selects each AD converter 1300 in a predetermined order, thereby sequentially outputting the digital values temporarily held by each AD converter 1300 to the imaging processing unit 112.
  • the horizontal scanning unit 1015 is configured using, for example, a shift register or an address decoder.
  • the imaging operation control section 1019 performs drive control of the vertical scanning section 1012, AD conversion section 1013, reference signal generation section 1014, horizontal scanning section 1015, etc.
  • the imaging operation control unit 1019 generates various drive signals that serve as operating standards for the vertical scanning unit 1012, AD conversion unit 1013, reference signal generation unit 1014, and horizontal scanning unit 1015.
  • based on a vertical synchronization signal or an external trigger signal supplied from the outside (for example, the sensor control unit 121) and a horizontal synchronization signal, the imaging operation control unit 1019 generates control signals for the vertical scanning unit 1012 to scan each pixel Pix via the pixel signal lines 1016.
  • the imaging operation control unit 1019 supplies the generated control signal to the vertical scanning unit 1012.
  • based on the control signal supplied from the imaging operation control unit 1019, the vertical scanning unit 1012 supplies various signals, including drive pulses, line by line to each pixel Pix via the pixel signal line 1016 of the selected pixel row of the pixel array unit 1011, causing each pixel Pix to output a pixel signal to the vertical signal line 1017.
  • the vertical scanning unit 1012 is configured using, for example, a shift register or an address decoder.
  • the imaging unit 111 configured in this manner is a column AD type CMOS (Complementary Metal Oxide Semiconductor) image sensor in which AD converters 1300 are arranged in each column.
  • in the imaging unit 111, by sharing the floating diffusion layer among a plurality of pixels Pix, it is possible to obtain the area sum of the pixel values of the plurality of pixels Pix. For example, by sharing the floating diffusion layers of the four pixels Pix included in a 2 pixel x 2 pixel area, the area sum of the pixel values in the area can be obtained, and the area average can be calculated based on the area sum.
  • the present invention is not limited to this, and it is also possible to calculate the area average by image processing in the imaging processing section 112 or the signal processing section 122.
  • by means of the control signals output line by line from the vertical scanning unit 1012 via the pixel signal lines 1016 and the scanning of each column by the horizontal scanning unit 1015, thinned-out readout from the pixels Pix can also be realized.
  • if the medium resolution image 30b and the low resolution image 30c used in the moving body detection mode and the human detection mode are generated by sharing the floating diffusion layer in the imaging unit 111 or by controlling the readout of the pixels Pix, the power consumption related to the readout of the pixels Pix and the load of the AD conversion processing on the AD converters 1300 can be suppressed, achieving power saving. Furthermore, the detection unit 101 can reduce the power consumption of the detection processing by processing the medium resolution image 30b or the low resolution image 30c, which have fewer processing units than the high resolution image 30a.
  • it is also possible to generate the medium resolution image 30b and the low resolution image 30c by image processing in the signal processing unit 122, for example. In this case, however, the readout of the pixels Pix in the imaging unit 111 is performed in the same way as for the high resolution image 30a, so power saving in the imaging unit 111 cannot be achieved.
  • Each pixel Pix can be provided with a filter that selectively transmits light in a predetermined wavelength band.
  • the filter is called a color filter.
  • color filters for each wavelength band of red (R), green (G), and blue (B), which constitute the three primary colors, are arranged for each pixel Pix.
  • the invention is not limited to this, and color filters of complementary colors may be arranged for each pixel Pix, or filters that selectively transmit light in the infrared wavelength range, or filters that transmit light over the whole wavelength band, may be used.
  • these various filters will be explained using color filters as a representative.
  • FIG. 12 is a schematic diagram showing an example of a commonly used Bayer array.
  • the Bayer array consists of two pixels Pix(G) where G color filters are arranged, one pixel Pix(R) where an R color filter is arranged, and one pixel Pix(B) where a B color filter is arranged; these four pixels are arranged in a grid of 2 pixels x 2 pixels so that no two pixels Pix(G) are adjacent to each other.
  • the Bayer array is an array in which pixels Pix in which color filters that transmit light in the same wavelength band are arranged are not adjacent to each other.
  • hereinafter, the pixel Pix(R) where an R color filter is arranged is referred to as the "R color pixel Pix(R)" or simply the "pixel Pix(R)". The same applies to the pixel Pix(G) where the G color filter is arranged and the pixel Pix(B) where the B color filter is arranged. Furthermore, when the color filter is not at issue, each of the pixels Pix(R), Pix(G), and Pix(B) is represented by the pixel Pix.
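  • the 2 x 2 repeating pattern of the Bayer array described above can be written out as a small sketch; the RGGB phase chosen here is one possible arrangement and is only illustrative.

```python
import numpy as np

def bayer_mask(h: int, w: int) -> np.ndarray:
    """Repeating Bayer pattern: R and B on one diagonal of each 2 x 2 cell and
    the two G pixels on the other diagonal, so no two G pixels are adjacent."""
    tile = np.array([["R", "G"],
                     ["G", "B"]])
    return np.tile(tile, (h // 2 + 1, w // 2 + 1))[:h, :w]

print(bayer_mask(4, 4))
```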
  • FIG. 13 is a flowchart of an example of parking monitoring processing according to the first embodiment. The process according to the flowchart of FIG. 13 is started in the vehicle 10000 in which the in-vehicle monitoring device 1 according to the first embodiment is mounted, using, for example, a determination that the ignition is turned off as a trigger.
  • when the ignition is turned off, the ECU 20 executes control to retract the side mirrors and change the imaging direction of the cameras provided on the side mirrors (step S100). In the next step S101, the control unit 202 activates the parking monitoring mode as the operation mode of the in-vehicle monitoring device 1. Note that details of the process in step S100 will be described later.
  • in the next step S102, the ECU 20 uses the control unit 202 to change its own operation mode to the power saving mode or to stop some of its functions.
  • also in step S102, the ECU 20 uses the control unit 202 to issue a trigger to the image sensor 10 to start the detection processing.
  • the processing from step S103 to step S107 is performed on the image sensor 10 side.
  • in step S103, the image sensor 10 acquires the low resolution image 30c using the imaging unit 100.
  • the detection unit 101 instructs the imaging unit 100 to acquire the low resolution image 30c.
  • the imaging unit 100 acquires a low-resolution image 30c by area averaging, thinning, or the like.
  • in the next step S104, the detection unit 101 performs moving object detection processing on the low resolution image 30c acquired in step S103.
  • for example, the detection unit 101 may execute the moving object detection processing using the low resolution image 30c acquired in the immediately preceding step S103 and a low resolution image 30c from one to several frames before it.
  • if a moving object is not detected from the low resolution image 30c in step S104 (step S104, "No"), the detection unit 101 returns the process to step S103 and acquires the low resolution image 30c of the next frame. On the other hand, when a moving object is detected from the low resolution image 30c in step S104 (step S104, "Yes"), the detection unit 101 shifts the process to step S105.
  • in step S105, the image sensor 10 acquires the medium resolution image 30b using the imaging unit 100.
  • the detection unit 101 instructs the imaging unit 100 to acquire the medium resolution image 30b.
  • the imaging unit 100 acquires a medium resolution image 30b by area averaging, thinning, or the like.
  • in the next step S106, the detection unit 101 performs human detection processing on the medium resolution image 30b acquired in step S105.
  • for example, the detection unit 101 may perform the human detection processing using the medium resolution image 30b acquired in the immediately preceding step S105 and a medium resolution image 30b from one to several frames before it.
  • the detection target of the detection unit 101 in step S106 is not limited to humans.
  • the detection unit 101 may detect an object other than a person as long as it is an object that triggers the start of recording, which will be described later.
  • if no person is detected from the medium resolution image 30b in step S106 (step S106, "No"), the detection unit 101 moves the process to step S107.
  • in step S107, the detection unit 101 determines whether a certain period of time has passed since the human detection processing in step S106. If the detection unit 101 determines that the certain period of time has not elapsed (step S107, "No"), the process returns to step S105 and the medium resolution image 30b of the next frame is acquired. On the other hand, when the detection unit 101 determines that the certain period of time has elapsed (step S107, "Yes"), the process returns to step S103 and moving object detection based on the low resolution image 30c is executed.
  • if a person is detected in the medium resolution image 30b in step S106 (step S106, "Yes"), the detection unit 101 moves the process to step S108.
  • in step S108, the detection unit 101 restores the operation of the ECU 20.
  • the detection unit 101 issues an instruction to restore the operation of the ECU 20, and passes the instruction to the ECU 20.
  • upon receiving this instruction, the ECU 20 transitions its own operation mode from the power saving mode, or the state in which all but some functions are stopped, to the normal operation mode.
  • the processing from step S109 to step S111 is performed on the ECU 20 side.
  • in step S109, the ECU 20 in the in-vehicle monitoring device 1 acquires the high resolution image 30a using the imaging unit 100.
  • the ECU 20 instructs the image sensor 10 to acquire a high-resolution image 30a.
  • the image sensor 10 acquires a high-resolution image 30a without performing resolution reduction processing such as area averaging or thinning from the imaging unit 100.
  • in the next step S110, the ECU 20 executes recording processing with the determination unit 201 and the control unit 202, and records the high resolution image 30a in the storage device 103.
  • in the next step S111, the determination unit 201 in the ECU 20 determines whether or not to end recording of the high resolution image 30a.
  • if the determination unit 201 determines in step S111 that recording is not to be ended (step S111, "No"), the ECU 20 returns the process to step S109 and instructs the image sensor 10 to acquire the high resolution image 30a of the next frame.
  • on the other hand, if the determination unit 201 determines in step S111 that recording is to be ended (step S111, "Yes"), the determination unit 201 moves the process to step S102, and the operation mode of the ECU 20 is changed to the power saving mode or some of its functions are stopped.
  • the ECU 20 uses the control unit 202 to issue a trigger to the image sensor 10 to start the detection process in step S102.
  • the operation mode of the image sensor 10 is changed to the moving body detection mode or the human detection mode in response to this trigger. The ECU 20 therefore functions as a control unit that controls the imaging operation of the image sensor 10.
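  • the overall flow of FIG. 13 (steps S103 to S111) could be strung together as in the following sketch; the `sensor` and `ecu` objects and their methods are hypothetical placeholders for the image sensor 10 and the ECU 20 and are not part of the disclosure.

```python
import time

def parking_monitoring_loop(sensor, ecu, human_timeout_s: float = 60.0) -> None:
    """Sketch of steps S103-S111: moving object detection, human detection,
    then high resolution recording, falling back when nothing is found."""
    while True:
        # Moving object detection on low resolution frames (steps S103-S104).
        prev = sensor.capture_low_res()
        while True:
            curr = sensor.capture_low_res()
            if sensor.detect_motion(prev, curr):
                break
            prev = curr
        # Human detection on medium resolution frames (steps S105-S107).
        deadline = time.monotonic() + human_timeout_s
        person_found = False
        while time.monotonic() < deadline:
            if sensor.detect_person(sensor.capture_medium_res()):
                person_found = True
                break
        if not person_found:
            continue                      # back to moving object detection (step S107 -> S103)
        # Restore the ECU and record high resolution frames (steps S108-S111).
        ecu.wake()
        while not ecu.recording_finished():
            ecu.record(sensor.capture_high_res())
        ecu.enter_power_saving()          # corresponds to returning to step S102
```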
  • FIG. 14 is a flowchart of an example of the image data recording process according to the first embodiment.
  • the flowchart of FIG. 14 shows in more detail the processing of steps S109 to S111 in the flowchart of FIG. 13 described above.
  • the ECU 20 receives an image data recording request from the detection unit 101 in the image sensor 10, for example, by the control unit 202 (step S200).
  • the control unit 202 in the ECU 20 starts up the ISP included in the ECU 20.
  • the control unit 202 in the ECU 20 activates a storage device for recording image data.
  • in the next step S203, the control unit 202 in the ECU 20 acquires the high resolution image 30a from the image sensor 10.
  • the process in step S203 corresponds to, for example, the process in step S109 in the flowchart of FIG. 13.
  • in the next step S204, the recognition unit 200 in the ECU 20 executes the ISP function and performs image processing on the high resolution image 30a acquired in step S203.
  • the recognition unit 200 may cause the ISP to perform recognition processing for recognizing a recognition target on the high-resolution image 30a acquired as RAW data.
  • for example, the recognition unit 200 may take the person detected in the process of step S106 in the flowchart of FIG. 13 as the recognition target and recognize that person from the high resolution image 30a.
  • in the next step S205, the determination unit 201 in the ECU 20 determines whether the high resolution image 30a acquired in step S203 is a recording-determined image. For example, if a person is recognized by the recognition processing at the ISP in step S204, the determination unit 201 may determine that the high resolution image 30a is image data that has been determined to be recorded as a recording target.
  • if the determination unit 201 in the ECU 20 determines in step S205 that the high resolution image 30a is image data for which recording has been determined (step S205, "Yes"), the process proceeds to step S206.
  • in step S206, the control unit 202 in the ECU 20 records the high resolution image 30a in the storage device.
  • in the next step S207, the determination unit 201 in the ECU 20 determines whether or not to finish recording the high resolution image 30a. If the determination unit 201 determines that recording is not to be ended (step S207, "No"), the process returns to step S203. On the other hand, when the determination unit 201 determines that recording is to be ended (step S207, "Yes"), the process proceeds to step S209.
  • if the determination unit 201 in the ECU 20 determines in step S205 that the high resolution image 30a is not image data for which recording has been determined (step S205, "No"), the process moves to step S208. In step S208, the determination unit 201 determines whether the high resolution image 30a is image data of a scene to be recorded.
  • for example, the determination unit 201 may determine that the high resolution images 30a from the time when the recognition target is no longer recognized (that is, when the image data is determined not to be recording-determined image data) up to a predetermined number of frames later are image data of a scene to be recorded. That is, in parking monitoring, the frames before and after a person or object is detected may also be important. It is therefore preferable to continue recording image data for a while after a person or object that has been recognized is no longer recognized.
  • if the determination unit 201 determines in step S208 that the high resolution image 30a is image data of a scene to be recorded (step S208, "Yes"), the process moves to step S206, and the control unit 202 in the ECU 20 records the high resolution image 30a in the storage device.
  • on the other hand, if the determination unit 201 determines in step S208 that the high resolution image 30a is not image data of a scene to be recorded (step S208, "No"), the process moves to step S209.
  • in step S209, the control unit 202 in the ECU 20 stops the storage device for recording image data.
  • in the next step S210, the control unit 202 stops the operation of the ISP.
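  • the recording decision of steps S205 to S208, including the point that frames after the recognition target disappears are still recorded for a while, can be sketched as follows; the post-roll frame count is a hypothetical value.

```python
def should_record(recognized: bool, postroll_left: int,
                  postroll_frames: int = 30) -> tuple[bool, int]:
    """Record while the target is recognized (step S205 "Yes"), and keep
    recording for a predetermined number of frames after it disappears
    (step S208 "Yes"); otherwise stop (step S208 "No")."""
    if recognized:
        return True, postroll_frames
    if postroll_left > 0:
        return True, postroll_left - 1
    return False, 0

# Example: the person is visible for two frames and then disappears.
postroll = 0
for recognized in (True, True, False, False, False):
    record, postroll = should_record(recognized, postroll, postroll_frames=2)
    print(record, postroll)   # True 2 / True 2 / True 1 / True 0 / False 0
```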
  • the in-vehicle monitoring device 1 increases the resolution of the image data to be processed in stages according to the detection results for the image data.
  • the in-vehicle monitoring device 1 executes moving object detection and human detection using the minimum necessary processing blocks at each stage of image data resolution, and finally records a high-resolution image 30a.
  • the moving object detection and the human detection are executed with the ECU 20 downstream of the image sensor 10 in a stopped state or in a power saving mode. Therefore, the in-vehicle monitoring device 1 according to the first embodiment can monitor the surroundings of the vehicle during parking with low power consumption, and can perform continuous monitoring operation for a long time.
  • furthermore, the in-vehicle monitoring device 1 records in the storage device only the image data of the high resolution image 30a obtained after the human detection processing, and does not record the medium resolution image 30b and the low resolution image 30c used in the moving object detection mode and the human detection mode. Therefore, the in-vehicle monitoring device 1 according to the first embodiment can suppress consumption of the storage capacity of the storage device, and since only necessary image data is recorded, the effort of checking the recorded image data can also be reduced.
  • the side mirrors 60L and 60R have no difference other than the left and right sides, so unless otherwise specified, the side mirrors 60L and 60R will be described with the side mirror 60R as a representative. Further, when there is no need to distinguish between the side mirrors 60L and 60R, the side mirrors 60L and 60R may be collectively described as the side mirror 60.
  • the side mirror 60R is provided with a camera (referred to as a side mirror camera) for capturing an image of the sensing region 10103R that covers the periphery of the right side of the vehicle 10000.
  • the side mirror camera is attached to the side mirror 60R so as to image the sensing region 10103R when the side mirror 60R is deployed. Therefore, when the side mirror 60R is retracted, the imaging direction may change, making it difficult to image the sensing region 10103R.
  • FIG. 15 is a schematic diagram showing a side mirror camera according to the existing technology.
  • the side mirror 60R includes a mirror portion 61 and an arm portion 62, and is fixed to the side surface of the vehicle 10000 by the arm portion 62.
  • the side mirror 60R can be folded by a shaft 63 provided on the arm portion 62 so that the mirror surface faces the side surface of the vehicle 10000.
  • a state in which the side mirror 60R is folded is called a stored state, and a state in which it is opened as shown in the figure is called a deployed state.
  • the side mirror camera 70 is fixedly attached to the side mirror 60R using a camera fixing jig 72.
  • the side mirror camera 70 is attached to the side mirror 60R, for example, so that the lens portion 71 faces slightly downward.
  • the side mirror camera 70 is an example of the camera 10051 described using FIGS. 3 and 4.
  • FIG. 16A is a schematic diagram showing an example of the imaging range of the side mirror camera 70 and the captured image when the side mirror 60R is deployed, according to the existing technology.
  • When the side mirror 60R is deployed, the imaging range (sensing area 10103R) of the side mirror camera 70 provided on the side mirror 60R covers the periphery of the right side of the vehicle 10000.
  • Section (b) of FIG. 16A shows an example of an image captured by the side mirror camera 70 mounted on the side mirror 60R when the side mirror 60R is expanded.
  • the left side is the direction of the front of the vehicle 10000.
  • the lens section 71 often uses a fisheye lens or a super wide-angle lens so that a wide angle of view can be obtained.
  • a fisheye lens is used as the lens section 71.
  • FIG. 16B is a schematic diagram showing an example of the imaging range of the side mirror camera 70 and a captured image when the side mirror 60R is retracted according to the existing technology.
  • When the side mirror 60R is retracted, the imaging direction of the side mirror camera 70 mounted on the side mirror 60R turns toward the rear of the vehicle 10000 in accordance with the state of the side mirror 60R.
  • In this case, the imaging range (sensing area 10103R') of the side mirror camera 70 covers only the rear right side of the vehicle 10000, and the front right side of the vehicle 10000 becomes a blind spot, as shown by arrow C in the figure.
  • the imaging direction of the side mirror camera 70 mounted on the side mirror 60L faces toward the rear of the vehicle 10000.
  • the sensing area 10103L' by the side mirror camera 70 covers only the rear left side of the vehicle 10000, and the front left side of the vehicle 10000 becomes a blind spot, as shown by arrow D in the figure.
  • Section (b) of FIG. 16B shows an example of an image captured by the side mirror camera 70 mounted on the side mirror 60R when the side mirror 60R is retracted. Since the side mirror camera 70 faces the rear of the vehicle 10000, the left half of the captured image is an image of the vehicle 10000, the right half is an image of the rear right side of the vehicle 10000, and does not include an image of the front of the vehicle 10000. In this way, it can be seen that when the side mirror 60R is retracted, the front of the vehicle 10000 is a blind spot.
  • FIG. 17 is a schematic diagram showing the side mirror camera 70 according to the first embodiment.
  • the side mirror camera 70 is attached to the side mirror 60R using a camera fixing jig 72a.
  • the camera fixing jig 72a can control the attitude of the side mirror camera 70 using, for example, two rotation axes.
  • More specifically, the camera fixing jig 72a can control the attitude of the side mirror camera 70 in the vertical direction and the horizontal direction, each within a predetermined angle range, as shown by arrows A and B in the figure.
  • FIG. 18 is a schematic diagram for explaining control of the imaging direction of the side mirror camera 70 according to the first embodiment.
  • section (a) shows the side mirror 60R when it is unfolded.
  • The posture of the side mirror camera 70 is controlled by the camera fixing jig 72a so that, when the side mirror 60R is deployed, the imaging direction (the direction of the lens portion 71) faces away from the vehicle 10000. Therefore, the imaging range of the side mirror camera 70 can cover the sensing region 10103R (not shown) on the right side of the vehicle 10000.
  • section (b) shows the side mirror 60R when it is stored.
  • When the side mirror 60R is stored, it is folded about the shaft 63 with the mirror surface facing the vehicle 10000 side.
  • Even in this state, the attitude of the side mirror camera 70 is controlled by the camera fixing jig 72a so that the imaging direction still faces away from the vehicle 10000.
  • In other words, when the side mirror 60R is retracted, the posture of the side mirror camera 70 is controlled by the camera fixing jig 72a so that the imaging direction, that is, the direction of the lens portion 71, maintains the direction it had when the side mirror 60R was deployed.
  • the imaging range of the side mirror camera 70 can cover the sensing region 10103R (not shown) on the right side of the vehicle 10000 even when the side mirror 60R is retracted. Therefore, it is possible to suppress the occurrence of blind spots in the existing technology, which was explained using FIG. 16B.
  • In order to direct (fix or change) the imaging range of the side mirror camera 70 mounted on the side mirror 60 in an arbitrary three-dimensional direction within a predetermined angular range, a motor, or a drive component having a function equivalent to a motor, is incorporated into the camera fixing jig 72a that fixes the side mirror camera 70 to the side mirror 60.
  • the drive component is driven according to the attitude of the side mirror 60. More specifically, when the angle of the side mirror 60 with respect to the vehicle 10000 is changed, the drive component is controlled based on a control signal that offsets the movement of the side mirror camera 70 due to the angle change.
  • FIG. 19 is a block diagram showing the configuration of an example of a camera fixing jig driving section that drives the camera fixing jig 72a according to the first embodiment.
  • the camera fixing jig drive section 75a includes a drive control section 76a, drive circuits 77v and 77h, and motors 78v and 78h.
  • the drive control section 76a is included in the ECU 20.
  • the present invention is not limited to this, and the ECU 20 may further include the drive circuits 77v and 77h.
  • the drive circuits 77v and 77h generate drive signals for driving the camera fixing jig 72a vertically and horizontally within a predetermined angular range, respectively, according to the drive information supplied from the drive control unit 76a. .
  • the drive circuit 77v drives the motor 78v using the generated drive signal.
  • the motor 78v drives the camera fixing jig 72a in the vertical direction.
  • the drive circuit 77h drives the motor 78h using the generated drive signal.
  • the motor 78h drives the camera fixing jig 72a in the horizontal direction.
  • Thereby, the attitude of the side mirror camera 70 changes in the horizontal direction.
  • the drive circuits 77v and 77h output information regarding the drive of the motors 78v and 78h, respectively, as motor information.
  • the motor information includes, for example, information indicating the rotation angle of the motor.
  • The drive control unit 76a acquires, for example via the communication network 10041 from the vehicle control system 10011, side mirror drive motor information of the motor that drives the side mirror 60R between the deployed state and the retracted state. Furthermore, the drive control unit 76a acquires the motor information output from the drive circuits 77v and 77h. Based on the side mirror drive motor information and the motor information output from the drive circuits 77v and 77h, the drive control unit 76a generates drive information for each of the drive circuits 77v and 77h so as to maintain the imaging direction of the side mirror camera 70 in the direction it had when the side mirror 60 was deployed.
  • the ECU 20 includes the drive control section 76a and functions as a control section that controls the imaging direction of the side mirror camera 70.
  • FIG. 20 is a flowchart of an example showing control of the imaging direction of the side mirror camera 70 according to the first embodiment.
  • In step S300, the drive control unit 76a acquires motor information of each motor that drives the side mirror 60.
  • the drive control unit 76a acquires at least motor information of a motor that drives the side mirror 60 to expand and retract.
  • the drive control unit 76a may further acquire motor information of a motor that drives the mirror surface of the side mirror 60.
  • In the next step S301, the drive control unit 76a acquires motor information of the motors 78v and 78h that drive the camera fixing jig 72a relative to the side mirror 60.
  • In the next step S302, the drive control unit 76a uses the motor information acquired in step S300 and the motor information acquired in step S301 to calculate the three-dimensional relative posture of the side mirror camera 70 with respect to the body of the vehicle 10000. That is, in step S302 the drive control unit 76a calculates the current posture information of the side mirror camera 70.
  • the attitude information of the side mirror camera 70 may include three-dimensional rotation information (roll, pitch, yaw) of the side mirror camera 70 and orientation information including height information of the side mirror camera 70. That is, the attitude information of the side mirror camera 70 can be said to be information indicating the imaging direction of the side mirror camera 70.
  • In the next step S303, the drive control unit 76a calculates the difference between the preset three-dimensional posture information of the side mirror camera 70 for parking (for example, for when the side mirror 60 is retracted) and the current three-dimensional posture information calculated in step S302.
  • In the next step S304, the drive control unit 76a generates drive information for driving the camera fixing jig 72a based on the difference in posture information calculated in step S303. More specifically, the drive control unit 76a generates drive information for driving each of the motors 78v and 78h so that the difference approaches zero.
  • the drive control unit 76a supplies the generated drive information to drive circuits 77v and 77h, respectively.
  • Each of the drive circuits 77v and 77h generates a drive signal based on the supplied drive information, and drives the motors 78v and 78h, respectively.
  • By the attitude control of the side mirror camera 70 according to the first embodiment, it is possible to significantly suppress the occurrence of the blind spot that, as described for the existing technology, arises when the side mirror 60 is retracted (a numerical sketch of this compensation is given below).
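  • The compensation performed in steps S300 to S304 amounts to subtracting the side mirror's folding rotation from the camera fixing jig's own rotation so that the net imaging direction stays fixed relative to the vehicle body. The following Python sketch illustrates this for the horizontal axis only, under the assumption of simple additive angles; the names and angle conventions are illustrative, not taken from the present disclosure, and the same logic applies to the vertical axis with motor 78v.

```python
# Sketch of drive-information generation from motor rotation angles (cf. steps S300-S304).
# Angles in degrees; the additive yaw model and all names are illustrative assumptions.

def current_camera_yaw(mirror_yaw_deg: float, jig_yaw_deg: float) -> float:
    """Camera yaw relative to the vehicle body: side mirror folding angle
    plus the camera fixing jig's own horizontal rotation."""
    return mirror_yaw_deg + jig_yaw_deg

def horizontal_drive_info(mirror_yaw_deg: float, jig_yaw_deg: float,
                          target_yaw_deg: float, limit_deg: float = 90.0) -> float:
    """Correction to apply via motor 78h so that the difference to the
    preset (deployed-state) imaging direction approaches zero."""
    diff = target_yaw_deg - current_camera_yaw(mirror_yaw_deg, jig_yaw_deg)
    # clamp to the predetermined angular range of the camera fixing jig
    return max(-limit_deg, min(limit_deg, diff))

# Example: the mirror is folded by 70 degrees toward the body while the jig is at 0;
# the jig must rotate by -70 degrees to keep the deployed-state direction (0 degrees).
print(horizontal_drive_info(mirror_yaw_deg=70.0, jig_yaw_deg=0.0, target_yaw_deg=0.0))  # -70.0
```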
  • A first modification of the first embodiment is an example in which the vehicle 10000 and the side mirror camera 70 each have an IMU (Inertial Measurement Unit), and the outputs of these IMUs are used to control the imaging range and imaging direction of the side mirror camera 70.
  • FIG. 21 is a block diagram showing the configuration of an example of a camera fixing jig driving section that drives the camera fixing jig 72a according to the first modification of the first embodiment.
  • camera fixing jig drive section 75b includes a drive control section 76b, drive circuits 77v and 77h, and motors 78v and 78h.
  • the drive control section 76b is included in the ECU 20.
  • the present invention is not limited to this, and the ECU 20 may further include the drive circuits 77v and 77h.
  • drive circuits 77v and 77h and the motors 78v and 78h are equivalent to the drive circuits 77v and 77h and the motors 78v and 78h described using FIG. 19, so their descriptions will be omitted here.
  • the drive control unit 76b is supplied with camera attitude information indicating the attitude of the side mirror camera 70, which is output from the IMU 79 included in the side mirror camera 70.
  • The IMU 79 is not limited to being included in the side mirror camera 70, and may instead be provided near the attachment position of the side mirror camera 70 on the camera fixing jig 72a.
  • the drive control unit 76b further acquires vehicle body posture information indicating the body posture of the vehicle 10000 output from the IMU included in the vehicle sensor 10027 via the communication network 10041.
  • Based on the camera attitude information and the vehicle body attitude information, the drive control unit 76b generates drive information for the drive circuits 77v and 77h so as to maintain the imaging direction of the side mirror camera 70 in the direction it had when the side mirror 60 was deployed.
  • Each drive circuit 77v and 77h generates each drive signal for driving each motor 78v and 78h, based on each drive information generated by drive control section 76b.
  • FIG. 22 is an example flowchart showing control of the imaging direction of the side mirror camera according to the first modification of the first embodiment.
  • In step S400, the drive control unit 76b acquires three-dimensional posture information of the vehicle body (vehicle body posture information) from the IMU included in the vehicle sensor 10027.
  • In the next step S401, the drive control unit 76b acquires three-dimensional attitude information (camera attitude information) of the side mirror camera 70 output from the IMU 79.
  • In the next step S402, the drive control unit 76b uses the vehicle body posture information acquired in step S400 and the camera posture information acquired in step S401 to calculate the three-dimensional relative posture of the side mirror camera 70 with respect to the body of the vehicle 10000. That is, in step S402 the drive control unit 76b calculates the current posture information of the side mirror camera 70.
  • In the next step S403, the drive control unit 76b calculates the difference between the preset three-dimensional posture information of the side mirror camera 70 for parking (for example, for when the side mirror 60 is retracted) and the current three-dimensional posture information calculated in step S402.
  • In the next step S404, the drive control unit 76b generates drive information for driving the camera fixing jig 72a based on the difference in posture information calculated in step S403. More specifically, the drive control unit 76b generates drive information for driving each of the motors 78v and 78h so that the difference approaches zero.
  • the drive control unit 76b supplies the generated drive information to drive circuits 77v and 77h, respectively.
  • Each of the drive circuits 77v and 77h generates a drive signal based on the supplied drive information, and drives the motors 78v and 78h, respectively.
  • In the next step S405, the drive control unit 76b determines whether the attitude of the side mirror camera 70 has become the ideal attitude. For example, the drive control unit 76b may determine that the attitude of the side mirror camera 70 has become the ideal attitude when the difference calculated in step S403 falls within a predetermined range including 0. Alternatively, in step S405 the drive control unit 76b may execute the processes of steps S400 to S403 again and determine, based on the newly calculated difference, whether the attitude of the side mirror camera 70 has become the ideal attitude.
  • If the drive control unit 76b determines that the attitude is not yet the ideal attitude (step S405, "No"), the process returns to step S400.
  • When the drive control unit 76b determines that the attitude of the side mirror camera 70 has become the ideal attitude (step S405, "Yes"), it ends the series of processes according to the flowchart of FIG. 22.
  • By the attitude control of the side mirror camera 70 according to the first modification of the first embodiment, it is likewise possible to significantly suppress the occurrence of the blind spot that, as described for the existing technology, arises when the side mirror 60 is retracted (a simplified sketch of this closed-loop correction follows).
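  • Steps S400 to S405 form a closed loop that repeats until the difference between the current posture and the preset posture falls within a tolerance. The Python sketch below illustrates that loop under simplifying assumptions: the IMU accessors, the Pose fields, the tolerance, and the use of plain Euler-angle subtraction for the relative posture are all hypothetical and only approximate the processing described above.

```python
# Closed-loop attitude correction using vehicle-body and camera IMU outputs
# (cf. steps S400-S405). All names, the tolerance, and the Euler-angle
# subtraction used for the relative posture are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Pose:
    roll: float
    pitch: float
    yaw: float

def relative(camera: Pose, body: Pose) -> Pose:
    """Approximate camera posture relative to the vehicle body (cf. step S402)."""
    return Pose(camera.roll - body.roll,
                camera.pitch - body.pitch,
                camera.yaw - body.yaw)

def correct_attitude(vehicle_imu, camera_imu, jig, target: Pose,
                     tol_deg: float = 0.5, max_iter: int = 100) -> bool:
    for _ in range(max_iter):
        body = vehicle_imu.read()                  # cf. step S400: vehicle body posture
        cam = camera_imu.read()                    # cf. step S401: camera posture (IMU 79)
        rel = relative(cam, body)                  # cf. step S402
        diff = Pose(target.roll - rel.roll,        # cf. step S403: difference to preset posture
                    target.pitch - rel.pitch,
                    target.yaw - rel.yaw)
        # only pitch and yaw can be corrected by the two motors 78v / 78h
        if max(abs(diff.pitch), abs(diff.yaw)) < tol_deg:
            return True                            # cf. step S405 "Yes": ideal attitude reached
        jig.drive(pitch=diff.pitch, yaw=diff.yaw)  # cf. step S404: drive motors 78v / 78h
    return False
```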
  • In the above description, the attitude of the side mirror camera 70 is controlled using the output of the IMU included in the vehicle sensor 10027 of the vehicle 10000 and the output of the IMU 79 included in the side mirror camera 70, but the configuration is not limited to this example.
  • For example, there may be a case where the IMU 79 is provided in the side mirror camera 70 or near its mounting position on the camera fixing jig 72a, while the vehicle 10000 is not provided with an IMU, or the output of the IMU provided in the vehicle 10000 is unavailable to the drive control section 76b.
  • In this case, the attitude information of the vehicle 10000 can be estimated based on the attitude information of the side mirror camera 70 acquired by the IMU 79 and the state of the motor that drives the side mirror 60 between the deployed state and the retracted state.
  • the drive control unit 76b determines the three-dimensional relative attitude of the side mirror camera 70 with respect to the vehicle 10000 based on the attitude information of the vehicle 10000 estimated in this way and the attitude information of the side mirror camera 70.
  • the drive control unit 76b uses the relative posture obtained in this way to execute the processes from step S403 in the flowchart of FIG. 22.
  • the image data captured by the side mirror camera 70 may be converted into an image while taking into account the three-dimensional posture of the side mirror 60.
  • In this case, the above-mentioned motors 78v and 78h are not driven; instead, in order to adjust the imaging range when the side mirror 60 is retracted, the three-dimensional posture information of the side mirror camera 70 at the time of deployment and at the time of retraction may be held and used.
  • the attitude of the side mirror camera 70 may be changed to an attitude preset for parking monitoring.
  • For example, when the side mirror 60 is deployed, the imaging direction of the side mirror camera 70 is generally directed toward the ground in order to monitor the vicinity of the wheels; with the imaging range used in that deployed state, it is therefore difficult to monitor a wide range around the vehicle 10000.
  • In such a case, the imaging direction of the side mirror camera 70 may be changed, for example to the horizontal direction, so that a wider range around the vehicle 10000 can be monitored during parking.
  • In the first embodiment described above, a sensor that receives light in the visible wavelength range, using color filters that transmit light in the wavelength ranges of the R, G, and B colors, is used as the imaging unit 100 of the image sensor 10.
  • In a second modification of the first embodiment, a sensor capable of receiving light in the infrared wavelength region (IR (Infrared) light) in addition to light in the visible wavelength region is applied as the imaging unit 100.
  • In parking monitoring, 24-hour monitoring is desirable; however, with a sensor that receives only light in the visible wavelength range, nighttime performance depends on the sensor sensitivity, and there is a limit to the monitoring accuracy. In the second modification of the first embodiment, nighttime performance is improved by using a sensor capable of receiving IR light.
  • FIG. 23 is a schematic diagram showing the principle of a detection method using IR light according to a second modification of the first embodiment.
  • a light source 80 capable of emitting light including IR light is used to irradiate a subject 81 with light.
  • the reflected light from the subject 81 is irradiated onto the imaging unit 111 via the lens 82, the dual bandpass filter 83, and the color filter section 84.
  • The color filter section 84 includes color filters that transmit light in the visible wavelength ranges of the R, G, and B colors and an IR filter that transmits light in the IR wavelength range, arranged for each pixel Pix.
  • FIG. 24 is a schematic diagram showing an example of the arrangement of color filters including IR filters (referred to as RGBIR arrangement) in the color filter section 84.
  • In the RGBIR arrangement shown in FIG. 24, the unit is 16 pixels (4 pixels x 4 pixels) and contains two pixels Pix(R), two pixels Pix(B), eight pixels Pix(G), and four pixels Pix(IR) in which an IR filter is disposed.
  • The pixels Pix are arranged such that pixels provided with filters that transmit light in the same wavelength band are not adjacent to each other (one arrangement consistent with this description is sketched below).
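  • A 4 x 4 unit satisfying the stated ratios can be written down directly. The layout used in the following Python sketch is only one arrangement consistent with the description above and is an assumption for illustration; the actual pattern shown in FIG. 24 may differ.

```python
# One 4x4 RGBIR unit consistent with the description: 2 R, 2 B, 8 G, 4 IR pixels,
# with no two horizontally or vertically adjacent pixels sharing the same filter.
# This specific layout is an assumption; FIG. 24 may use a different arrangement.
UNIT = [
    ["G",  "R", "G",  "B"],
    ["IR", "G", "IR", "G"],
    ["G",  "B", "G",  "R"],
    ["IR", "G", "IR", "G"],
]

def counts(unit):
    c = {}
    for row in unit:
        for f in row:
            c[f] = c.get(f, 0) + 1
    return c

def no_same_neighbors(unit):
    n = len(unit)
    for y in range(n):
        for x in range(n):
            # check right and down neighbors, wrapping so a tiled array is also valid
            if unit[y][x] == unit[y][(x + 1) % n] or unit[y][x] == unit[(y + 1) % n][x]:
                return False
    return True

print(counts(UNIT))             # {'G': 8, 'R': 2, 'B': 2, 'IR': 4}
print(no_same_neighbors(UNIT))  # True
```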
  • FIG. 25 is a schematic diagram showing an example of spectral characteristics of a sensor in which color filters that transmit light in each wavelength region of each RGB color and an IR filter that transmits light in an IR wavelength region are arranged.
  • Characteristic lines 90R, 90G, and 90B show examples of the characteristics of the color filters that transmit light in the visible wavelength ranges of the R, G, and B colors, respectively, and characteristic line 90IR shows an example of the characteristic of the IR filter that transmits light in the IR wavelength range.
  • the wavelength range from 400 nm to 700 nm is the visible light wavelength range
  • the wavelength range near 940 nm is the IR light wavelength range.
  • FIG. 26 is a schematic diagram showing an example of the spectral characteristics of the dual bandpass filter 83. As shown as a characteristic line 91 in FIG. 26, the dual bandpass filter 83 selectively transmits light in a visible wavelength range of 400 nm to 700 nm and light in a wavelength range near 940 nm. It has the characteristics of
  • By using a sensor capable of receiving IR light and equipped with the color filter section 84 having the RGBIR array shown in FIG. 24 as the camera 10051 in the external recognition sensor 10025, and by projecting IR light at night, the accuracy of parking monitoring at night can be improved.
  • Regarding the detection function of the detection unit 101, when a machine learning model is used for human detection, for example, the model is also trained to handle the IR light environment. Furthermore, a drive recorder installed inside a car may be unable to receive IR light because of the windshield glass, but this is not a problem for a camera mounted outside the car.
  • FIG. 27 is a block diagram showing the configuration of an example of an in-vehicle monitoring device 1a according to a third modification of the first embodiment.
  • the imaging section 100 is included in the image sensor 10a as one chip, and the detection section 101 is included in the detection unit 11 as one chip.
  • the ECU 20 includes a recognition section 200, a determination section 201, and a control section 202, similar to the in-vehicle monitoring device 1 described using FIG.
  • the image sensor 10a outputs a low resolution image 30c in the moving object detection mode, and outputs a medium resolution image 30b in the human detection mode. Furthermore, the image sensor 10a outputs a high resolution image 30a in the recording mode.
  • Thereby, the load of the readout process from each pixel Pix in the imaging unit 100 and of the detection process by the detection unit 101 is reduced, and the ECU 20 operates in a low power consumption mode, making it possible to save power.
  • In the first embodiment described above, the imaging section 100 and the detection section 101 are integrated into one chip, and the ECU 20 includes the recognition section 200, the determination section 201, and the control section 202.
  • In a fourth modification of the first embodiment, by contrast, the detection unit 101 is configured on the ECU 20 side.
  • FIG. 28 is a block diagram showing the configuration of an example of an in-vehicle monitoring device 1b according to a fourth modification of the first embodiment.
  • the ECU 20a includes a detection unit 101, a recognition unit 200, a determination unit 201, and a control unit 202, and the image sensor 10a includes only the imaging unit 100.
  • the image sensor 10a outputs a low resolution image 30c in the moving object detection mode, and outputs a medium resolution image 30b in the human detection mode. Furthermore, the image sensor 10a outputs a high resolution image 30a in the recording mode. Furthermore, when the detection unit 101 executes the detection process, the ECU 20a causes the recognition unit 200, the determination unit 201, and the control unit 202 to operate in a low power consumption mode.
  • Thereby, the load of the readout process from each pixel Pix in the imaging unit 100 and of the detection process by the detection unit 101 is reduced, and the recognition unit 200, the determination unit 201, and the control unit 202 in the ECU 20a operate in a low power consumption mode, making it possible to save power.
  • The second embodiment of the present disclosure differs from the first embodiment described above in that the imaging direction of one or more of the front, rear, left, and right cameras 10051 is changed according to the detection result of moving object detection or human detection so as to track a target object.
  • In the second embodiment, it is assumed that the front, rear, left, and right cameras 10051 are each fixed to the vehicle body of the vehicle 10000 by the camera fixing jig 72a described in the first embodiment, and that the imaging direction of each camera can be changed within a predetermined angular range in both the vertical direction and the horizontal direction.
  • FIG. 29 is a schematic diagram for explaining control of the imaging direction according to the second embodiment.
  • the target object 95 is cut off at the left end of the image data captured in the sensing region 10103R and at the right end of the image data captured in the sensing region 10103F.
  • image distortion becomes large at the edges of the captured image data. Therefore, these image data may have low effectiveness as evidence.
  • the imaging directions of the front camera and the right camera whose captured images include the object 95 are directed toward the object 95.
  • Thereby, the object 95 is located at the center of each of the sensing area 10103F' of the front camera and the sensing area 10103R' of the right camera, and the evidentiary ability of the captured image data is improved.
  • More specifically, when the operation transitions to the recording mode according to the human detection result in the detection unit 101, the recognition unit 200 of the ECU 20 obtains the position of the target object 95 within the image based on the high-resolution image 30a.
  • Based on the obtained position of the target object 95 in the image, the control unit 202 generates drive information for the motors 78v and 78h that control the imaging direction of the camera 10051 which, among the cameras 10051, can include that position in the center of its imaging range.
  • Drive circuits 77v and 77h drive motors 78v and 78h according to drive information generated by control unit 202. Thereby, the imaging direction of the front camera and the right camera can be directed toward the target object 95, and the target object 95 can be tracked.
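  • The drive information in this step can be thought of as converting the pixel offset of the target object 95 from the image center into pan and tilt corrections for the motors 78h and 78v. The Python sketch below shows one such conversion assuming a simple pinhole model with a linear angle mapping; the field-of-view values and all names are assumptions, and the distortion of a fisheye or ultra-wide-angle lens is ignored.

```python
# Sketch: turn the detected position of the target object into pan/tilt corrections
# that re-center it in the imaging range (motors 78h / 78v).
# Pinhole approximation with linear angle mapping; fisheye distortion is ignored.
# All names and the field-of-view values are illustrative assumptions.

def center_offset_angles(cx: float, cy: float, width: int, height: int,
                         hfov_deg: float, vfov_deg: float):
    """Angular offset (pan, tilt) of a detection centered at (cx, cy)."""
    nx = (cx - width / 2) / (width / 2)    # normalized horizontal offset in [-1, 1]
    ny = (cy - height / 2) / (height / 2)  # normalized vertical offset in [-1, 1]
    pan = nx * hfov_deg / 2                # positive: object is right of center
    tilt = -ny * vfov_deg / 2              # positive: object is above center
    return pan, tilt

def track(detection_box, width, height, hfov_deg=120.0, vfov_deg=90.0):
    x0, y0, x1, y1 = detection_box         # bounding box of the target object 95
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    pan, tilt = center_offset_angles(cx, cy, width, height, hfov_deg, vfov_deg)
    return {"motor_78h": pan, "motor_78v": tilt}   # drive information

# Example: object detected near the right edge of a 1920x1080 frame
print(track((1700, 400, 1900, 800), 1920, 1080))
# {'motor_78h': 52.5, 'motor_78v': -5.0}
```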
  • The ECU 20 may change the resolution of the image data captured by the front camera and the right camera from the resolution used before the imaging direction was directed toward the object 95; in this case, a relatively high resolution is preferable.
  • the imaging directions of the front camera and the right camera are changed with respect to the object 95, but this is not limited to this example.
  • For example, when a mechanism for changing the imaging direction is provided only for the right and left cameras and an event is detected based on image data captured by the front camera or the rear camera, the imaging direction of the right or left camera may be changed.
  • In this way, the blind spots of other cameras, such as the front camera and the rear camera, may be covered.
  • Furthermore, when the vehicle 10000 is equipped with a camera whose imaging direction can be changed under control, in addition to the front, rear, left, and right cameras 10051 (surround cameras), the imaging direction of that camera may be directed toward the target object 95 when the object 95 is detected.
  • In this way, in the in-vehicle monitoring device according to the second embodiment, even when the object 95 is not fully captured within the imaging range of the front, rear, left, and right cameras 10051 provided in the vehicle 10000, the object 95 can be included in the center of the imaging range by changing the imaging direction of the camera 10051 in whose image the object 95 is cut off. Therefore, recognition processing can be performed on the object 95 with high precision, and the evidentiary ability of the image data can be improved.
  • A third embodiment of the present disclosure is an example in which, with respect to the second embodiment described above, the imaging direction of an in-vehicle camera provided inside the vehicle is changed according to the detection result of moving object detection or human detection so as to track the object.
  • The in-vehicle camera is an example of the in-vehicle sensor 10026 shown in FIG. , is installed inside the vehicle (for example, at the center of the upper end of the windshield), and its imaging direction can be changed within a predetermined angular range in the vertical and horizontal directions according to instructions from the ECU 20.
  • FIG. 30 is a schematic diagram for explaining control of the imaging direction according to the third embodiment.
  • sensing region 10106 corresponds to the imaging range of in-vehicle camera 10160.
  • Section (a) of FIG. 30, similarly to section (a) of FIG. 29, shows an example in which the object 95 is captured at the edges of the sensing regions 10103R and 10103F.
  • In the third embodiment, the imaging direction of the in-vehicle camera 10160 is directed toward the object 95, as shown in section (b) of FIG. 30.
  • Thereby, the object 95 is located at the center of the sensing area 10106' of the in-vehicle camera 10160, and the evidentiary ability of the captured image data is improved compared to the case of section (a) of FIG. 30.
  • More specifically, when the operation transitions to the recording mode according to the human detection result in the detection unit 101, the recognition unit 200 of the ECU 20 obtains the position of the target object 95 within the image based on the high-resolution image 30a.
  • the control unit 202 generates drive information for driving a motor that controls the imaging direction of the in-vehicle camera 10160, based on the determined position of the object 95 in the image.
  • a drive circuit that drives the motor drives the motor according to drive information generated by the control unit 202.
  • the imaging direction of the in-vehicle camera 10160 can be directed toward the object 95, and the object 95 can be tracked.
  • In this way, in the in-vehicle monitoring device according to the third embodiment, even when the object 95 is captured only at the edges of the imaging ranges of the front, rear, left, and right cameras 10051 provided in the vehicle 10000, imaging can be performed with the target object 95 included in the center of the imaging range by changing the imaging direction of the in-vehicle camera 10160. Therefore, recognition processing can be performed on the object 95 with high precision, and the evidentiary ability of the image data can be improved.
  • each camera provided as a surround camera on the front, rear, left, and right sides of the vehicle body of the vehicle 10000 often uses an ultra-wide-angle lens or a fisheye lens to ensure a wide sensing area.
  • the in-vehicle camera 10160 generally uses a lens with a standard angle of view. Therefore, by controlling the imaging direction of the in-vehicle camera 10160 and tracking the object 95, it is possible to recognize the object 95 with higher accuracy, and the captured image data can be expected to have high evidentiary ability. .
  • the in-vehicle monitoring device detects a person based on the image data captured by the imaging unit 100, but this is not limited to this example.
  • the in-vehicle monitoring device may detect not only a person but also an object that may harm the vehicle 10000. Examples of objects that may cause harm to vehicle 10000 include other vehicles and bicycles. Examples of such objects may include people. Even in this case, the in-vehicle monitoring device may start recording captured image data to the storage device in response to detection of the object.
  • Furthermore, cameras other than the camera that made the detection may also be transitioned to a high-resolution imaging state.
  • the in-vehicle monitoring device in order to realize keyless start-up of the vehicle 10000 using a DMS (Driver Monitoring System) or the like, the in-vehicle monitoring device according to each of the disclosed embodiments and modifications may be applied.
  • the in-vehicle monitoring device can reduce power consumption as a whole by reducing power consumption in stages for face detection, face recognition, and personal authentication processing.
  • an in-vehicle parking monitoring system that covers 360° around the vehicle 10000 as a monitoring range can be realized with low power consumption, and parking monitoring can be performed for a long time.
  • When the side mirror camera 70 according to each embodiment and each modification of the present disclosure is mounted on an electrically retractable side mirror 60, it becomes possible to change the imaging direction of the side mirror camera 70 between driving (deployed) and parking (retracted), and blind spots caused by the side mirror camera 70 can be reduced. Furthermore, since the imaging direction of the side mirror camera 70 is generally directed toward the ground, by three-dimensionally changing the imaging direction (posture) of the side mirror camera 70 when parking, the evidentiary ability of recorded images can be improved.
  • By applying an RGBIR array filter to the imaging unit 100 and projecting IR light, in-vehicle parking monitoring with high nighttime performance can be realized, and 24-hour monitoring including nighttime becomes possible.
  • According to each embodiment and each modification of the present disclosure, a surround camera already mounted on the vehicle 10000 can be used, eliminating the need for parking monitoring with an aftermarket drive recorder, and loss of the design quality of the vehicle 10000 can also be avoided.
  • According to each embodiment and each modification of the present disclosure, moving object detection and human detection are performed based on the captured image data, and high-resolution image data is recorded in the storage device according to the results. Therefore, it is possible to reduce pressure on the recording capacity of the storage device and to reduce the effort required of the user to check recorded moving images. Additionally, unnecessary recording is suppressed, data output outside the sensor is limited, and privacy protection and security can be improved.
  • An in-vehicle monitoring device comprising: an imaging unit that is installed in a vehicle including a side mirror housing and that generates image data in response to imaging; and a control unit that controls an imaging operation and an imaging direction of the imaging unit, wherein the control unit controls the imaging direction of the imaging unit according to the state of the side mirror housing and sets the imaging operation of the imaging unit to a first power consumption mode in response to a determination that the ignition of the vehicle is off, and the imaging unit, when the movement of a peripheral object is detected based on first image data generated in the first power consumption mode, sets the imaging operation to a second power consumption mode in which power consumption is higher than in the first power consumption mode.
  • the second power consumption mode is a mode that generates second image data with a resolution lower than the maximum resolution of image data generated by the imaging unit
  • the first power consumption mode is a mode that generates first image data with a resolution lower than the maximum resolution of image data generated by the imaging unit.
  • The control unit, when an object that may cause harm to a person or to the vehicle is detected based on the second image data generated by the imaging unit in the second power consumption mode, sets the imaging operation of the imaging unit to a third power consumption mode that consumes more power than the second power consumption mode. The in-vehicle monitoring device according to (1) or (2) above.
  • the third power consumption mode is a mode for generating third image data having a higher resolution than the second image data generated in the second power consumption mode.
  • the control unit includes: recording the third image data in a recording section; The in-vehicle monitoring device according to (4) above.
  • the control unit includes: controlling the imaging direction of the imaging unit to maintain the imaging direction when the side mirror casing is deployed, according to a state of the side mirror casing; The vehicle-mounted monitoring device according to any one of (1) to (5) above.
  • the control unit includes: controlling the imaging direction of the imaging unit to maintain the imaging direction when the side mirror casing is unfolded when the side mirror casing is retracted; The in-vehicle monitoring device according to (6) above.
  • the control unit includes: controlling the imaging direction of the imaging unit based on drive information for driving the side mirror housing and drive information for changing the imaging direction of the imaging unit with respect to the side mirror housing; The in-vehicle monitoring device according to (6) or (7) above.
  • the control unit includes: controlling the imaging direction of the imaging unit based on attitude information indicating the attitude of the vehicle and attitude information indicating the attitude of the imaging unit; The in-vehicle monitoring device according to (6) or (7) above.
  • (10) The control unit, when a target object is detected based on the image data generated by the imaging unit, acquires image data captured in the direction of the target object. The vehicle-mounted monitoring device according to any one of (1) to (9) above.
  • (11) The control unit changes the imaging direction of the imaging unit to the direction of the object. The in-vehicle monitoring device according to (10) above.
  • (12) The control unit changes the imaging direction of an in-vehicle camera provided inside the vehicle to the direction of the object. The in-vehicle monitoring device according to (10) above.
  • (13) The control unit, when the target object is detected based on the first image data generated by the imaging unit in the first power consumption mode, acquires image data, captured in the direction of the target object, with a resolution higher than that of the first image data. The vehicle-mounted monitoring device according to any one of (10) to (12) above.
  • the imaging unit and the control unit are integrally configured, The vehicle-mounted monitoring device according to any one of (1) to (13) above.
  • the imaging unit and the control unit are configured separately, The vehicle-mounted monitoring device according to any one of (1) to (13) above.
  • the control unit is included in the configuration of a processing unit that executes subsequent processing.
  • The in-vehicle monitoring device according to any one of (1) to (16) above, wherein the vehicle includes first and second cameras provided in the side mirror housings of the vehicle, third and fourth cameras provided at the front and rear of the vehicle, and an in-vehicle camera provided inside the vehicle.
  • An information processing device comprising a control unit that controls an imaging operation and an imaging direction of an imaging unit which is provided in a vehicle including a side mirror housing and which generates image data in response to imaging, wherein the control unit: controls the imaging direction of the imaging unit according to the state of the side mirror housing; sets the imaging operation of the imaging unit to a first power consumption mode in response to a determination that the ignition of the vehicle is off; and, when the movement of a peripheral object is detected based on the first image data generated by the imaging unit in the first power consumption mode, sets the imaging operation of the imaging unit to a second power consumption mode in which power consumption is higher than in the first power consumption mode.
  • An in-vehicle monitoring system including: an imaging device that is installed in a vehicle including a side mirror housing and that generates image data in response to imaging; and a control device that communicates with the imaging device and controls the imaging operation and imaging direction of the imaging device, wherein the control device controls the imaging direction of the imaging device according to the state of the side mirror housing and sets the imaging operation of the imaging device to a first power consumption mode in response to a determination that the ignition of the vehicle is off, and the imaging device, when the motion of a peripheral object is detected based on the first image data generated in the first power consumption mode, sets the imaging operation to a second power consumption mode in which power consumption is higher than in the first power consumption mode.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

According to the present disclosure, an in-vehicle monitoring device is disposed in a vehicle including a side mirror housing and comprises: an imaging unit (100) that generates image data in response to imaging; and a control unit (20) that controls the imaging operation and imaging direction of the imaging unit. According to the state of the side mirror housing, the control unit controls the imaging direction of the imaging unit and, in response to a determination that the ignition of the vehicle is off, sets the imaging operation of the imaging unit to a first power consumption mode. When the movement of a surrounding object is detected on the basis of the first image data generated in the first power consumption mode, the imaging unit sets the imaging operation to a second power consumption mode that consumes more power than the first power consumption mode.
PCT/JP2023/007277 2022-07-26 2023-02-28 Dispositif de surveillance embarqué, dispositif de traitement d'informations et système de surveillance embarqué WO2024024148A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-118660 2022-07-26
JP2022118660 2022-07-26

Publications (1)

Publication Number Publication Date
WO2024024148A1 true WO2024024148A1 (fr) 2024-02-01

Family

ID=89705922

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/007277 WO2024024148A1 (fr) 2022-07-26 2023-02-28 Dispositif de surveillance embarqué, dispositif de traitement d'informations et système de surveillance embarqué

Country Status (1)

Country Link
WO (1) WO2024024148A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240107185A1 (en) * 2022-09-23 2024-03-28 Pixart Imaging Inc. Motion sensor and motion detection system using the same

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009006974A (ja) * 2007-06-29 2009-01-15 Denso Corp サイドミラー装置およびサイドミラーシステム
JP2012242993A (ja) * 2011-05-18 2012-12-10 Nissan Motor Co Ltd 移動体監視装置及び移動体の監視方法
JP2018014554A (ja) * 2016-07-19 2018-01-25 株式会社クボタ 作業車
WO2019003826A1 (fr) * 2017-06-27 2019-01-03 ソニーセミコンダクタソリューションズ株式会社 Dispositif de capture d'image, système d'utilisation de véhicule et système de surveillance de véhicule
WO2020129279A1 (fr) * 2018-12-19 2020-06-25 株式会社Jvcケンウッド Dispositif, système, procédé et programme de commande d'enregistrement


Similar Documents

Publication Publication Date Title
JP6795030B2 (ja) 撮像制御装置及び撮像制御方法、並びに撮像装置
KR102613792B1 (ko) 촬상 장치, 화상 처리 장치 및 화상 처리 방법
JP7024782B2 (ja) 画像処理装置と画像処理方法および撮像装置
US11895398B2 (en) Imaging device and imaging system
US10704957B2 (en) Imaging device and imaging method
WO2020196092A1 (fr) Système d'imagerie, procédé de commande de système d'imagerie, et système de reconnaissance d'objets
WO2020080383A1 (fr) Dispositif d'imagerie et équipement électronique
US11585898B2 (en) Signal processing device, signal processing method, and program
US20220148432A1 (en) Imaging system
WO2024024148A1 (fr) Dispositif de surveillance embarqué, dispositif de traitement d'informations et système de surveillance embarqué
DE112019001772T5 (de) Bildgebungsvorrichtung
JP2018064007A (ja) 固体撮像素子、および電子装置
JP6981416B2 (ja) 画像処理装置と画像処理方法
CN115918101A (zh) 摄像装置、信息处理装置、摄像系统和摄像方法
WO2022153896A1 (fr) Dispositif d'imagerie, procédé et programme de traitement des images
WO2020036044A1 (fr) Dispositif de traitement d'image, procédé de traitement d'image et programme
WO2024106196A1 (fr) Dispositif d'imagerie à semi-conducteurs et appareil électronique
WO2024106132A1 (fr) Dispositif d'imagerie transistorisé et système de traitement d'informations
WO2021125076A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations, programme, dispositif de capture d'image et système de capture d'image
US20230412923A1 (en) Signal processing device, imaging device, and signal processing method
WO2021229983A1 (fr) Dispositif et programme de capture d'image
WO2022153888A1 (fr) Dispositif d'imagerie à semi-conducteur, procédé de commande destiné à un dispositif d'imagerie à semi-conducteur et programme de commande destiné à un dispositif d'imagerie à semi-conducteur
WO2022054742A1 (fr) Élément de capture d'image et dispositif de capture d'image
JP2024073899A (ja) 撮像素子
TW202240475A (zh) 資訊處理裝置、資訊處理系統、資訊處理方法及記錄媒體

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23845888

Country of ref document: EP

Kind code of ref document: A1