WO2022062582A1 - Method and apparatus for controlling the fill-light time of a camera module - Google Patents

Method and apparatus for controlling the fill-light time of a camera module

Info

Publication number
WO2022062582A1
Authority
WO
WIPO (PCT)
Prior art keywords
photosensitive chip
image
target
exposure period
target area
Prior art date
Application number
PCT/CN2021/106061
Other languages
English (en)
French (fr)
Inventor
何彦杉
徐彧
黄为
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to EP21870960.8A (published as EP4207731A4)
Publication of WO2022062582A1
Priority to US18/189,362 (published as US20230232113A1)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; control thereof
    • H04N 23/50: Constructional details
    • H04N 23/55: Optical parts specially adapted for electronic image sensors; mounting thereof
    • H04N 23/56: Cameras or camera modules provided with illuminating means
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/611: Control based on recognised objects where the recognised objects include parts of the human body
    • H04N 23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N 23/74: Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • H04N 23/743: Bracketing, i.e. taking a series of images with varying exposure conditions
    • H04N 5/33: Transforming infrared radiation

Definitions

  • the present application relates to the field of automatic driving, and more particularly, to a method and device for controlling the fill-light time of a camera module.
  • Artificial intelligence (AI) is a theory, method, technology, and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results.
  • In other words, artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that responds in a way similar to human intelligence.
  • Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision-making.
  • Research in the field of artificial intelligence includes robotics, natural language processing, computer vision, decision-making and reasoning, human-computer interaction, recommendation and search, and basic AI theory.
  • Autonomous driving is a mainstream application in the field of artificial intelligence.
  • Autonomous driving technology relies on the cooperation of computer vision, radar, monitoring devices, and global positioning systems to allow motor vehicles to drive autonomously without active human operation.
  • Autonomous vehicles use various computing systems to help transport passengers or cargo from one location to another. Some autonomous vehicles may require some initial or continuous input from an operator, such as a pilot, driver, or passenger.
  • An autonomous vehicle allows the operator to switch from a manual mode of operation to an autonomous driving mode or a mode in between. Since automatic driving technology does not require humans to drive motor vehicles, it can theoretically effectively avoid human driving errors, reduce the occurrence of traffic accidents, and improve the efficiency of highway transportation. Therefore, autonomous driving technology is getting more and more attention.
  • In the cockpit, an infrared (IR) camera can be used for fatigue detection of the driver, behavior recognition and gesture recognition of the driver or other passengers, and detection of left-behind objects.
  • Infrared cameras are not affected by visible light and can work normally during the day or night.
  • When a traditional infrared camera takes an image, the infrared light source serves as the light source, and each photosensitive chip row in the photosensitive chip is exposed row by row through a rolling shutter; a complete exposure is finished only after all the photosensitive chip rows have been exposed.
  • Because the infrared light source works for the entire exposure, the heat generation of this infrared camera module is relatively high.
  • the present application provides a method and device for controlling the fill-light time of a camera module, which can reduce the heat generation of the infrared camera module.
  • In a first aspect, a method for controlling the fill-light time of a camera module is provided, comprising: determining a first target area in a first image captured by the camera before the current frame, where the first target area is an area in the first image that needs fill light; determining, according to the first target area, a first exposure period of a first target photosensitive chip row in the current frame, where the first target photosensitive chip row refers to the chip row in the photosensitive chip used to generate the image content in the first target area; and, when exposing the photosensitive chip in the current frame, instructing the infrared light source to perform fill light according to the first exposure period.
  • the camera module may include a camera, and the camera may include a photosensitive chip.
  • In this way, the first exposure period of the first target photosensitive chip row in the current frame is determined according to the area in the first image that needs fill light, and when the photosensitive chip is exposed in the current frame, the infrared light source is instructed to perform fill light according to the first exposure period. This can reduce the working time of the infrared light source and thereby reduce the heat generation of the infrared camera module.
  • The method for controlling the fill-light time of the camera module in the embodiments of the present application does not add or change hardware modules (or units) in the camera module; instead, the fill-light period of the current frame (for example, the first exposure period) is determined for the photosensitive chip according to the first target area, and fill light is performed only within that period when the current frame exposes the photosensitive chip, which reduces the working time of the infrared light source. Therefore, the heat generation of the infrared camera module can be reduced without increasing cost.
  • The method for controlling the fill-light time of the camera module in the embodiments of the present application also does not increase the size of the camera module, which helps keep the camera module compact.
  • the camera may further include an infrared light source and a rolling shutter.
  • the infrared light source in the embodiments of the present application may be either a built-in infrared light source of the camera, or an independent external infrared light source, which is not limited in the embodiments of the present application.
  • In some implementations, determining the first target area in the first image captured by the camera before the current frame includes: determining the first target area in the first image according to a preset target object.
  • In this way, the fill-light time of the photosensitive chip in the current frame can be flexibly controlled according to the preset target object.
  • the first target area is a face area in the first image.
  • In some implementations, the photosensitive chip includes a plurality of photosensitive chip rows, and a plurality of pixel rows in the first image correspond to the plurality of photosensitive chip rows; determining the first exposure period of the first target photosensitive chip row in the current frame according to the first target area includes: determining the first target photosensitive chip row corresponding to the pixel rows in the first target area of the first image, and determining the first exposure period of the first target photosensitive chip row in the current frame.
  • Since the pixel rows in the first image correspond to the photosensitive chip rows, the first target photosensitive chip row corresponding to the pixel rows in the first target area can be conveniently determined, which in turn makes it convenient to determine the first exposure period of the first target photosensitive chip row in the current frame.
  • In some implementations, the method further includes: determining a second target area in the second image acquired in the current frame, where the second target area is the area in the second image that needs fill light; determining, according to the second target area, a second exposure period of a second target photosensitive chip row in a subsequent frame, where the second target photosensitive chip row refers to the chip row in the photosensitive chip used to generate the image content in the second target area; and, when exposing the photosensitive chip in the subsequent frame, instructing the infrared light source to perform fill light according to the second exposure period.
  • In this way, adjusting the fill-light period of the photosensitive chip in the subsequent frame based on the newly detected target area can improve the quality of the images captured by the camera module.
  • In some implementations, the method further includes: determining, according to the first exposure period, a third exposure period of the first target photosensitive chip row in a subsequent frame; and, when exposing the photosensitive chip in the subsequent frame, instructing the infrared light source to perform fill light according to the third exposure period.
  • the supplementary light period (for example, the third exposure period) of the subsequent frame can be conveniently determined based on the first exposure period.
  • the first exposure period or the third exposure period in the embodiments of the present application may be expressed in relative time or in absolute time.
  • When expressed in relative time, the first exposure period may refer to the time period whose start time is T0+T1 and whose end time is T0+T2; that is, the interval between the start of the first exposure period and the exposure start time T0 of the current frame is T1, and the interval between the end of the first exposure period and the exposure start time of the current frame is T2.
  • Correspondingly, the third exposure period may refer to the time period whose start time is T3+T1 and whose end time is T3+T2; that is, the interval between the start of the third exposure period and the exposure start time T3 of the subsequent frame is T1, and the interval between the end of the third exposure period and the exposure start time of the subsequent frame is T2.
  • When expressed in absolute time, the first exposure period may refer to the time period with start time T4 and end time T5, and the third exposure period may refer to the time period with start time T6 and end time T7.
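  • As a small illustration of the relative representation, the same relative window can be reused across frames by adding each frame's exposure start time. The following Python sketch is illustrative only; the class name, field names, and numeric values are assumptions, not part of this application:

```python
from dataclasses import dataclass

@dataclass
class FillWindow:
    """Fill-light period stored relative to a frame's exposure start."""
    start_offset: float  # T1: seconds from frame exposure start to fill-light start
    end_offset: float    # T2: seconds from frame exposure start to fill-light end

    def absolute(self, frame_start: float) -> tuple:
        """Convert to absolute times for a frame whose exposure starts at
        frame_start (T0 for the current frame, T3 for a subsequent frame)."""
        return frame_start + self.start_offset, frame_start + self.end_offset

window = FillWindow(start_offset=0.002, end_offset=0.011)
t1, t2 = window.absolute(frame_start=0.0)     # current frame: [T0+T1, T0+T2]
u1, u2 = window.absolute(frame_start=1 / 30)  # subsequent frame: [T3+T1, T3+T2]
```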
  • In a second aspect, a device for controlling the fill-light time of a camera module is provided, comprising: a first determining unit configured to determine a first target area in a first image captured by the camera before the current frame, where the first target area is an area in the first image that needs fill light; a second determining unit configured to determine, according to the first target area, the first exposure period of the first target photosensitive chip row in the current frame, where the first target photosensitive chip row refers to the chip row in the photosensitive chip used to generate the image content in the first target area; and an instructing unit configured to instruct the infrared light source to perform fill light according to the first exposure period when the photosensitive chip is exposed in the current frame.
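  • The division of labour among the three units can be sketched as follows. All class, method, and parameter names below are hypothetical; the detector, row-mapping, and light-source interfaces are injected rather than specified by this application:

```python
class FillLightController:
    """Illustrative sketch of the device in the second aspect."""

    def __init__(self, detect_area, rows_for_area, period_for_rows, ir_source):
        self.detect_area = detect_area          # used by the first determining unit
        self.rows_for_area = rows_for_area      # pixel rows -> chip rows
        self.period_for_rows = period_for_rows  # chip rows -> exposure period
        self.ir_source = ir_source              # driven by the instructing unit
        self.first_exposure_period = None

    def on_first_image(self, first_image):
        """First and second determining units: derive the fill-light period."""
        area = self.detect_area(first_image)     # first target area
        rows = self.rows_for_area(area)          # first target chip rows
        self.first_exposure_period = self.period_for_rows(rows)

    def on_current_frame_exposure(self):
        """Instructing unit: fill light only during the first exposure period."""
        if self.first_exposure_period is not None:
            self.ir_source.pulse(*self.first_exposure_period)
```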
  • the camera module may include a camera, and the camera may include a photosensitive chip.
  • In this way, the first exposure period of the first target photosensitive chip row in the current frame is determined according to the area in the first image that needs fill light, and when the photosensitive chip is exposed in the current frame, the infrared light source is instructed to perform fill light according to the first exposure period. This can reduce the working time of the infrared light source and thereby reduce the heat generation of the infrared camera module.
  • The method for controlling the fill-light time of the camera module in the embodiments of the present application does not add or change hardware modules (or units) in the camera module; instead, the fill-light period of the current frame (for example, the first exposure period) is determined for the photosensitive chip according to the first target area, and fill light is performed only within that period when the current frame exposes the photosensitive chip, which reduces the working time of the infrared light source. Therefore, the heat generation of the infrared camera module can be reduced without increasing cost.
  • The method for controlling the fill-light time of the camera module in the embodiments of the present application also does not increase the size of the camera module, which helps keep the camera module compact.
  • the camera may further include an infrared light source and a rolling shutter.
  • the infrared light source in the embodiments of the present application may be either a built-in infrared light source of the camera, or an independent external infrared light source, which is not limited in the embodiments of the present application.
  • the first determining unit is specifically configured to: determine the first target area in the first image according to a preset target object.
  • In this way, the fill-light time of the photosensitive chip in the current frame can be flexibly controlled according to the preset target object.
  • the first target area is a face area in the first image.
  • the photosensitive chip includes a plurality of photosensitive chip rows, and the plurality of pixel rows in the first image correspond to the plurality of photosensitive chip rows;
  • the second determining unit is specifically configured to: determine the first target photosensitive chip row corresponding to the pixel rows in the first target area of the first image; and determine the first exposure period of the first target photosensitive chip row in the current frame.
  • Since the pixel rows in the first image correspond to the photosensitive chip rows, the first target photosensitive chip row corresponding to the pixel rows in the first target area can be conveniently determined, which in turn makes it convenient to determine the first exposure period of the first target photosensitive chip row in the current frame.
  • In some implementations, the first determining unit is further configured to determine a second target area in the second image acquired in the current frame, where the second target area is the area in the second image that needs fill light; the second determining unit is further configured to determine, according to the second target area, the second exposure period of the second target photosensitive chip row in a subsequent frame, where the second target photosensitive chip row refers to the chip row in the photosensitive chip used to generate the image content in the second target area; and the instructing unit is further configured to instruct the infrared light source to perform fill light according to the second exposure period when exposing the photosensitive chip in the subsequent frame.
  • In this way, adjusting the fill-light period of the photosensitive chip in the subsequent frame based on the newly detected target area can improve the quality of the images captured by the camera module.
  • In some implementations, the instructing unit is further configured to: determine, according to the first exposure period, a third exposure period of the first target photosensitive chip row in a subsequent frame; and, when exposing the photosensitive chip in subsequent frames, instruct the infrared light source to perform fill light according to the third exposure period.
  • the supplementary light period (for example, the third exposure period) of the subsequent frame can be conveniently determined based on the first exposure period.
  • the first exposure period or the third exposure period in the embodiments of the present application may be expressed in relative time or in absolute time.
  • When expressed in relative time, the first exposure period may refer to the time period whose start time is T0+T1 and whose end time is T0+T2; that is, the interval between the start of the first exposure period and the exposure start time T0 of the current frame is T1, and the interval between the end of the first exposure period and the exposure start time of the current frame is T2.
  • Correspondingly, the third exposure period may refer to the time period whose start time is T3+T1 and whose end time is T3+T2; that is, the interval between the start of the third exposure period and the exposure start time T3 of the subsequent frame is T1, and the interval between the end of the third exposure period and the exposure start time of the subsequent frame is T2.
  • When expressed in absolute time, the first exposure period may refer to the time period with start time T4 and end time T5, and the third exposure period may refer to the time period with start time T6 and end time T7.
  • In a third aspect, a camera module is provided, including a storage medium and a central processing unit. The storage medium may be a non-volatile storage medium in which a computer-executable program is stored, and the central processing unit is connected to the non-volatile storage medium and executes the computer-executable program to implement the method in the first aspect or any possible implementation of the first aspect.
  • In a fourth aspect, a chip is provided, including a processor and a data interface, where the processor reads, through the data interface, instructions stored in a memory to execute the method in the first aspect or any possible implementation of the first aspect.
  • Optionally, the chip may further include a memory in which instructions are stored; the processor is configured to execute the instructions stored in the memory, and when the instructions are executed, the processor is configured to perform the method in the first aspect or any possible implementation of the first aspect.
  • In a fifth aspect, a computer-readable storage medium is provided, which stores program code for execution by a device, the program code including instructions for performing the method in the first aspect or any possible implementation of the first aspect.
  • In a sixth aspect, an automobile is provided, which includes the device for controlling the fill-light time of a camera module according to the second aspect, or the camera module according to the third aspect.
  • According to the foregoing solutions, the first exposure period of the first target photosensitive chip row in the current frame is determined according to the area in the first image that needs fill light, and when the photosensitive chip is exposed in the current frame, the infrared light source is instructed to perform fill light according to the first exposure period, which can reduce the working time of the infrared light source and thereby reduce the heat generation of the infrared camera module.
  • FIG. 1 is a schematic structural diagram of an automatic driving vehicle according to an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of an automatic driving system according to an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a camera module to which an embodiment of the present application is applied.
  • FIG. 4 is a schematic block diagram of a method for controlling the fill-light time of a camera module provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a photosensitive chip according to an embodiment of the present application.
  • FIG. 6 is a schematic block diagram of determining the fill-light period of a current frame according to an embodiment of the present application.
  • FIG. 7 is a schematic block diagram of a method for controlling the fill-light time of a camera module provided by another embodiment of the present application.
  • FIG. 8 is a schematic block diagram of a method for calculating the photosensitive chip rows corresponding to a face frame provided by an embodiment of the present application.
  • FIG. 9 is a schematic block diagram of a method for controlling the fill-light time of a camera module provided by another embodiment of the present application.
  • FIG. 10 is a schematic block diagram of a method for calculating the photosensitive chip rows corresponding to a human body frame provided by an embodiment of the present application.
  • FIG. 11 is a schematic block diagram of an apparatus for controlling the fill-light time of a camera module according to an embodiment of the present application.
  • FIG. 12 is a schematic block diagram of an apparatus for controlling the fill-light time of a camera module provided by another embodiment of the present application.
  • the vehicle may be a diesel locomotive, an intelligent electric vehicle, or a hybrid vehicle, or the vehicle may be a vehicle of another power type, which is not limited in this embodiment of the present application.
  • the vehicle in the embodiment of the present application may be an automatic driving vehicle.
  • the automatic driving vehicle may be configured with an automatic driving mode, and the automatic driving mode may be a fully automatic driving mode, or may also be a partial automatic driving mode.
  • the embodiment is not limited to this.
  • Optionally, the vehicle in this embodiment of the present application may also be configured with other driving modes, which may include one or more of a variety of driving modes such as a sport mode, an economy mode, a standard mode, an off-road mode, a snow mode, and a hill-climbing mode.
  • The automatic driving vehicle can switch between the automatic driving mode and the above-mentioned driving modes (in which the driver drives the vehicle), which is not limited in the embodiments of the present application.
  • FIG. 1 is a functional block diagram of a vehicle 100 provided by an embodiment of the present application.
  • the vehicle 100 is configured in a fully or partially autonomous driving mode.
  • The vehicle 100 can control itself while in the autonomous driving mode, and can, without human manipulation, determine the current state of the vehicle and its surrounding environment, determine the likely behavior of at least one other vehicle in that environment, determine a confidence level corresponding to the likelihood that the other vehicle will perform that behavior, and control the vehicle 100 based on the determined information.
  • the vehicle 100 may be set to operate without human interaction.
  • Vehicle 100 may include various subsystems, such as travel system 102 , sensor system 104 , control system 106 , one or more peripherals 108 and power supply 110 , computer system 112 , and user interface 116 .
  • vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple elements. Additionally, each of the subsystems and elements of the vehicle 100 may be interconnected by wire or wirelessly.
  • the travel system 102 may include components that provide powered motion for the vehicle 100 .
  • the travel system 102 may include an engine 118, an energy source 119, a transmission 120, and wheels/tires 121.
  • Engine 118 may be an internal combustion engine, an electric motor, an air-compression engine, or a combination of engine types, such as a hybrid engine consisting of a gasoline engine and an electric motor, or a hybrid engine consisting of an internal combustion engine and an air-compression engine.
  • Engine 118 converts energy source 119 into mechanical energy.
  • Examples of energy sources 119 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity.
  • the energy source 119 may also provide energy to other systems of the vehicle 100 .
  • Transmission 120 may transmit mechanical power from engine 118 to wheels 121 .
  • Transmission 120 may include a gearbox, a differential, and a driveshaft.
  • transmission 120 may also include other devices, such as clutches.
  • the drive shaft may include one or more axles that may be coupled to one or more wheels 121 .
  • the sensor system 104 may include several sensors that sense information about the environment surrounding the vehicle 100 .
  • the sensor system 104 may include a positioning system 122 (which may be a GPS system, a Beidou system or other positioning system), an inertial measurement unit (IMU) 124, a radar 126, a laser rangefinder 128, and camera 130.
  • the sensor system 104 may also include sensors that monitor internal systems of the vehicle 100 (eg, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, orientation, velocity, etc.). Such detection and identification is a critical function for the safe operation of the autonomous vehicle 100.
  • the positioning system 122 may be used to estimate the geographic location of the vehicle 100 .
  • the IMU 124 is used to sense position and orientation changes of the vehicle 100 based on inertial acceleration.
  • IMU 124 may be a combination of an accelerometer and a gyroscope.
  • Radar 126 may utilize radio signals to sense objects within the surrounding environment of vehicle 100 . In some embodiments, in addition to sensing objects, radar 126 may be used to sense the speed and/or heading of objects.
  • the laser rangefinder 128 may utilize laser light to sense objects in the environment in which the vehicle 100 is located.
  • the laser rangefinder 128 may include one or more laser sources, laser scanners, and one or more detectors, among other system components.
  • Camera 130 may be used to capture multiple images of the surrounding environment of vehicle 100 .
  • Camera 130 may be a still camera or a video camera.
  • the camera 130 may also include an infrared camera or other cameras.
  • In this embodiment of the present application, the camera 130 may include a cockpit monitoring camera in a camera monitor system (CMS) and a driver monitoring camera in a driver monitor system (DMS).
  • Control system 106 controls the operation of the vehicle 100 and its components.
  • Control system 106 may include various elements including steering system 132 , throttle 134 , braking unit 136 , sensor fusion algorithms 138 , computer vision system 140 , route control system 142 , and obstacle avoidance system 144 .
  • the steering system 132 is operable to adjust the heading of the vehicle 100 .
  • it may be a steering wheel system.
  • the throttle 134 is used to control the operating speed of the engine 118 and thus the speed of the vehicle 100 .
  • the braking unit 136 is used to control the deceleration of the vehicle 100 .
  • the braking unit 136 may use friction to slow the wheels 121 .
  • the braking unit 136 may convert the kinetic energy of the wheels 121 into electrical current.
  • the braking unit 136 may also take other forms to slow the wheels 121 to control the speed of the vehicle 100 .
  • Computer vision system 140 may be operable to process and analyze images captured by camera 130 in order to identify objects and/or features in the environment surrounding vehicle 100 .
  • the objects and/or features may include traffic signals, road boundaries and obstacles.
  • Computer vision system 140 may use object recognition algorithms, Structure from Motion (SFM) algorithms, video tracking, and other computer vision techniques.
  • the computer vision system 140 may be used to map the environment, track objects, estimate the speed of objects, and the like.
  • the route control system 142 is used to determine the travel route of the vehicle 100 .
  • the route control system 142 may combine data from the sensor fusion algorithm 138, the positioning system 122 (GPS), and one or more predetermined maps to determine a driving route for the vehicle 100.
  • the obstacle avoidance system 144 is used to identify, evaluate, and avoid or otherwise traverse potential obstacles in the environment of the vehicle 100 .
  • Of course, the control system 106 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
  • Peripherals 108 may include a wireless communication system 146 , an onboard computer 148 , a microphone 150 and/or a speaker 152 .
  • peripherals 108 provide a means for a user of vehicle 100 to interact with user interface 116 .
  • the onboard computer 148 may provide information to the user of the vehicle 100 .
  • User interface 116 may also operate on-board computer 148 to receive user input.
  • the onboard computer 148 can be operated via a touch screen.
  • peripheral devices 108 may provide a means for vehicle 100 to communicate with other devices located within the vehicle.
  • microphone 150 may receive audio (eg, voice commands or other audio input) from a user of vehicle 100 .
  • speakers 152 may output audio to a user of vehicle 100 .
  • Wireless communication system 146 may wirelessly communicate with one or more devices, either directly or via a communication network.
  • wireless communication system 146 may use 3G cellular communication, such as CDMA, EVDO, or GSM/GPRS; 4G cellular communication, such as LTE; or 5G cellular communication.
  • the wireless communication system 146 may communicate with a wireless local area network (WLAN) using WiFi.
  • the wireless communication system 146 may communicate directly with the device using an infrared link, Bluetooth, or ZigBee.
  • Other wireless protocols may also be used, such as various vehicle communication systems; for example, the wireless communication system 146 may include one or more dedicated short-range communications (DSRC) devices, which may include public and/or private data communication between vehicles and/or roadside stations.
  • the power supply 110 may provide power to various components of the vehicle 100 .
  • the power source 110 may be a rechargeable lithium-ion or lead-acid battery.
  • One or more battery packs of such a battery may be configured as a power source to provide power to various components of the vehicle 100 .
  • power source 110 and energy source 119 may be implemented together, such as in some all-electric vehicles.
  • Computer system 112 may include at least one processor 113 that executes instructions 115 stored in a non-transitory computer-readable medium such as data storage device 114 .
  • Computer system 112 may also be multiple computing devices that control individual components or subsystems of vehicle 100 in a distributed fashion.
  • the processor 113 may be any conventional processor, such as a commercially available CPU. Alternatively, the processor may be a dedicated device such as an ASIC or other hardware-based processor.
  • Although FIG. 1 functionally illustrates the processor, the memory, and other elements of the computer 110 in the same block, one of ordinary skill in the art will understand that the processor, computer, or memory may actually comprise multiple processors, computers, or memories that may or may not be housed within the same physical enclosure.
  • For example, the memory may be a hard drive or other storage medium located in an enclosure different from that of the computer 110.
  • reference to a processor or computer will be understood to include reference to a collection of processors or computers or memories that may or may not operate in parallel.
  • some components such as the steering and deceleration components may each have their own processor that only performs computations related to component-specific functions .
  • a processor may be located remotely from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein are performed on a processor disposed within the vehicle while others are performed by a remote processor, including taking steps necessary to perform a single maneuver.
  • data storage 114 may include instructions 115 (eg, program logic) executable by processor 113 to perform various functions of vehicle 100 , including those described above.
  • Data storage device 114 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the travel system 102, the sensor system 104, the control system 106, and the peripherals 108.
  • the data storage device 114 may store data such as road maps, route information, the vehicle's position, direction, speed, and other such vehicle data, among other information. Such information may be used by the vehicle 100 and the computer system 112 during operation of the vehicle 100 in autonomous, semi-autonomous and/or manual modes.
  • a user interface 116 for providing information to or receiving information from a user of the vehicle 100 .
  • the user interface 116 may include one or more input/output devices within the set of peripheral devices 108 , such as a wireless communication system 146 , an onboard computer 148 , a microphone 150 and a speaker 152 .
  • Computer system 112 may control functions of vehicle 100 based on input received from various subsystems (eg, travel system 102 , sensor system 104 , and control system 106 ) and from user interface 116 .
  • computer system 112 may utilize input from control system 106 in order to control steering unit 132 to avoid obstacles detected by sensor system 104 and obstacle avoidance system 144 .
  • computer system 112 is operable to provide control of various aspects of vehicle 100 and its subsystems.
  • one or more of these components described above may be installed or associated with the vehicle 100 separately.
  • data storage device 114 may exist partially or completely separate from vehicle 100 .
  • the above-described components may be communicatively coupled together in a wired and/or wireless manner.
  • FIG. 1 should not be construed as a limitation on the embodiments of the present application.
  • An autonomous vehicle traveling on a road can recognize objects within its surroundings to determine adjustments to current speed.
  • the objects may be other vehicles, traffic control equipment, or other types of objects.
  • each identified object may be considered independently, and the object's respective characteristics, such as its current speed, acceleration, and distance from the vehicle, may be used to determine the speed to which the autonomous vehicle is to adjust.
  • Optionally, the vehicle 100, or a computing device associated with the vehicle 100, may predict the behavior of an identified object based on the characteristics of the identified object and the state of the surrounding environment (eg, traffic, rain, ice on the road, etc.).
  • each identified object is dependent on the behavior of the other, so it is also possible to predict the behavior of a single identified object by considering all identified objects together.
  • the vehicle 100 can adjust its speed based on the predicted behavior of the identified object.
  • the autonomous vehicle can determine what stable state the vehicle will need to adjust to (eg, accelerate, decelerate, or stop) based on the predicted behavior of the object.
  • other factors may also be considered to determine the speed of the vehicle 100, such as the lateral position of the vehicle 100 in the road being traveled, the curvature of the road, the proximity of static and dynamic objects, and the like.
  • the computing device may also provide instructions to modify the steering angle of the vehicle 100, so that the autonomous vehicle follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects in the vicinity of the autonomous vehicle (eg, cars in adjacent lanes on the road).
  • the above-mentioned vehicle 100 can be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, an amusement park vehicle, construction equipment, a tram, a golf cart, a train, a cart, or the like, which is not particularly limited in this embodiment of the present application.
  • FIG. 2 is a schematic diagram of an automatic driving system provided by an embodiment of the present application.
  • the automatic driving system shown in FIG. 2 includes a computer system 101 , wherein the computer system 101 includes a processor 103 , and the processor 103 is coupled with a system bus 105 .
  • the processor 103 may be one or more processors, each of which may include one or more processor cores.
  • a video adapter 107 which can drive a display 109, is coupled to the system bus 105.
  • System bus 105 is coupled to input-output (I/O) bus 113 through bus bridge 111 .
  • I/O interface 115 is coupled to the I/O bus. The I/O interface 115 communicates with various I/O devices, such as an input device 117 (eg, keyboard, mouse, touch screen, etc.), a media tray 121 (eg, CD-ROM, multimedia interface, etc.), a transceiver 123 (which can send and/or receive radio communication signals), a camera 155 (which can capture static and dynamic digital video images), and an external USB interface 125.
  • the interface connected to the I/O interface 115 may be a USB interface.
  • the processor 103 may be any conventional processor, including a reduced instruction set computing (reduced instruction set computer, RISC) processor, a complex instruction set computing (complex instruction set computer, CISC) processor, or a combination thereof.
  • the processor may be a dedicated device such as an application specific integrated circuit (ASIC).
  • the processor 103 may be a neural network processor or a combination of a neural network processor and the above-mentioned conventional processors.
  • computer system 101 may be located remotely from the autonomous vehicle (eg, computer system 101 may be located in a cloud or on a server) and may communicate wirelessly with the autonomous vehicle.
  • some of the processes described herein are performed on a processor disposed within the autonomous vehicle, others are performed by a remote processor, including taking actions required to perform a single maneuver.
  • Network interface 129 is a hardware network interface, such as a network card.
  • the network 127 may be an external network, such as the Internet, or an internal network, such as an Ethernet network or a virtual private network (VPN).
  • the network 127 may also be a wireless network, such as a WiFi network, a cellular network, and the like.
  • the hard disk drive interface is coupled to the system bus 105 .
  • the hard drive interface is connected to the hard drive.
  • System memory 135 is coupled to system bus 105 . Data running in system memory 135 may include operating system 137 and application programs 143 of computer 101 .
  • the operating system includes a shell 139 and a kernel 141.
  • the shell is an interface between the user and the kernel of the operating system.
  • the shell is the outermost layer of the operating system.
  • the shell manages the interaction between the user and the operating system: waiting for user input, interpreting user input to the operating system, and processing various operating system output.
  • Kernel 141 consists of those parts of the operating system that manage memory, files, peripherals, and system resources, and it interacts directly with the hardware. The kernel runs processes, provides inter-process communication, and handles CPU time-slice management, interrupts, memory management, I/O management, and more.
  • the application program 143 includes a program related to controlling the fill-light time of the camera module, for example, a program that determines the first target area in the first image captured by the camera module before the current frame, where the first target area is the area in the first image that needs fill light.
  • Application program 143 may also exist on the system of a software deployment server 149. In one embodiment, the computer system 101 may download the application program 143 from the software deployment server 149 when the application program 143 needs to be executed.
  • Sensor 153 is associated with computer system 101 .
  • the sensor 153 is used to detect the environment around the computer 101, or the sensor 153 may also be used to monitor the situation in the cabin of the autonomous vehicle.
  • the computer 101 is located on the autonomous vehicle.
  • In some embodiments, the sensor 153 may include a driver monitoring camera in a driver monitor system (DMS), which may be used for driver fatigue detection, face recognition, distraction detection, in-the-loop detection, and the like; alternatively, the sensor 153 may also include a cockpit monitoring camera in a camera monitor system (CMS), which may be used for behavior recognition, gesture recognition, and left-behind-object detection for the driver or other passengers in the cockpit.
  • In this embodiment of the present application, the application program 143 can detect the image collected by the sensor 153 to determine the area in the image that needs fill light, determine from that area the fill-light period of the photosensitive chip when the current frame is exposed, and control the infrared light source to perform fill light only within this period while the rolling shutter exposes the chip, thereby reducing the heat generation of the infrared camera module.
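  • Taken together, the flow just described amounts to a per-frame control loop. The sketch below is illustrative only; capture_frame, schedule_pulse, on_for_whole_exposure, and the detector are assumed interfaces, not APIs defined by this application:

```python
def run_fill_light_loop(camera, ir_led, detect_fill_area, rows_for, window_for):
    """Detect the area needing fill light in each captured image, then pulse
    the IR source only while the matching sensor rows are exposed in the
    next frame; fall back to full-frame fill while nothing is detected."""
    window = None  # unknown before the first detection
    while True:
        if window is None:
            ir_led.on_for_whole_exposure()  # light the whole frame
        else:
            ir_led.schedule_pulse(start=window[0], end=window[1])
        image = camera.capture_frame()
        area = detect_fill_area(image)      # e.g. a face or body box
        if area is not None:
            rows = rows_for(area, image)    # target photosensitive chip rows
            window = window_for(rows)       # exposure period of those rows
        else:
            window = None                   # full-frame fill for the next frame
```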
  • FIG. 3 is a schematic structural diagram of a camera module 300 to which the embodiments of the present application are applied. It should be understood that the camera module 300 shown in FIG. 3 is only an example and not a limitation; the camera module 300 may include more or fewer components, which is not limited in this embodiment of the present application.
  • the camera module 300 can be used as a driver monitoring camera in a driver monitor system (DMS) to perform fatigue detection, face recognition, distraction detection, in-the-loop detection, and the like on the driver of the autonomous vehicle; alternatively, the camera module 300 can also be used as a cockpit monitoring camera in a camera monitor system (CMS) to perform behavior recognition, gesture recognition, and left-behind-object detection for the driver or other passengers in the cockpit.
  • the camera module 300 may include a lens 301, a photosensitive chip 302, an image signal processor (ISP) 303, a central processing unit (CPU)/neural-network processing unit (NPU) 304, an infrared (IR) light source 305, and a light source controller 306.
  • the lens 301 may include a rolling shutter
  • the photosensitive chip 302 may be a complementary metal-oxide-semiconductor (CMOS)
  • the ISP 303 may be a stacked ISP (stack ISP) integrated on the CMOS sensor; that is, the ISP 303 can be integrated on the photosensitive chip 302, or the ISP 303 can be an independent (discrete) ISP
  • the infrared light source 305 can be a light-emitting diode (LED), or the infrared light source 305 can be a vertical-external-cavity surface-emitting laser (VECSEL).
  • When taking an image, a conventional infrared camera uses an infrared light source as the light source, and exposes each photosensitive chip row in the photosensitive chip row by row through a rolling shutter. As shown in FIG. 5, the exposure periods of the photosensitive chip rows differ, and a complete exposure is finished only when all the photosensitive chip rows in the photosensitive chip have been exposed.
  • the infrared light source refers to a light source with a wavelength of 780-1400 nanometers (nm), which is invisible to the human eye.
  • the infrared light source and the infrared camera can make the shooting unaffected by visible light, and can shoot normally during the day or night.
  • the infrared light source is always working, that is, the exposure duration of the photosensitive chip (in a certain frame) is equal to the working duration of the infrared light source.
  • the infrared light source generates heat during operation. Therefore, the long-term operation of the infrared light source will lead to high heat generation of the camera module.
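  • To make the potential saving concrete, here is a back-of-the-envelope comparison under assumed numbers (960 rows, 10 ms per-row exposure, 30 µs row-to-row offset, a face spanning chip rows 400 to 520); none of these figures come from this application:

```python
rows_total = 960
T = 10e-3   # per-row exposure time in seconds (assumed)
t = 30e-6   # start-time offset between adjacent rows in seconds (assumed)

full_frame_on = T + (rows_total - 1) * t   # IR on for the whole rolling exposure
n, m = 400, 520                            # chip rows covering the face (assumed)
windowed_on = T + (m - n) * t              # IR on only while rows n..m expose

print(f"full-frame fill: {full_frame_on * 1e3:.1f} ms")             # ~38.8 ms
print(f"windowed fill:   {windowed_on * 1e3:.1f} ms")               # ~13.6 ms
print(f"on-time reduced by {1 - windowed_on / full_frame_on:.0%}")  # ~65%
```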
  • In view of this, the present application proposes a method for controlling the fill-light time of a camera module: determine, according to the area in the first image that needs fill light, the first exposure period of the first target photosensitive chip row in the current frame, and, when exposing the photosensitive chip in the current frame, perform fill light only in the first exposure period. This can reduce the working time of the infrared light source and thereby reduce the heat generation of the infrared camera module.
  • FIG. 4 is a schematic flowchart of a method 400 for controlling the light-filling time of a camera module provided by an embodiment of the present application.
  • the method 400 shown in FIG. 4 may include steps 410, 420, and 430. It should be understood that the method 400 shown in FIG. 4 is only an example and not a limitation; the method 400 may include more or fewer steps, which is not limited in this embodiment of the present application. These steps are described in detail below.
  • the method 400 shown in FIG. 4 may be performed by the camera module in the camera 130 in the vehicle 100 in FIG. 1, or the method 400 may also be performed by the camera 155 in the automatic driving system in FIG. 2 or the camera module in the sensor 153.
  • the camera module in the method 400 may include a photosensitive chip, an infrared light source and a rolling shutter.
  • the camera module in the method 400 may be as shown in the camera module 300 in FIG. 3 .
  • S410 Determine the first target area in the first image captured by the camera before the current frame.
  • the first target area may be an area in the first image that needs to be supplemented with light.
  • the content contained in the area that needs to be supplemented with light in the first image may be related to the purpose of the camera module.
  • the first target area in the first image may be determined according to a preset target object.
  • the camera module can be used as a driver monitoring camera in a driver monitor system (DMS) to perform fatigue detection, face recognition, distraction detection, and in-the-loop detection on drivers of autonomous vehicles etc.
  • the preset target object may refer to the face area in the first image.
  • the camera module can also be used as a cockpit monitoring camera in a camera monitor system (CMS) to perform behavior recognition, gesture recognition, left-behind-object detection, and the like for the driver or other passengers in the cockpit.
  • the preset target object may refer to the human body area in the first image.
  • the camera can also be used as an exterior camera to detect and identify other vehicles around the vehicle, and the like.
  • the preset target object may refer to a vehicle (eg, other vehicles around the vehicle) area in the first image.
  • a deep learning algorithm or a Haar operator may be used to detect the first image to determine an area in the first image that needs to be supplemented with light, that is, the first target area.
  • a neural network model may be used to determine a face region (or a human body region) in the first image, that is, the first target region in the first image that needs to be supplemented with light.
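  • As one concrete, purely illustrative possibility, OpenCV's bundled Haar cascade can stand in for the detector and yield the row span of a face; the application itself does not prescribe any particular library or model:

```python
import cv2

def face_row_span(gray_image):
    """Return (top_row, bottom_row) of the largest detected face in a
    grayscale image, or None if detection fails."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest box
    return y, y + h  # pixel rows a (upper boundary) and b (lower boundary)
```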
  • S420 Determine the first exposure period of the first target photosensitive chip row in the current frame according to the first target area.
  • the first target photosensitive chip row may refer to a chip row in the photosensitive chip for generating the image content in the first target area.
  • the photosensitive chip may include multiple photosensitive chip rows
  • the first image may include multiple pixel rows
  • the multiple pixel rows in the first image may correspond to the multiple photosensitive chip rows.
  • the photosensitive chip may include 960 photosensitive chip rows
  • the first image may include 960 pixel rows
  • the 960 pixel rows included in the first image may be in one-to-one correspondence with the 960 photosensitive chip rows included in the photosensitive chip.
  • FIG. 6 shows the case where the number of photosensitive chip rows in the photosensitive chip is equal to the number of pixel rows in the first image, that is, the case where the image generated after the photosensitive chip is exposed has not undergone scaling transformation.
  • However, the embodiments of the present application do not limit the number of photosensitive chip rows in the photosensitive chip or the number of pixel rows in the first image; that is to say, the embodiments of the present application neither require that the number of photosensitive chip rows equal the number of pixel rows in the first image, nor require that the pixel rows in the first image correspond one-to-one to the photosensitive chip rows.
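  • When the image has been scaled relative to the sensor, the pixel-row-to-chip-row correspondence can be treated as a simple proportion. The following sketch assumes that proportional mapping, which is an illustrative choice rather than one mandated by this application:

```python
def chip_rows_for_pixel_rows(a, b, image_rows, chip_rows):
    """Map pixel rows [a, b] of the image to photosensitive chip rows [n, m].
    When image_rows == chip_rows this is the identity (the FIG. 6 case);
    otherwise a proportional mapping is assumed."""
    scale = chip_rows / image_rows
    n = int(a * scale)
    m = min(chip_rows - 1, int(b * scale))
    return n, m

# 960-row image on a 960-row sensor: one-to-one correspondence
assert chip_rows_for_pixel_rows(400, 520, 960, 960) == (400, 520)
```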
  • Determining the first exposure period of the first target photosensitive chip row in the current frame according to the first target area may include the following.
  • First, determine the pixel row a at the upper boundary of the first target area in the first image and the pixel row b at the lower boundary of the first target area in the first image; then determine the photosensitive chip row n corresponding to pixel row a and the photosensitive chip row m corresponding to pixel row b. The chip rows from row n to row m are the first target photosensitive chip rows.
  • Then, the exposure period of the first target photosensitive chip rows in the current frame may be calculated.
  • Suppose the first photosensitive chip row starts to be exposed at time T0, the exposure duration of each photosensitive chip row is T, and the time difference between the start of exposure of two adjacent photosensitive chip rows is t.
  • The fill light of the infrared light source can then start at the moment T1 when the nth photosensitive chip row starts to be exposed, and end at the moment T2 when the mth photosensitive chip row finishes its exposure; under this model, T1 = T0 + (n-1)*t and T2 = T0 + (m-1)*t + T.
  • The exposure period [T1, T2] of the first target photosensitive chip rows in the current frame is the first exposure period.
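  • The timing relations above translate directly into code. The formulas T1 = T0 + (n-1)*t and T2 = T0 + (m-1)*t + T are the natural reading of the rolling-shutter model described here (1-indexed rows) and are stated as an assumption:

```python
def fill_window(n, m, T0, T, t):
    """Exposure period [T1, T2] of chip rows n..m (1-indexed) under a
    rolling shutter in which row k starts exposing at T0 + (k - 1) * t
    and exposes for T seconds. The IR source only needs to be on here."""
    T1 = T0 + (n - 1) * t      # row n starts exposing: fill light on
    T2 = T0 + (m - 1) * t + T  # row m finishes exposing: fill light off
    return T1, T2

T1, T2 = fill_window(n=400, m=520, T0=0.0, T=10e-3, t=30e-6)
# T1 ~= 12.0 ms, T2 ~= 25.6 ms with these assumed numbers
```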
  • In the embodiment of the present application, the first exposure period of the first target photosensitive chip rows in the current frame is determined according to the area in the first image that needs fill light, and when the photosensitive chip is exposed in the current frame, the infrared light source is instructed to perform fill light according to the first exposure period. This can reduce the working time of the infrared light source and thereby reduce the heat generation of the infrared camera module.
  • The method for controlling the fill-light time of the camera module in the embodiment of the present application does not add or change hardware modules (or units) in the camera module; instead, the fill-light period of the current frame (for example, the first exposure period) is determined for the photosensitive chip according to the first target area, and fill light is performed only within that period when the current frame exposes the photosensitive chip, which reduces the working time of the infrared light source. Therefore, the heat generation of the infrared camera module can be reduced without increasing cost.
  • The method also does not increase the size of the camera module, which helps keep the camera module compact.
  • Optionally, the method 400 may further include step 432: determine, according to the first exposure period, a third exposure period of the first target photosensitive chip row in a subsequent frame, and, when exposing the photosensitive chip in the subsequent frame, instruct the infrared light source to perform fill light according to the third exposure period.
  • In this way, the infrared light source is controlled to perform fill light only during the exposure period of the first target photosensitive chip rows.
  • The fill-light period of the subsequent frame (for example, the third exposure period) can thus be conveniently determined based on the first exposure period.
  • the method 400 may further include steps 434 , 436 and 438 .
  • S434 Determine a second target area in the second image acquired in the current frame, where the second target area is an area in the second image that needs to be supplemented with light.
  • S436 Determine, according to the second target area, the second exposure period of the second target photosensitive chip row in the subsequent frame, where the second target photosensitive chip row refers to the chip row in the photosensitive chip used to generate the image content in the second target area.
  • S438 When exposing the photosensitive chip in the subsequent frame, instruct the infrared light source to perform fill light according to the second exposure period.
  • In this way, adjusting the fill-light period of the photosensitive chip in the subsequent frame based on the newly detected target area can improve the quality of the images captured by the camera module.
  • FIG. 7 is a schematic flowchart of a method 700 for controlling the light-filling time of a camera module provided by an embodiment of the present application.
  • the method 700 shown in FIG. 7 may be performed by the camera module in the camera 130 in the vehicle 100 in FIG. 1, or the method 700 may also be performed by the camera 155 in the automatic driving system in FIG. 2 or the camera module in the sensor 153.
  • the camera module in the method 700 may include a photosensitive chip, an infrared light source and a rolling shutter.
  • the camera module in the method 700 may be as shown in the camera module 300 in FIG. 3 .
  • the camera module in the method 700 can be used in a vehicle-mounted driver monitoring camera.
  • the driver monitoring camera can be placed behind the steering wheel or at the position of the A-pillar of the car, and is used for face recognition, fatigue detection, distraction detection, and in-the-loop detection of the driver.
  • the method 700 shown in FIG. 7 may include steps 710 to 790. It should be understood that the method 700 shown in FIG. 7 is only an example and not a limitation; the method 700 may include more or fewer steps, which is not limited here. These steps are described in detail below.
  • S710: During the exposure of the photosensitive chip, the infrared light source performs fill light for the entire exposure to obtain a first image.
  • for example, when capturing the first frame of each second, or when the camera module enters a reset state, the light source controller can control the infrared light source to fill light throughout the exposure of the photosensitive chip, ensuring that all areas in the first image are well illuminated.
  • the first image may be sent to the CPU/NPU, and correspondingly, the CPU/NPU may use a deep learning algorithm or a Haar operator to perform face detection on the first image.
  • if the face frame in the first image is not detected (for example, the face area is occluded or there is no face area in the first image), face detection fails, and S710 is executed. That is, when the next frame of image is taken, the infrared light source fills light during the entire exposure period of the photosensitive chip to obtain an image with good illumination in all areas, and face detection is performed again on the obtained image.
  • a core area in the first image may be determined, and a photosensitive chip row corresponding to the core area in the first image may be calculated.
  • a margin may be considered when determining the core region in the first image.
  • a threshold may be preset, and a margin may be set when determining the core area in the first image, so that the distance between the face frame and the dark area (that is, the area in the first image that does not need fill light) is greater than or equal to the preset threshold.
  • as shown in part A of FIG. 8, the distance between the upper boundary of the face frame and the dark area is greater than or equal to the preset threshold, and at the same time, the distance between the lower boundary of the face frame and the dark area is greater than or equal to the preset threshold; a sketch of this margin computation follows.
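A sketch of applying the margin described above: the band of lit rows is grown so that the face frame stays at least `threshold` rows away from the dark (unlit) area. The row-based formulation, names, and example numbers are assumptions for illustration.

```python
def core_region_rows(face_top, face_bottom, threshold, image_rows):
    """Return the pixel-row span of the core area, including the margin."""
    top = max(0, face_top - threshold)                     # upper edge of lit band
    bottom = min(image_rows - 1, face_bottom + threshold)  # lower edge of lit band
    return top, bottom

print(core_region_rows(380, 560, 40, 960))  # (340, 600)
```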
  • the core area is equivalent to the first target area in the method 400 in FIG. 4 .
  • the exposure period corresponding to these photosensitive chip rows, that is, the first fill-light period, is calculated.
  • the determined supplementary light period may be the first supplementary light period or the second supplementary light period.
  • if the face frame in the second image is not detected, face detection fails and belongs to failure type A, and S780 is executed to determine whether the number of failed face detections is less than 10.
  • if detection succeeds, the first fill-light period can be maintained; that is, in the process of capturing images in subsequent frames, the infrared light source is controlled to continue performing fill light within the first fill-light period corresponding to the photosensitive chip rows.
  • if the face frame is still not detected and the number of face detection failures is less than 10, the infrared light source is likewise controlled to continue performing fill light within the first fill-light period corresponding to the photosensitive chip rows; otherwise, when the number of face detection failures is greater than or equal to 10 (or when the first fill-light period is periodically reset every second), S710 is performed.
  • Perform face detection on the second image to obtain the face frame of the second image, calculate the photosensitive chip rows corresponding to that face frame, and calculate the second fill-light period corresponding to those photosensitive chip rows.
  • S750 is then executed, and in the process of capturing images in subsequent frames, the infrared light source is controlled to perform supplementary light within the second supplementary light period corresponding to the photosensitive chip row.
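A compact sketch of the overall control flow of method 700 (S710 to S790). The detector and the capture/window helpers are hypothetical stand-ins; the 10-failure limit and the per-second reset come from the text above.

```python
MAX_FAILURES = 10

def monitor_loop(detector):
    """detector(image) returns a detection box or None (hypothetical helper)."""
    window, failures = None, 0                  # None: IR on for the whole exposure
    while True:
        image = capture_frame(ir_window=window)             # S710 / S750
        box = detector(image)                               # S720 / S760
        if box is None:                                     # failure type A
            failures += 1
            if failures >= MAX_FAILURES:                    # S780 -> S710: reset
                window, failures = None, 0
        elif window is not None and too_close_to_dark_area(box, window):
            window = window_for_box(box)                    # failure type B -> S790
            failures = 0
        else:                                               # detection succeeded
            if window is None:
                window = window_for_box(box)                # S730 / S740: first window
            failures = 0                                    # S770: keep current window
```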
  • FIG. 9 is a schematic flowchart of a method 900 for controlling the light-filling time of a camera module provided by an embodiment of the present application.
  • the method 900 shown in FIG. 9 may be performed by the camera module in the camera 130 of the vehicle 100 in FIG. 1, or the method 900 may be performed by the camera 155 in the automatic driving system in FIG. 2 or by the camera module in the sensor 153.
  • the camera module in the method 900 may include a photosensitive chip, an infrared light source and a rolling shutter.
  • the camera module in the method 900 may be as shown in the camera module 300 in FIG. 3 .
  • the camera module in method 900 can be used for a vehicle-mounted cockpit monitoring camera.
  • the cockpit monitoring camera can be placed under the interior rear-view mirror of the vehicle to perform behavior recognition, gesture recognition, and left-behind object detection for the driver and front passenger in the cockpit.
  • the method 900 shown in FIG. 9 may include steps 910 to 990. It should be understood that the method 900 shown in FIG. 9 is only an example and not a limitation; the method 900 may include more or fewer steps, which is not limited here. These steps are described in detail below.
  • S910: During the exposure of the photosensitive chip, the infrared light source performs fill light for the entire exposure to obtain a first image.
  • for example, when capturing the first frame of each second, or when the camera module enters a reset state, the light source controller can control the infrared light source to fill light throughout the exposure of the photosensitive chip, ensuring that all areas in the first image are well illuminated.
  • the first image may be sent to the CPU/NPU, and correspondingly, the CPU/NPU may use a deep learning algorithm or a Haar operator to perform human body detection on the first image.
  • if the human body frame in the first image is not detected (for example, as shown in part B in FIG. 10, the human body area in the first image is occluded or there is no human body area in the first image), human body detection fails, and S910 is executed. That is, when the next frame of image is captured, the infrared light source fills light during the entire exposure period of the photosensitive chip to obtain an image with good illumination in all areas, and human body detection is performed again on the obtained image.
  • a core area in the first image may be determined, and a photosensitive chip row corresponding to the core area in the first image may be calculated.
  • a margin may be considered when determining the core region in the first image.
  • a threshold may be preset, and a margin may be set when determining the core area in the first image, so that the distance between the human body frame and the dark area (that is, the area in the first image that does not need fill light) is greater than or equal to the preset threshold.
  • as shown in part A of FIG. 10, the distance between the upper boundary of the human body frame and the dark area is greater than or equal to the preset threshold, and at the same time, the distance between the lower boundary of the human body frame and the dark area is greater than or equal to the preset threshold.
  • the core area is equivalent to the first target area in the method 400 in FIG. 4 .
  • the exposure period corresponding to these photosensitive chip rows, that is, the first fill-light period, is calculated.
  • the determined supplementary light period may be the first supplementary light period or the second supplementary light period.
  • if the human body frame in the second image is not detected, human body detection fails, and S980 is executed to determine whether the number of human body detection failures is less than 10.
  • if detection succeeds, the first fill-light period can be maintained; that is, in the process of capturing images in subsequent frames, the infrared light source is controlled to continue performing fill light within the first fill-light period corresponding to the photosensitive chip rows.
  • if the human body frame is still not detected and the number of human body detection failures is less than 10, the infrared light source is likewise controlled to continue performing fill light within the first fill-light period corresponding to the photosensitive chip rows; otherwise, when the number of human body detection failures is greater than or equal to 10 (or when the first fill-light period is periodically reset every second), S910 is performed.
  • S950 is then executed, and in the process of capturing images in subsequent frames, the infrared light source is controlled to perform supplementary light within the second supplementary light period corresponding to the photosensitive chip row.
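Method 900 follows the same control flow as method 700, with the face detector swapped for a body detector. Reusing the hypothetical monitor_loop sketched after method 700 (detect_face and detect_body are assumed stand-in detectors):

```python
monitor_loop(detector=detect_face)   # driver monitoring camera, FIG. 7
monitor_loop(detector=detect_body)   # cockpit monitoring camera, FIG. 9
```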
  • FIG. 11 is a schematic block diagram of an apparatus 1100 for controlling the light-filling time of a camera module provided by an embodiment of the present application. It should be understood that the apparatus 1100 shown in FIG. 11 is only an example, and the apparatus 1100 in the embodiment of the present application may further include other modules or units. It should be understood that the apparatus 1100 is capable of executing the steps in the methods of FIG. 4, FIG. 7, and FIG. 9; to avoid repetition, details are not described here.
  • the camera module may include a camera, and the camera may include a photosensitive chip, an infrared light source and a rolling shutter.
  • the camera module may be as shown in the camera module 300 in FIG. 3 .
  • the device 1100 for controlling the light-filling time of the camera module may include:
  • a first determining unit 1110 configured to determine a first target area in a first image captured by the camera before the current frame, where the first target area is an area in the first image that needs to be supplemented with light;
  • the second determining unit 1120 is configured to determine, according to the first target area, the first exposure period of the first target photosensitive chip row in the current frame, where the first target photosensitive chip row refers to the chip rows, in the photosensitive chip, used to generate the image content in the first target area;
  • the instructing unit 1130 is configured to instruct the infrared light source to perform supplementary light according to the first exposure period when the photosensitive chip is exposed in the current frame.
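A sketch of how the three units of apparatus 1100 could map onto code. The class and method names are illustrative assumptions; the patent defines functional units, not a programming API.

```python
class FillLightController:
    def determine_target_area(self, first_image):        # first determining unit 1110
        """Return the area of `first_image` that needs fill light (e.g. a face box)."""
        ...

    def determine_exposure_period(self, target_area):    # second determining unit 1120
        """Map the target area to the exposure period of the target chip rows."""
        ...

    def instruct_light_source(self, exposure_period):    # instructing unit 1130
        """Tell the IR light source to fill light only within `exposure_period`."""
        ...
```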
  • the first determining unit and the second determining unit may be the same module or unit, which is not limited in this embodiment of the present application.
  • the first determining unit is specifically configured to: determine the first target area in the first image according to a preset target object.
  • the first target area is a face area in the first image.
  • the photosensitive chip includes a plurality of photosensitive chip rows, and the plurality of pixel rows in the first image correspond to the plurality of photosensitive chip rows; the second determining unit 1120 is specifically configured to: determine the first target photosensitive chip row corresponding to the pixel rows, in the first image, that are located in the first target area; and determine the first exposure period of the first target photosensitive chip row in the current frame.
  • the first determining unit 1110 is further configured to determine a second target area in the second image obtained in the current frame, where the second target area is an area in the second image that needs fill light;
  • the second determining unit 1120 is further configured to determine, according to the second target area, the second exposure period of the second target photosensitive chip row in a subsequent frame, where the second target photosensitive chip row refers to the chip rows, in the photosensitive chip, used to generate the image content in the second target area;
  • the instructing unit 1130 is further configured to: when the photosensitive chip is exposed in the subsequent frame, instruct the infrared light source to perform fill light according to the second exposure period.
  • the instructing unit 1130 is further configured to: determine, according to the first exposure period, the third exposure period of the first target photosensitive chip row in a subsequent frame; and when the photosensitive chip is exposed in the subsequent frame, instruct the infrared light source to perform fill light according to the third exposure period.
  • the apparatus 1100 for controlling the light-filling time of the camera module here is embodied in the form of functional modules.
  • the term “module” here can be implemented in the form of software and/or hardware, which is not specifically limited.
  • a “module” may be a software program, a hardware circuit, or a combination of the two that implements the above-mentioned functions.
  • the hardware circuit may include an application specific integrated circuit (ASIC), an electronic circuit, a processor for executing one or more software or firmware programs (for example, a shared processor, a dedicated processor, or a group processor) and memory, merged logic circuits, and/or other suitable components that support the described functions.
  • the apparatus 1100 for controlling the light-filling time of a camera module may be a camera module in an automatic driving system, or a camera module configured in a vehicle, or an in-vehicle machine (or processor) in an autonomous vehicle, or a chip configured in the in-vehicle machine, for executing the method described in the embodiments of the present application.
  • FIG. 12 is a schematic block diagram of an apparatus 800 for controlling the light-filling time of a camera module according to an embodiment of the present application.
  • the apparatus 800 shown in FIG. 12 includes a memory 801 , a processor 802 , a communication interface 803 and a bus 804 .
  • the memory 801 , the processor 802 , and the communication interface 803 are connected to each other through the bus 804 for communication.
  • the memory 801 may be a read only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM).
  • the memory 801 can store a program, and when the program stored in the memory 801 is executed by the processor 802, the processor 802 is configured to execute the steps of the method for controlling the light-filling time of the camera module in the embodiments of the present application, for example, the steps of the embodiments shown in FIG. 4, FIG. 7, and FIG. 9.
  • the processor 802 can be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, configured to execute related programs to implement the method for controlling the light-filling time of the camera module in the method embodiments of the present application.
  • the processor 802 may also be an integrated circuit chip with signal processing capability.
  • each step of the method for controlling the light-filling time of the camera module according to the embodiment of the present application may be completed by an integrated logic circuit of hardware in the processor 802 or an instruction in the form of software.
  • the above-mentioned processor 802 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the methods disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory 801, and the processor 802 reads the information in the memory 801 and, in combination with its hardware, completes the functions to be performed by the units included in the apparatus for controlling the light-filling time of the camera module in the embodiments of the present application, or executes the method for controlling the light-filling time of the camera module in the method embodiments of the present application, for example, the steps/functions of the embodiments shown in FIG. 4, FIG. 7, and FIG. 9.
  • the communication interface 803 may use, but is not limited to, a transceiver-type apparatus to implement communication between the apparatus 800 and other devices or a communication network.
  • the bus 804 may include a pathway for communicating information between the various components of the apparatus 800 (eg, the memory 801, the processor 802, the communication interface 803).
  • the apparatus 800 shown in this embodiment of the present application may be a camera module in an automatic driving system, or a camera module configured in a vehicle, or an in-vehicle machine (or processor) in an autonomous vehicle, or a chip configured in the in-vehicle machine, so as to execute the method described in the embodiments of the present application.
  • the processor in the embodiment of the present application may be a central processing unit (central processing unit, CPU), and the processor may also be other general-purpose processors, digital signal processors (digital signal processors, DSP), application-specific integrated circuits (application specific integrated circuit, ASIC), off-the-shelf programmable gate array (field programmable gate array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory in the embodiments of the present application may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory may be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically programmable Erase programmable read-only memory (electrically EPROM, EEPROM) or flash memory.
  • Volatile memory may be random access memory (RAM), which acts as an external cache. By way of example but not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct rambus random access memory (DR RAM).
  • the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • the above-described embodiments may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions or computer programs. When the computer instructions or computer programs are loaded or executed on a computer, all or part of the processes or functions described in the embodiments of the present application are generated.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless (for example, infrared, microwave) manner.
  • the computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server, a data center, or the like containing one or more sets of available media.
  • the usable media may be magnetic media (eg, floppy disks, hard disks, magnetic tapes), optical media (eg, DVDs), or semiconductor media.
  • the semiconductor medium may be a solid state drive.
  • “At least one” means one or more, and “a plurality of” means two or more.
  • “At least one of the following items” or a similar expression refers to any combination of these items, including a single item or any combination of a plurality of items.
  • for example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may be singular or plural.
  • the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation processes of the embodiments of the present application.
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division; in actual implementation, there may be another division manner. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • the technical solution of the present application essentially, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Traffic Control Systems (AREA)
  • Studio Devices (AREA)

Abstract

This application provides a method and an apparatus for controlling the fill-light time of a camera module, which can be applied to intelligent vehicles in the autonomous driving field. The method includes: determining a first target area in a first image captured by the camera before the current frame, where the first target area is an area in the first image that needs fill light; determining, according to the first target area, a first exposure period of a first target photosensitive chip row in the current frame, where the first target photosensitive chip row refers to the chip rows, in the photosensitive chip, used to generate the image content in the first target area; and when the photosensitive chip is exposed in the current frame, instructing an infrared light source to perform fill light according to the first exposure period. The method in the embodiments of this application can reduce the working time of the infrared light source and thereby the heat generated by the infrared camera module.

Description

Method and apparatus for controlling the fill-light time of a camera module
This application claims priority to Chinese Patent Application No. 202011020997.9, filed with the China National Intellectual Property Administration on September 25, 2020 and entitled "Method and apparatus for controlling the fill-light time of a camera module", which is incorporated herein by reference in its entirety.
Technical field
This application relates to the autonomous driving field, and more specifically, to a method and an apparatus for controlling the fill-light time of a camera module.
Background
Artificial intelligence (AI) is a theory, method, technology, and application system that uses digital computers or machines controlled by digital computers to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain optimal results. In other words, artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of intelligent machines, so that the machines can perceive, reason, and make decisions. Research in the artificial intelligence field includes robotics, natural language processing, computer vision, decision-making and reasoning, human-computer interaction, recommendation and search, basic AI theory, and the like.
Autonomous driving is a mainstream application in the artificial intelligence field. Autonomous driving technology relies on the collaboration of computer vision, radar, monitoring apparatuses, global positioning systems, and the like, so that a motor vehicle can drive itself without active human operation. An autonomous vehicle uses various computing systems to help transport passengers or goods from one location to another. Some autonomous vehicles may require initial or continuous input from an operator (such as a pilot, a driver, or a passenger), and allow the operator to switch from a manual operation mode to an autonomous driving mode or a mode in between. Because autonomous driving does not require a human to drive the vehicle, it can in theory effectively avoid human driving errors, reduce traffic accidents, and improve road transport efficiency. Autonomous driving technology is therefore receiving more and more attention.
As the number of vehicles grows, the safety of autonomous driving also receives increasing attention. Current autonomous driving technology cannot yet achieve fully unmanned driving, and a monitoring camera can be used to monitor the cockpit in real time to improve safety. For example, an infrared (IR) camera can be used to perform fatigue detection on the driver, and behavior recognition, gesture recognition, and left-behind object detection on the driver or other passengers in the cockpit.
An infrared camera is not affected by visible light and works normally both in the daytime and at night. When a conventional infrared camera performs exposure, it uses an infrared light source as the light source and exposes the photosensitive chip rows of the photosensitive chip row by row through a rolling shutter until all the rows have been exposed, which completes one full exposure. However, such an infrared camera module generates a large amount of heat.
Summary
This application provides a method and an apparatus for controlling the fill-light time of a camera module, which can reduce the heat generated by an infrared camera module.
According to a first aspect, a method for controlling the fill-light time of a camera module is provided, and the method includes:
determining a first target area in a first image captured by the camera module before the current frame, where the first target area is an area in the first image that needs fill light; determining, according to the first target area, a first exposure period of a first target photosensitive chip row in the current frame, where the first target photosensitive chip row refers to the chip rows, in the photosensitive chip, used to generate the image content in the first target area; and when the photosensitive chip is exposed in the current frame, instructing an infrared light source to perform fill light according to the first exposure period.
The camera module may include a camera, and the camera may include a photosensitive chip.
In this embodiment of this application, the first exposure period of the first target photosensitive chip row in the current frame is determined according to the area in the first image that needs fill light, and when the photosensitive chip is exposed in the current frame, the infrared light source is instructed to perform fill light according to the first exposure period. This can reduce the working time of the infrared light source and thereby the heat generated by the infrared camera module.
In addition, the method for controlling the fill-light time of a camera module in the embodiments of this application does not add or change hardware modules (or units) in the camera module; instead, the fill-light period of the photosensitive chip in the current frame (for example, the first exposure period) is determined according to the first target area, and fill light is performed only within that period when the photosensitive chip is exposed in the current frame, reducing the working time of the infrared light source. The heat generated by the infrared camera module can therefore be reduced without increasing cost.
Further, because no hardware module (or unit) in the camera module is added or changed, the method does not increase the size of the camera module either, which facilitates configuring and using the camera module inside a vehicle.
Optionally, the camera may further include the infrared light source and a rolling shutter.
It should be noted that the infrared light source in the embodiments of this application may be an infrared light source built into the camera, or may be an independent external infrared light source; this is not limited in the embodiments of this application.
With reference to the first aspect, in some implementations of the first aspect, the determining a first target area in a first image captured by the camera before the current frame includes: determining the first target area in the first image according to a preset target object.
In this embodiment of this application, the fill-light time of the photosensitive chip in the current frame can be flexibly controlled according to the preset target object.
With reference to the first aspect, in some implementations of the first aspect, the first target area is a face area in the first image.
With reference to the first aspect, in some implementations of the first aspect, the photosensitive chip includes a plurality of photosensitive chip rows, and a plurality of pixel rows in the first image correspond to the plurality of photosensitive chip rows, where the determining, according to the first target area, a first exposure period of a first target photosensitive chip row in the current frame includes: determining the first target photosensitive chip row corresponding to the pixel rows, in the first image, that are located in the first target area; and determining the first exposure period of the first target photosensitive chip row in the current frame.
In this embodiment of this application, the plurality of pixel rows in the first image correspond to the plurality of photosensitive chip rows, so the first target photosensitive chip row can be conveniently determined from the pixel rows, in the first image, that are located in the first target area, which makes it easy to determine the first exposure period of the first target photosensitive chip row in the current frame.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: determining a second target area in a second image obtained in the current frame, where the second target area is an area in the second image that needs fill light; determining, according to the second target area, a second exposure period of a second target photosensitive chip row in a subsequent frame, where the second target photosensitive chip row refers to the chip rows, in the photosensitive chip, used to generate the image content in the second target area; and when the photosensitive chip is exposed in the subsequent frame, instructing the infrared light source to perform fill light according to the second exposure period.
In this embodiment of this application, adjusting the fill-light period of the photosensitive chip in the subsequent frame based on the second target area in the second image obtained in the current frame can improve the quality of the images captured by the camera module.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: determining, according to the first exposure period, a third exposure period of the first target photosensitive chip row in a subsequent frame; and when the photosensitive chip is exposed in the subsequent frame, instructing the infrared light source to perform fill light according to the third exposure period.
In this embodiment of this application, the fill-light period of the subsequent frame (for example, the third exposure period) can be conveniently determined based on the first exposure period.
It should be noted that the first exposure period or the third exposure period in the embodiments of this application may be expressed either as relative time or as absolute time.
When an exposure period is expressed as relative time, for example, assuming that the exposure of the current frame starts at time T0, the first exposure period may refer to the period that starts at T0+T1 and ends at T0+T2; that is, the interval between the start of the first exposure period and the exposure start time of the current frame is T1, and the interval between its end and the exposure start time of the current frame is T2.
Similarly, assuming that the exposure of the subsequent frame starts at time T3, the third exposure period may refer to the period that starts at T3+T1 and ends at T3+T2; that is, the interval between the start of the third exposure period and the exposure start time of the subsequent frame is T1, and the interval between its end and the exposure start time of the subsequent frame is T2.
When an exposure period is expressed as absolute time, for example, assuming that the exposure of the current frame starts at time T0, the first exposure period may refer to the period that starts at T4 and ends at T5; in this case, the interval between the start of the first exposure period and the exposure start time of the current frame is T4-T0=T1, and the interval between its end and the exposure start time of the current frame is T5-T0=T2.
Similarly, assuming that the exposure of the subsequent frame starts at time T3, the third exposure period may refer to the period that starts at T6 and ends at T7; that is, the interval between the start of the third exposure period and the exposure start time of the subsequent frame is T6-T3=T1, and the interval between its end and the exposure start time of the subsequent frame is T7-T3=T2.
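A small sketch of the two representations described above: a period stored relative to the frame's exposure start transfers directly to a subsequent frame, while an absolute period must be shifted by the new frame's start time. The function and variable names are assumptions for illustration.

```python
def to_absolute(frame_start, relative_period):
    t1, t2 = relative_period            # offsets T1, T2 from the exposure start
    return frame_start + t1, frame_start + t2

rel = (0.009, 0.031)                    # reusable across frames
print(to_absolute(0.000, rel))          # current frame starting at T0 = 0.000 s
print(to_absolute(0.040, rel))          # subsequent frame starting at T3 = 0.040 s
```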
According to a second aspect, an apparatus for controlling the fill-light time of a camera module is provided, and the apparatus includes:
a first determining unit, configured to determine a first target area in a first image captured by the camera before the current frame, where the first target area is an area in the first image that needs fill light; a second determining unit, configured to determine, according to the first target area, a first exposure period of a first target photosensitive chip row in the current frame, where the first target photosensitive chip row refers to the chip rows, in the photosensitive chip, used to generate the image content in the first target area; and an instructing unit, configured to: when the photosensitive chip is exposed in the current frame, instruct an infrared light source to perform fill light according to the first exposure period.
The camera module may include a camera, and the camera may include a photosensitive chip.
In this embodiment of this application, the first exposure period of the first target photosensitive chip row in the current frame is determined according to the area in the first image that needs fill light, and when the photosensitive chip is exposed in the current frame, the infrared light source is instructed to perform fill light according to the first exposure period. This can reduce the working time of the infrared light source and thereby the heat generated by the infrared camera module.
In addition, the method for controlling the fill-light time of a camera module in the embodiments of this application does not add or change hardware modules (or units) in the camera module; instead, the fill-light period of the photosensitive chip in the current frame (for example, the first exposure period) is determined according to the first target area, and fill light is performed only within that period when the photosensitive chip is exposed in the current frame, reducing the working time of the infrared light source. The heat generated by the infrared camera module can therefore be reduced without increasing cost.
Further, because no hardware module (or unit) in the camera module is added or changed, the method does not increase the size of the camera module either, which facilitates configuring and using the camera module inside a vehicle.
Optionally, the camera may further include the infrared light source and a rolling shutter.
It should be noted that the infrared light source in the embodiments of this application may be an infrared light source built into the camera, or may be an independent external infrared light source; this is not limited in the embodiments of this application.
With reference to the second aspect, in some implementations of the second aspect, the first determining unit is specifically configured to determine the first target area in the first image according to a preset target object.
In this embodiment of this application, the fill-light time of the photosensitive chip in the current frame can be flexibly controlled according to the preset target object.
With reference to the second aspect, in some implementations of the second aspect, the first target area is a face area in the first image.
With reference to the second aspect, in some implementations of the second aspect, the photosensitive chip includes a plurality of photosensitive chip rows, and a plurality of pixel rows in the first image correspond to the plurality of photosensitive chip rows, where the second determining unit is specifically configured to: determine the first target photosensitive chip row corresponding to the pixel rows, in the first image, that are located in the first target area; and determine the first exposure period of the first target photosensitive chip row in the current frame.
In this embodiment of this application, the plurality of pixel rows in the first image correspond to the plurality of photosensitive chip rows, so the first target photosensitive chip row can be conveniently determined from the pixel rows, in the first image, that are located in the first target area, which makes it easy to determine the first exposure period of the first target photosensitive chip row in the current frame.
With reference to the second aspect, in some implementations of the second aspect, the first determining unit is further configured to determine a second target area in a second image obtained in the current frame, where the second target area is an area in the second image that needs fill light; the second determining unit is further configured to determine, according to the second target area, a second exposure period of a second target photosensitive chip row in a subsequent frame, where the second target photosensitive chip row refers to the chip rows, in the photosensitive chip, used to generate the image content in the second target area; and the instructing unit is further configured to: when the photosensitive chip is exposed in the subsequent frame, instruct the infrared light source to perform fill light according to the second exposure period.
In this embodiment of this application, adjusting the fill-light period of the photosensitive chip in the subsequent frame based on the second target area in the second image obtained in the current frame can improve the quality of the images captured by the camera module.
With reference to the second aspect, in some implementations of the second aspect, the instructing unit is further configured to: determine, according to the first exposure period, a third exposure period of the first target photosensitive chip row in a subsequent frame; and when the photosensitive chip is exposed in the subsequent frame, instruct the infrared light source to perform fill light according to the third exposure period.
In this embodiment of this application, the fill-light period of the subsequent frame (for example, the third exposure period) can be conveniently determined based on the first exposure period.
It should be noted that the first exposure period or the third exposure period in the embodiments of this application may be expressed either as relative time or as absolute time.
When an exposure period is expressed as relative time, for example, assuming that the exposure of the current frame starts at time T0, the first exposure period may refer to the period that starts at T0+T1 and ends at T0+T2; that is, the interval between the start of the first exposure period and the exposure start time of the current frame is T1, and the interval between its end and the exposure start time of the current frame is T2.
Similarly, assuming that the exposure of the subsequent frame starts at time T3, the third exposure period may refer to the period that starts at T3+T1 and ends at T3+T2; that is, the interval between the start of the third exposure period and the exposure start time of the subsequent frame is T1, and the interval between its end and the exposure start time of the subsequent frame is T2.
When an exposure period is expressed as absolute time, for example, assuming that the exposure of the current frame starts at time T0, the first exposure period may refer to the period that starts at T4 and ends at T5; in this case, the interval between the start of the first exposure period and the exposure start time of the current frame is T4-T0=T1, and the interval between its end and the exposure start time of the current frame is T5-T0=T2.
Similarly, assuming that the exposure of the subsequent frame starts at time T3, the third exposure period may refer to the period that starts at T6 and ends at T7; that is, the interval between the start of the third exposure period and the exposure start time of the subsequent frame is T6-T3=T1, and the interval between its end and the exposure start time of the subsequent frame is T7-T3=T2.
According to a third aspect, a camera module is provided. The camera module includes a storage medium and a central processing unit. The storage medium may be a non-volatile storage medium in which a computer-executable program is stored, and the central processing unit is connected to the non-volatile storage medium and executes the computer-executable program to implement the method in the first aspect or any possible implementation of the first aspect.
According to a fourth aspect, a chip is provided. The chip includes a processor and a data interface, and the processor reads, through the data interface, instructions stored in a memory to perform the method in the first aspect or any possible implementation of the first aspect.
Optionally, as an implementation, the chip may further include the memory, the memory stores instructions, and the processor is configured to execute the instructions stored in the memory; when the instructions are executed, the processor is configured to perform the method in the first aspect or any possible implementation of the first aspect.
According to a fifth aspect, a computer-readable storage medium is provided. The computer-readable medium stores program code to be executed by a device, and the program code includes instructions for performing the method in the first aspect or any possible implementation of the first aspect.
According to a sixth aspect, an automobile is provided. The automobile includes the apparatus for controlling the fill-light time of a camera module in the second aspect or the camera module in the third aspect.
In the embodiments of this application, the first exposure period of the first target photosensitive chip row in the current frame is determined according to the area in the first image that needs fill light, and when the photosensitive chip is exposed in the current frame, the infrared light source is instructed to perform fill light according to the first exposure period. This can reduce the working time of the infrared light source and thereby the heat generated by the infrared camera module.
Brief description of drawings
FIG. 1 is a schematic diagram of the structure of an autonomous vehicle according to an embodiment of this application.
FIG. 2 is a schematic diagram of the structure of an autonomous driving system according to an embodiment of this application.
FIG. 3 is a schematic diagram of the structure of a camera module to which an embodiment of this application is applicable.
FIG. 4 is a schematic block diagram of a method for controlling the fill-light time of a camera module according to an embodiment of this application.
FIG. 5 is a schematic diagram of the structure of a photosensitive chip according to an embodiment of this application.
FIG. 6 is a schematic block diagram of determining the fill-light period of the current frame according to an embodiment of this application.
FIG. 7 is a schematic block diagram of a method for controlling the fill-light time of a camera module according to another embodiment of this application.
FIG. 8 is a schematic block diagram of a method for calculating the photosensitive chip rows corresponding to a face frame according to an embodiment of this application.
FIG. 9 is a schematic block diagram of a method for controlling the fill-light time of a camera module according to another embodiment of this application.
FIG. 10 is a schematic block diagram of a method for calculating the photosensitive chip rows corresponding to a human body frame according to an embodiment of this application.
FIG. 11 is a schematic block diagram of an apparatus for controlling the fill-light time of a camera module according to an embodiment of this application.
FIG. 12 is a schematic block diagram of an apparatus for controlling the fill-light time of a camera module according to another embodiment of this application.
Detailed description
The following describes the technical solutions of this application with reference to the accompanying drawings.
The technical solutions of the embodiments of this application can be applied to various vehicles. The vehicle may specifically be an internal combustion engine vehicle, an intelligent electric vehicle, or a hybrid vehicle, or may be a vehicle of another power type; this is not limited in the embodiments of this application.
The vehicle in the embodiments of this application may be an autonomous vehicle. For example, the autonomous vehicle may be configured with an autonomous driving mode, which may be a fully autonomous driving mode or a partially autonomous driving mode; this is not limited in the embodiments of this application.
The vehicle in the embodiments of this application may also be configured with other driving modes, which may include one or more of a sport mode, an economy mode, a standard mode, an off-road mode, a snow mode, a hill-climbing mode, and the like. The autonomous vehicle can switch between the autonomous driving mode and the above (driver-operated) driving modes; this is not limited in the embodiments of this application.
FIG. 1 is a functional block diagram of a vehicle 100 according to an embodiment of this application.
In an embodiment, the vehicle 100 is configured in a fully or partially autonomous driving mode.
For example, the vehicle 100 can control itself while in the autonomous driving mode, and may determine, through human operation, the current state of the vehicle and its surroundings, determine a possible behavior of at least one other vehicle in the surroundings, determine a confidence level corresponding to the likelihood that the other vehicle performs the possible behavior, and control the vehicle 100 based on the determined information. When the vehicle 100 is in the autonomous driving mode, the vehicle 100 may be set to operate without interacting with a person.
The vehicle 100 may include various subsystems, such as a travel system 102, a sensor system 104, a control system 106, one or more peripheral devices 108, a power supply 110, a computer system 112, and a user interface 116.
Optionally, the vehicle 100 may include more or fewer subsystems, and each subsystem may include a plurality of elements. In addition, the subsystems and elements of the vehicle 100 may be interconnected in a wired or wireless manner.
The travel system 102 may include components that provide powered motion for the vehicle 100. In an embodiment, the propulsion system 102 may include an engine 118, an energy source 119, a transmission 120, and wheels/tires 121. The engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or a combination of other engine types, for example, a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air compression engine. The engine 118 converts the energy source 119 into mechanical energy.
Examples of the energy source 119 include gasoline, diesel, other petroleum-based fuels, propane, other compressed-gas-based fuels, ethanol, solar panels, batteries, and other power sources. The energy source 119 may also provide energy for other systems of the vehicle 100.
The transmission 120 may transmit mechanical power from the engine 118 to the wheels 121. The transmission 120 may include a gearbox, a differential, and a drive shaft.
In an embodiment, the transmission 120 may further include other components, such as a clutch. The drive shaft may include one or more axles that can be coupled to one or more of the wheels 121.
The sensor system 104 may include several sensors that sense information about the environment around the vehicle 100.
For example, the sensor system 104 may include a positioning system 122 (which may be a GPS system, a BeiDou system, or another positioning system), an inertial measurement unit (IMU) 124, radar 126, a laser rangefinder 128, and a camera 130. The sensor system 104 may also include sensors that monitor internal systems of the vehicle 100 (for example, an in-vehicle air quality monitor, a fuel gauge, and an oil temperature gauge). Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, direction, speed, and the like). Such detection and recognition are key functions for the safe operation of the autonomous vehicle 100.
The positioning system 122 may be used to estimate the geographic location of the vehicle 100. The IMU 124 is used to sense changes in the position and orientation of the vehicle 100 based on inertial acceleration. In an embodiment, the IMU 124 may be a combination of an accelerometer and a gyroscope.
The radar 126 may use radio signals to sense objects in the surroundings of the vehicle 100. In some embodiments, in addition to sensing objects, the radar 126 may also be used to sense the speed and/or heading of the objects.
The laser rangefinder 128 may use lasers to sense objects in the environment in which the vehicle 100 is located. In some embodiments, the laser rangefinder 128 may include one or more laser sources, a laser scanner, one or more detectors, and other system components.
The camera 130 may be used to capture multiple images of the surroundings of the vehicle 100. The camera 130 may be a still camera or a video camera. The camera 130 may also include an infrared camera or other cameras; for example, the camera 130 may include a cockpit monitoring camera in a camera monitor system (CMS) and a driver monitoring camera in a driver monitor system (DMS).
The control system 106 controls the operation of the vehicle 100 and its components. The control system 106 may include various elements, including a steering system 132, a throttle 134, a braking unit 136, a sensor fusion algorithm 138, a computer vision system 140, a route control system 142, and an obstacle avoidance system 144.
The steering system 132 is operable to adjust the heading of the vehicle 100. For example, in an embodiment, it may be a steering wheel system.
The throttle 134 is used to control the operating speed of the engine 118 and thereby the speed of the vehicle 100.
The braking unit 136 is used to control the vehicle 100 to decelerate. The braking unit 136 may use friction to slow the wheels 121. In other embodiments, the braking unit 136 may convert the kinetic energy of the wheels 121 into electric current. The braking unit 136 may also take other forms to slow the rotation of the wheels 121 and thereby control the speed of the vehicle 100.
The computer vision system 140 is operable to process and analyze the images captured by the camera 130 in order to recognize objects and/or features in the surroundings of the vehicle 100. The objects and/or features may include traffic signals, road boundaries, and obstacles. The computer vision system 140 may use object recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision techniques. In some embodiments, the computer vision system 140 may be used to map the environment, track objects, estimate the speed of objects, and the like.
The route control system 142 is used to determine the travel route of the vehicle 100. In some embodiments, the route control system 142 may combine data from the sensors 138, the GPS 122, and one or more predetermined maps to determine the travel route for the vehicle 100.
The obstacle avoidance system 144 is used to identify, evaluate, and avoid or otherwise negotiate potential obstacles in the environment of the vehicle 100.
Of course, in an example, the control system 106 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
The vehicle 100 interacts with external sensors, other vehicles, other computer systems, or users through the peripheral devices 108. The peripheral devices 108 may include a wireless communication system 146, an on-board computer 148, a microphone 150, and/or a speaker 152.
In some embodiments, the peripheral devices 108 provide a means for a user of the vehicle 100 to interact with the user interface 116. For example, the on-board computer 148 may provide information to the user of the vehicle 100, and the user interface 116 may also operate the on-board computer 148 to receive user input. The on-board computer 148 may be operated through a touchscreen. In other cases, the peripheral devices 108 may provide a means for the vehicle 100 to communicate with other devices located in the vehicle. For example, the microphone 150 may receive audio (for example, voice commands or other audio input) from the user of the vehicle 100. Similarly, the speaker 152 may output audio to the user of the vehicle 100.
The wireless communication system 146 may communicate wirelessly with one or more devices directly or via a communication network. For example, the wireless communication system 146 may use 3G cellular communication such as CDMA, EVDO, or GSM/GPRS, 4G cellular communication such as LTE, or 5G cellular communication. The wireless communication system 146 may use WiFi to communicate with a wireless local area network (WLAN). In some embodiments, the wireless communication system 146 may communicate directly with devices using an infrared link, Bluetooth, or ZigBee. Other wireless protocols, such as various vehicle communication systems, may also be used; for example, the wireless communication system 146 may include one or more dedicated short range communications (DSRC) devices, which may include public and/or private data communication between vehicles and/or roadside stations.
The power supply 110 may provide power to various components of the vehicle 100. In an embodiment, the power supply 110 may be a rechargeable lithium-ion or lead-acid battery. One or more battery packs of such batteries may be configured as the power supply to provide power to various components of the vehicle 100. In some embodiments, the power supply 110 and the energy source 119 may be implemented together, as in some all-electric vehicles.
Some or all of the functions of the vehicle 100 are controlled by the computer system 112. The computer system 112 may include at least one processor 113, and the processor 113 executes instructions 115 stored in a non-transitory computer-readable medium such as a data storage apparatus 114. The computer system 112 may also be a plurality of computing devices that control individual components or subsystems of the vehicle 100 in a distributed manner.
The processor 113 may be any conventional processor, such as a commercially available CPU. Alternatively, the processor may be a dedicated device such as an ASIC or another hardware-based processor. Although FIG. 1 functionally illustrates the processor, the memory, and other elements of the computer 110 in the same block, a person of ordinary skill in the art should understand that the processor, computer, or memory may actually include a plurality of processors, computers, or memories that may or may not be located within the same physical housing. For example, the memory may be a hard disk drive or another storage medium located in a housing different from that of the computer 110. Therefore, a reference to the processor or computer is understood to include a reference to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described here, some components, such as the steering component and the deceleration component, may each have their own processor that performs only computation related to the component-specific function.
In various aspects described here, the processor may be located remotely from the vehicle and communicate wirelessly with the vehicle. In other aspects, some of the processes described here are executed on a processor arranged in the vehicle while others are executed by a remote processor, including taking the steps necessary to perform a single maneuver.
In some embodiments, the data storage apparatus 114 may include instructions 115 (for example, program logic), and the instructions 115 may be executed by the processor 113 to perform various functions of the vehicle 100, including the functions described above. The data storage apparatus 114 may also include additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the propulsion system 102, the sensor system 104, the control system 106, and the peripheral devices 108.
In addition to the instructions 115, the data storage apparatus 114 may also store data, such as road maps, route information, the position, direction, and speed of the vehicle, other such vehicle data, and other information. Such information may be used by the vehicle 100 and the computer system 112 during operation of the vehicle 100 in autonomous, semi-autonomous, and/or manual modes.
The user interface 116 is used to provide information to or receive information from a user of the vehicle 100. Optionally, the user interface 116 may include one or more input/output devices within the set of peripheral devices 108, such as the wireless communication system 146, the on-board computer 148, the microphone 150, and the speaker 152.
The computer system 112 may control the functions of the vehicle 100 based on input received from various subsystems (for example, the travel system 102, the sensor system 104, and the control system 106) and from the user interface 116. For example, the computer system 112 may use input from the control system 106 to control the steering unit 132 to avoid obstacles detected by the sensor system 104 and the obstacle avoidance system 144. In some embodiments, the computer system 112 is operable to provide control over many aspects of the vehicle 100 and its subsystems.
Optionally, one or more of these components may be installed separately from or associated with the vehicle 100. For example, the data storage apparatus 114 may exist partially or completely separate from the vehicle 100. The above components may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the above components are only an example. In actual applications, components in the above modules may be added or deleted according to actual needs, and FIG. 1 should not be construed as a limitation on the embodiments of this application.
An autonomous vehicle traveling on a road, such as the vehicle 100 above, can recognize objects in its surroundings to determine an adjustment to its current speed. The objects may be other vehicles, traffic control devices, or other types of objects. In some examples, each recognized object may be considered independently, and the respective characteristics of the object, such as its current speed, acceleration, and distance from the vehicle, may be used to determine the speed to which the autonomous vehicle is to adjust.
Optionally, the vehicle 100 or a computing device associated with the vehicle 100 (such as the computer system 112, the computer vision system 140, or the data storage apparatus 114 of FIG. 1) may predict the behavior of the recognized objects based on the characteristics of the recognized objects and the state of the surrounding environment (for example, traffic, rain, or ice on the road). Optionally, the recognized objects each depend on the behavior of the others, so all the recognized objects may also be considered together to predict the behavior of a single recognized object. The vehicle 100 can adjust its speed based on the predicted behavior of the recognized objects. In other words, the autonomous vehicle can determine, based on the predicted behavior of the objects, what stable state the vehicle will need to adjust to (for example, accelerate, decelerate, or stop). In this process, other factors may also be considered to determine the speed of the vehicle 100, such as the lateral position of the vehicle 100 on the road on which it travels, the curvature of the road, and the proximity of static and dynamic objects.
In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the vehicle 100, so that the autonomous vehicle follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects near the autonomous vehicle (for example, cars in adjacent lanes on the road).
The vehicle 100 may be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, an amusement park vehicle, construction equipment, a tram, a golf cart, a train, a handcart, or the like; this is not particularly limited in the embodiments of this application.
FIG. 2 is a schematic diagram of an autonomous driving system according to an embodiment of this application.
The autonomous driving system shown in FIG. 2 includes a computer system 101, where the computer system 101 includes a processor 103, and the processor 103 is coupled to a system bus 105. The processor 103 may be one or more processors, and each processor may include one or more processor cores. A video adapter 107 can drive a display 109, and the display 109 is coupled to the system bus 105. The system bus 105 is coupled to an input/output (I/O) bus 113 through a bus bridge 111. An I/O interface 115 is coupled to the I/O bus. The I/O interface 115 communicates with a variety of I/O devices, such as an input device 117 (for example, a keyboard, a mouse, or a touchscreen), a media tray 121 (for example, a CD-ROM or a multimedia interface), a transceiver 123 (which can send and/or receive radio communication signals), a camera 155 (which can capture static and dynamic digital video images), and an external USB interface 125. Optionally, the interface connected to the I/O interface 115 may be a USB interface.
The processor 103 may be any conventional processor, including a reduced instruction set computer (RISC) processor, a complex instruction set computer (CISC) processor, or a combination thereof. Optionally, the processor may be a dedicated apparatus such as an application specific integrated circuit (ASIC). Optionally, the processor 103 may be a neural network processor or a combination of a neural network processor and the above conventional processors.
Optionally, in the various embodiments described here, the computer system 101 may be located away from the autonomous vehicle (for example, in the cloud or on a server) and may communicate wirelessly with the autonomous vehicle. In other aspects, some of the processes described here are executed on a processor arranged in the autonomous vehicle, and others are executed by a remote processor, including taking the actions needed to perform a single maneuver.
The computer 101 may communicate with a software deploying server 149 through a network interface 129. The network interface 129 is a hardware network interface, such as a network interface card. The network 127 may be an external network, such as the Internet, or an internal network, such as Ethernet or a virtual private network (VPN). Optionally, the network 127 may also be a wireless network, such as a WiFi network or a cellular network.
A hard disk drive interface is coupled to the system bus 105 and connected to a hard disk drive. A system memory 135 is coupled to the system bus 105. Data running in the system memory 135 may include an operating system 137 and application programs 143 of the computer 101.
The operating system includes a shell 139 and a kernel 141. The shell is an interface between the user and the kernel of the operating system; it is the outermost layer of the operating system. The shell manages the interaction between the user and the operating system: it waits for user input, interprets the user input to the operating system, and processes the various output results of the operating system.
The kernel 141 consists of the parts of the operating system that manage memory, files, peripherals, and system resources. Interacting directly with hardware, the operating system kernel usually runs processes, provides inter-process communication, and provides CPU time slice management, interrupt handling, memory management, I/O management, and the like.
The application programs 143 include programs related to controlling the fill-light time of a camera module, for example: determining a first target area in a first image captured by the camera module before the current frame, where the first target area is an area in the first image that needs fill light; determining, according to the first target area, a first exposure period of a first target photosensitive chip row in the current frame, where the first target photosensitive chip row refers to the chip rows, in the photosensitive chip, used to generate the image content in the first target area; and when the photosensitive chip is exposed in the current frame, controlling, through the rolling shutter, the infrared light source to perform fill light only within the first exposure period.
The application programs 143 may also exist on the system of a software deploying server 149. In an embodiment, when the application programs 143 need to be executed, the computer system 101 may download the application programs 143 from the software deploying server 149.
A sensor 153 is associated with the computer system 101. The sensor 153 is used to detect the environment around the computer 101, or the sensor 153 may also be used to monitor the situation inside the cockpit of the autonomous vehicle. Optionally, the computer 101 is located on the autonomous vehicle.
For example, the sensor 153 may include a driver monitoring camera in a driver monitor system (DMS), which may be used to perform fatigue detection, face recognition, distraction detection, in-the-loop detection, and the like on the driver of the autonomous vehicle; or the sensor 153 may include a cockpit monitoring camera in a camera monitor system (CMS), which may be used to perform behavior recognition, gesture recognition, left-behind object detection, and the like on the driver or other passengers in the cockpit.
For example, the application programs 143 may detect an image collected by the sensor 153 to determine the area in the image that needs fill light, and determine, with reference to that area, the fill-light period of the photosensitive chip during the exposure of the current frame. Controlling, through the rolling shutter, the infrared light source to perform fill light within that period can then reduce the heat generated by the infrared camera module.
FIG. 3 is a schematic diagram of the architecture of a camera module 300 to which an embodiment of this application is applicable. It should be understood that the camera module 300 shown in FIG. 3 is only an example and not a limitation; the camera module 300 may include more or fewer components, and this is not limited in the embodiments of this application.
The camera module 300 may be used as the driver monitoring camera in a driver monitor system (DMS) to perform fatigue detection, face recognition, distraction detection, in-the-loop detection, and the like on the driver of the autonomous vehicle; or the camera module 300 may be used as the cockpit monitoring camera in a camera monitor system (CMS) to perform behavior recognition, gesture recognition, left-behind object detection, and the like on the driver or other passengers in the cockpit.
As shown in FIG. 3, the camera module 300 may include a lens 301, a photosensitive chip 302, an image signal processor (ISP) 303, a central processing unit (CPU)/neural processing unit (NPU) 304, an infrared (IR) light source 305, and a light source controller 306.
The lens 301 may include a rolling shutter; the photosensitive chip 302 may be a complementary metal-oxide-semiconductor (CMOS) sensor; the ISP 303 may be a stack ISP integrated on the CMOS sensor (for example, integrated on the photosensitive chip 302) or a discrete ISP; and the infrared light source 305 may be a light emitting diode (LED) or a vertical-external-cavity surface-emitting laser (VECSEL).
In the prior art, when a conventional infrared camera performs exposure, it uses an infrared light source as the light source and exposes the photosensitive chip rows of the photosensitive chip row by row through a rolling shutter. As shown in FIG. 5, the exposure periods of the photosensitive chip rows differ from one another, and one full exposure is completed only when all the photosensitive chip rows have been exposed.
An infrared light source is a light source with a wavelength of 780 to 1400 nanometers (nm), which is invisible to the human eye. Combined with an infrared camera, it makes shooting unaffected by visible light, so shooting works normally both in the daytime and at night. During the exposure of a conventional infrared camera, the infrared light source works the whole time; that is, the exposure duration of the photosensitive chip (in a given frame) equals the working duration of the infrared light source. However, the infrared light source generates heat while working, so long working times of the infrared light source lead to high heat generation in the camera module.
To address this problem, this application proposes a method for controlling the fill-light time of a camera module: the first exposure period of the first target photosensitive chip row in the current frame is determined according to the area in the first image that needs fill light, and when the photosensitive chip is exposed in the current frame, fill light is performed only within the first exposure period. This can reduce the working time of the infrared light source and thereby the heat generated by the infrared camera module.
The method for controlling the fill-light time of a camera module in the embodiments of this application is described in detail below with reference to FIG. 4 to FIG. 10.
FIG. 4 is a schematic flowchart of a method 400 for controlling the fill-light time of a camera module according to an embodiment of this application.
The method 400 shown in FIG. 4 may include steps 410, 420, and 430. It should be understood that the method 400 shown in FIG. 4 is only an example and not a limitation; the method 400 may include more or fewer steps, which is not limited in the embodiments of this application. These steps are described in detail below.
The method 400 shown in FIG. 4 may be performed by the camera module in the camera 130 of the vehicle 100 in FIG. 1, or the method 400 may be performed by the camera 155 in the autonomous driving system in FIG. 2 or by the camera module in the sensor 153.
The camera module in the method 400 may include a photosensitive chip, an infrared light source, and a rolling shutter. For example, the camera module in the method 400 may be the camera module 300 shown in FIG. 3.
S410: Determine a first target area in a first image captured by the camera before the current frame.
The first target area may be an area in the first image that needs fill light.
Optionally, the content contained in the area of the first image that needs fill light may be related to the use of the camera module. For example, the first target area in the first image may be determined according to a preset target object.
For example, the camera module may be used as the driver monitoring camera in a driver monitor system (DMS) to perform fatigue detection, face recognition, distraction detection, in-the-loop detection, and the like on the driver of the autonomous vehicle. Correspondingly, when the camera module is used as the driver monitoring camera, the preset target object may refer to the face area in the first image.
For another example, the camera module may be used as the cockpit monitoring camera in a camera monitor system (CMS) to perform behavior recognition, gesture recognition, left-behind object detection, and the like on the driver or other passengers in the cockpit. Correspondingly, when the camera module is used as the cockpit monitoring camera, the preset target object may refer to the human body area in the first image.
For yet another example, the camera may be used as an external camera for detecting other surrounding vehicles, to detect and recognize other vehicles around the vehicle. Correspondingly, when the camera is used in this way, the preset target object may refer to the vehicle area (for example, other vehicles around the vehicle) in the first image.
Optionally, a deep learning algorithm or a Haar operator may be used to detect the first image to determine the area in the first image that needs fill light, that is, the first target area.
For example, a neural network model may be used to determine the face area (or human body area) in the first image, that is, the first target area in the first image that needs fill light.
S420: Determine, according to the first target area, a first exposure period of a first target photosensitive chip row in the current frame.
The first target photosensitive chip row may refer to the chip rows, in the photosensitive chip, used to generate the image content in the first target area.
Optionally, the photosensitive chip may include a plurality of photosensitive chip rows, the first image may include a plurality of pixel rows, and the plurality of pixel rows in the first image may correspond to the plurality of photosensitive chip rows.
For example, as shown in FIG. 6, the photosensitive chip may include 960 photosensitive chip rows, the first image may include 960 pixel rows, and the 960 pixel rows in the first image may correspond one-to-one to the 960 photosensitive chip rows.
FIG. 6 shows the case where the number of photosensitive chip rows equals the number of pixel rows in the first image, that is, the case where the image generated after the exposure of the photosensitive chip has not undergone a scaling transformation.
It should be noted that the embodiments of this application do not limit the number of photosensitive chip rows in the photosensitive chip or the number of pixel rows in the first image; that is, they do not require that the two numbers be equal, nor that the plurality of pixel rows in the first image correspond one-to-one to the plurality of photosensitive chip rows.
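A sketch of mapping image pixel rows to chip rows. With equal row counts (the FIG. 6 case) this is the identity; otherwise a proportional mapping is one plausible choice, an assumption on our part, since the text leaves the exact correspondence open. All names are illustrative.

```python
def pixel_row_to_chip_row(pixel_row, image_rows, chip_rows):
    """Map a pixel row index to the chip row index it corresponds to."""
    return round(pixel_row * (chip_rows - 1) / (image_rows - 1))

print(pixel_row_to_chip_row(480, 960, 960))  # 480: one-to-one, as in FIG. 6
print(pixel_row_to_chip_row(480, 720, 960))  # 640: scaled correspondence
```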
Optionally, the determining, according to the first target area, a first exposure period of a first target photosensitive chip row in the current frame may include: determining the first target photosensitive chip row corresponding to the pixel rows, in the first image, that are located in the first target area; and determining the first exposure period of the first target photosensitive chip row in the current frame.
For example, as shown in FIG. 6, pixel row a in which the upper boundary of the first target area lies in the first image and pixel row b in which its lower boundary lies may be determined; photosensitive chip row n corresponding to pixel row a and photosensitive chip row m corresponding to pixel row b are then determined, and all the photosensitive chip rows from row n to row m constitute the first target photosensitive chip row.
Further, the exposure period of the first target photosensitive chip row in the current frame can be calculated.
For example, assuming that the first photosensitive chip row starts exposure at time T0, that each row's exposure takes time T, and that the exposure starts of two adjacent rows are separated by time t, the time T1 at which the nth photosensitive chip row starts exposure is T1 = T0 + n*t, and the time T2 at which the mth photosensitive chip row finishes exposure is T2 = T0 + T + m*t, that is, the start time T1 and the end time T2 of the infrared fill light shown in FIG. 6.
In this case, the exposure period (T1 to T2) of the first target photosensitive chip row in the current frame is the first exposure period.
S430: When the photosensitive chip is exposed in the current frame, instruct the infrared light source to perform fill light according to the first exposure period.
In this embodiment of this application, the first exposure period of the first target photosensitive chip row in the current frame is determined according to the area in the first image that needs fill light, and when the photosensitive chip is exposed in the current frame, the infrared light source is instructed to perform fill light according to the first exposure period. This can reduce the working time of the infrared light source and thereby the heat generated by the infrared camera module.
In addition, the method does not add or change hardware modules (or units) in the camera module; instead, the fill-light period of the photosensitive chip in the current frame (for example, the first exposure period) is determined according to the first target area, and fill light is performed only within that period when the photosensitive chip is exposed in the current frame, reducing the working time of the infrared light source. The heat generated by the infrared camera module can therefore be reduced without increasing cost.
Further, because no hardware module (or unit) is added or changed, the method does not increase the size of the camera module either, which facilitates configuring and using the camera module inside a vehicle.
Optionally, the method 400 may further include step 432.
S432: Determine, according to the first exposure period, a third exposure period of the first target photosensitive chip row in a subsequent frame; and when the photosensitive chip is exposed in the subsequent frame, instruct the infrared light source to perform fill light according to the third exposure period.
In other words, when the photosensitive chip is exposed in the subsequent frame, the infrared light source is controlled to perform fill light only during the period in which the first target photosensitive chip row is exposed.
In this embodiment of this application, the fill-light period of the subsequent frame (for example, the third exposure period) can be conveniently determined based on the first exposure period.
Optionally, the method 400 may further include steps 434, 436, and 438.
S434: Determine a second target area in a second image obtained in the current frame, where the second target area is an area in the second image that needs fill light.
S436: Determine, according to the second target area, a second exposure period of a second target photosensitive chip row in a subsequent frame, where the second target photosensitive chip row refers to the chip rows, in the photosensitive chip, used to generate the image content in the second target area.
S438: When the photosensitive chip is exposed in the subsequent frame, instruct the infrared light source to perform fill light according to the second exposure period.
In this embodiment of this application, adjusting the fill-light period of the photosensitive chip in the subsequent frame based on the second target area in the second image obtained in the current frame can improve the quality of the images captured by the camera module.
FIG. 7 is a schematic flowchart of a method 700 for controlling the fill-light time of a camera module according to an embodiment of this application.
The method 700 shown in FIG. 7 may be performed by the camera module in the camera 130 of the vehicle 100 in FIG. 1, or the method 700 may be performed by the camera 155 in the autonomous driving system in FIG. 2 or by the camera module in the sensor 153.
The camera module in the method 700 may include a photosensitive chip, an infrared light source, and a rolling shutter. For example, the camera module in the method 700 may be the camera module 300 shown in FIG. 3.
For example, the camera module in the method 700 may be used as a vehicle-mounted driver monitoring camera. The driver monitoring camera may be placed behind the steering wheel or at the A-pillar of the car and used for face recognition, fatigue detection, distraction detection, in-the-loop detection, and the like of the driver.
The method 700 shown in FIG. 7 may include steps 710 to 790. It should be understood that the method 700 shown in FIG. 7 is only an example and not a limitation; the method 700 may include more or fewer steps, which is not limited in the embodiments of this application. These steps are described in detail below.
S710: During the exposure of the photosensitive chip, the infrared light source performs fill light for the entire exposure to obtain a first image.
For example, when capturing the first frame of each second, or when the camera module enters a reset state, the light source controller may control the infrared light source to fill light throughout the exposure of the photosensitive chip to obtain the first image, ensuring that all areas in the first image are well illuminated.
S720: Perform face detection on the first image.
For example, the first image may be sent to the CPU/NPU, and correspondingly, the CPU/NPU may use a deep learning algorithm or a Haar operator to perform face detection on the first image.
If the face frame in the first image is detected, face detection succeeds, and S730 is executed.
If the face frame in the first image is not detected (for example, as shown in part B in FIG. 8, the face area in the first image is occluded or there is no face area in the first image), face detection fails, and S710 is executed; that is, when the next frame of image is captured, the infrared light source fills light during the entire exposure of the photosensitive chip to obtain an image in which all areas are well illuminated, and face detection is performed again on the obtained image.
S730: Calculate the photosensitive chip rows corresponding to the face frame.
Based on the face frame, the core area in the first image can be determined, and the photosensitive chip rows corresponding to the core area can be calculated.
Optionally, a margin may be considered when determining the core area in the first image.
For example, a threshold may be preset, and a margin may be set when determining the core area in the first image, so that the distance between the face frame and the dark area (that is, the area in the first image that does not need fill light) is greater than or equal to the preset threshold.
As shown in part A in FIG. 8, the distance between the upper boundary of the face frame and the dark area is greater than or equal to the preset threshold, and at the same time, the distance between the lower boundary of the face frame and the dark area is greater than or equal to the preset threshold.
The core area is equivalent to the first target area in the method 400 in FIG. 4; for details, refer to the description of the embodiment of the method 400, which is not repeated here.
S740: Calculate the first fill-light period corresponding to the photosensitive chip rows.
The exposure period corresponding to the photosensitive chip rows, that is, the first fill-light period, is calculated.
S750: During the exposure of the photosensitive chip, perform fill light within the determined fill-light period to obtain a second image.
For example, the determined fill-light period may be the first fill-light period or the second fill-light period.
S760: Perform face detection on the second image.
As shown in part A in FIG. 8, if the face frame in the second image is detected and the distance between the face frame and the dark area (that is, the area in the second image that does not need fill light) is greater than or equal to the preset threshold, face detection succeeds, and S770 is executed.
As shown in part B in FIG. 8, if the face frame in the second image is not detected (for example, the face area in the second image is occluded or there is no face area in the second image), face detection fails and belongs to failure type A, and S780 is executed to determine whether the number of face detection failures is less than 10.
As shown in part C in FIG. 8, if the face frame in the second image is detected but the distance between the face frame and the dark area (that is, the area in the second image that does not need fill light) is less than the preset threshold, face detection fails and belongs to failure type B, and S790 is executed.
S770: Continue to capture the next frame of image based on the first fill-light period.
In the process of capturing images in subsequent frames, the first fill-light period may be maintained; that is, the infrared light source is controlled to continue performing fill light within the first fill-light period corresponding to the photosensitive chip rows.
S780: Determine whether the number of face detection failures is less than 10.
If the face frame is not detected and the number of face detection failures is less than 10, the infrared light source is controlled to continue performing fill light within the first fill-light period corresponding to the photosensitive chip rows in subsequent frames; otherwise, when the number of face detection failures is greater than or equal to 10 (or when the first fill-light period is periodically reset every second), S710 is executed.
S790: Determine a second fill-light period based on the second image.
Face detection is performed on the second image to obtain the face frame of the second image, the photosensitive chip rows corresponding to that face frame are calculated, and the second fill-light period corresponding to those photosensitive chip rows is calculated.
S750 is then executed, and in the process of capturing images in subsequent frames, the infrared light source is controlled to perform fill light within the second fill-light period corresponding to the photosensitive chip rows.
FIG. 9 is a schematic flowchart of a method 900 for controlling the fill-light time of a camera module according to an embodiment of this application.
The method 900 shown in FIG. 9 may be performed by the camera module in the camera 130 of the vehicle 100 in FIG. 1, or the method 900 may be performed by the camera 155 in the autonomous driving system in FIG. 2 or by the camera module in the sensor 153.
The camera module in the method 900 may include a photosensitive chip, an infrared light source, and a rolling shutter. For example, the camera module in the method 900 may be the camera module 300 shown in FIG. 3.
For example, the camera module in the method 900 may be used as a vehicle-mounted cockpit monitoring camera. The cockpit monitoring camera may be placed under the interior rear-view mirror and used for behavior recognition, gesture recognition, left-behind object detection, and the like for the driver and front passenger in the cockpit.
The method 900 shown in FIG. 9 may include steps 910 to 990. It should be understood that the method 900 shown in FIG. 9 is only an example and not a limitation; the method 900 may include more or fewer steps, which is not limited in the embodiments of this application. These steps are described in detail below.
S910: During the exposure of the photosensitive chip, the infrared light source performs fill light for the entire exposure to obtain a first image.
For example, when capturing the first frame of each second, or when the camera module enters a reset state, the light source controller may control the infrared light source to fill light throughout the exposure of the photosensitive chip to obtain the first image, ensuring that all areas in the first image are well illuminated.
S920: Perform human body detection on the first image.
For example, the first image may be sent to the CPU/NPU, and correspondingly, the CPU/NPU may use a deep learning algorithm or a Haar operator to perform human body detection on the first image.
If the human body frame in the first image is detected, human body detection succeeds, and S930 is executed.
If the human body frame in the first image is not detected (for example, as shown in part B in FIG. 10, the human body area in the first image is occluded or there is no human body area in the first image), human body detection fails, and S910 is executed; that is, when the next frame of image is captured, the infrared light source fills light during the entire exposure of the photosensitive chip to obtain an image in which all areas are well illuminated, and human body detection is performed again on the obtained image.
S930: Calculate the photosensitive chip rows corresponding to the human body frame.
Based on the human body frame, the core area in the first image can be determined, and the photosensitive chip rows corresponding to the core area can be calculated.
Optionally, a margin may be considered when determining the core area in the first image.
For example, a threshold may be preset, and a margin may be set when determining the core area in the first image, so that the distance between the human body frame and the dark area (that is, the area in the first image that does not need fill light) is greater than or equal to the preset threshold.
As shown in part A in FIG. 10, the distance between the upper boundary of the human body frame and the dark area is greater than or equal to the preset threshold, and at the same time, the distance between the lower boundary of the human body frame and the dark area is greater than or equal to the preset threshold.
The core area is equivalent to the first target area in the method 400 in FIG. 4; for details, refer to the description of the embodiment of the method 400, which is not repeated here.
S940: Calculate the first fill-light period corresponding to the photosensitive chip rows.
The exposure period corresponding to the photosensitive chip rows, that is, the first fill-light period, is calculated.
S950: During the exposure of the photosensitive chip, perform fill light within the determined fill-light period to obtain a second image.
For example, the determined fill-light period may be the first fill-light period or the second fill-light period.
S960: Perform human body detection on the second image.
As shown in part A in FIG. 10, if the human body frame in the second image is detected and the distance between the human body frame and the dark area (that is, the area in the second image that does not need fill light) is greater than or equal to the preset threshold, human body detection succeeds, and S970 is executed.
As shown in part B in FIG. 10, if the human body frame in the second image is not detected (for example, the human body area in the second image is occluded or there is no human body area in the second image), human body detection fails, and S980 is executed to determine whether the number of human body detection failures is less than 10.
As shown in part C in FIG. 10, if the human body frame in the second image is detected but the distance between the human body frame and the dark area (that is, the area in the second image that does not need fill light) is less than the preset threshold, human body detection fails, and S990 is executed.
S970: Continue to capture the next frame of image based on the first fill-light period.
In the process of capturing images in subsequent frames, the first fill-light period may be maintained; that is, the infrared light source is controlled to continue performing fill light within the first fill-light period corresponding to the photosensitive chip rows.
S980: Determine whether the number of human body detection failures is less than 10.
If the human body frame is not detected and the number of human body detection failures is less than 10, the infrared light source is controlled to continue performing fill light within the first fill-light period corresponding to the photosensitive chip rows in subsequent frames; otherwise, when the number of human body detection failures is greater than or equal to 10 (or when the first fill-light period is periodically reset every second), S910 is executed.
S990: Determine a second fill-light period based on the second image.
Human body detection is performed on the second image to obtain the human body frame of the second image, the photosensitive chip rows corresponding to that human body frame are calculated, and the second fill-light period corresponding to those photosensitive chip rows is calculated.
S950 is then executed, and in the process of capturing images in subsequent frames, the infrared light source is controlled to perform fill light within the second fill-light period corresponding to the photosensitive chip rows.
FIG. 11 is a schematic block diagram of an apparatus 1100 for controlling the fill-light time of a camera module according to an embodiment of this application. It should be understood that the apparatus 1100 shown in FIG. 11 is only an example, and the apparatus 1100 in the embodiments of this application may further include other modules or units. It should be understood that the apparatus 1100 can perform the steps of the methods in FIG. 4, FIG. 7, and FIG. 9; to avoid repetition, details are not described here.
The camera module may include a camera, and the camera may include a photosensitive chip, an infrared light source, and a rolling shutter. For example, the camera module may be the camera module 300 shown in FIG. 3.
In a possible implementation of the embodiments of this application, the apparatus 1100 for controlling the fill-light time of a camera module may include:
a first determining unit 1110, configured to determine a first target area in a first image captured by the camera before the current frame, where the first target area is an area in the first image that needs fill light;
a second determining unit 1120, configured to determine, according to the first target area, a first exposure period of a first target photosensitive chip row in the current frame, where the first target photosensitive chip row refers to the chip rows, in the photosensitive chip, used to generate the image content in the first target area; and
an instructing unit 1130, configured to: when the photosensitive chip is exposed in the current frame, instruct the infrared light source to perform fill light according to the first exposure period.
The first determining unit and the second determining unit may be the same module or unit; this is not limited in the embodiments of this application.
Optionally, the first determining unit is specifically configured to determine the first target area in the first image according to a preset target object.
Optionally, the first target area is a face area in the first image.
Optionally, the photosensitive chip includes a plurality of photosensitive chip rows, and a plurality of pixel rows in the first image correspond to the plurality of photosensitive chip rows, where the second determining unit 1120 is specifically configured to:
determine the first target photosensitive chip row corresponding to the pixel rows, in the first image, that are located in the first target area; and determine the first exposure period of the first target photosensitive chip row in the current frame.
Optionally, the first determining unit 1110 is further configured to determine a second target area in a second image obtained in the current frame, where the second target area is an area in the second image that needs fill light; the second determining unit 1120 is further configured to determine, according to the second target area, a second exposure period of a second target photosensitive chip row in a subsequent frame, where the second target photosensitive chip row refers to the chip rows, in the photosensitive chip, used to generate the image content in the second target area; and the instructing unit 1130 is further configured to: when the photosensitive chip is exposed in the subsequent frame, instruct the infrared light source to perform fill light according to the second exposure period.
Optionally, the instructing unit 1130 is further configured to: determine, according to the first exposure period, a third exposure period of the first target photosensitive chip row in a subsequent frame; and when the photosensitive chip is exposed in the subsequent frame, instruct the infrared light source to perform fill light according to the third exposure period.
It should be understood that the apparatus 1100 for controlling the fill-light time of a camera module here is embodied in the form of functional modules. The term "module" here may be implemented in the form of software and/or hardware, which is not specifically limited. For example, a "module" may be a software program, a hardware circuit, or a combination of the two that implements the above functions. The hardware circuit may include an application specific integrated circuit (ASIC), an electronic circuit, a processor for executing one or more software or firmware programs (for example, a shared processor, a dedicated processor, or a group processor) and memory, merged logic circuits, and/or other suitable components that support the described functions.
As an example, the apparatus 1100 for controlling the fill-light time of a camera module provided in the embodiments of this application may be a camera module in an autonomous driving system, or a camera module configured in an in-vehicle machine, or an in-vehicle machine (or processor) in an autonomous vehicle, or a chip configured in the in-vehicle machine, to perform the method described in the embodiments of this application.
FIG. 12 is a schematic block diagram of an apparatus 800 for controlling the fill-light time of a camera module according to an embodiment of this application. The apparatus 800 shown in FIG. 12 includes a memory 801, a processor 802, a communication interface 803, and a bus 804. The memory 801, the processor 802, and the communication interface 803 are communicatively connected to each other through the bus 804.
The memory 801 may be a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM). The memory 801 may store a program, and when the program stored in the memory 801 is executed by the processor 802, the processor 802 is configured to perform the steps of the method for controlling the fill-light time of a camera module in the embodiments of this application, for example, the steps of the embodiments shown in FIG. 4, FIG. 7, and FIG. 9.
The processor 802 may be a general-purpose central processing unit (CPU), a microprocessor, an application specific integrated circuit (ASIC), or one or more integrated circuits, configured to execute related programs to implement the method for controlling the fill-light time of a camera module in the method embodiments of this application.
The processor 802 may alternatively be an integrated circuit chip with signal processing capability. In an implementation process, the steps of the method for controlling the fill-light time of a camera module in the embodiments of this application may be completed by an integrated logic circuit of hardware in the processor 802 or by instructions in the form of software.
The processor 802 may alternatively be a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logical block diagrams disclosed in the embodiments of this application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The steps of the methods disclosed with reference to the embodiments of this application may be directly embodied as being performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or electrically erasable programmable memory, or a register. The storage medium is located in the memory 801, and the processor 802 reads the information in the memory 801 and, in combination with its hardware, completes the functions to be performed by the units included in the apparatus for controlling the fill-light time of a camera module in the embodiments of this application, or performs the method for controlling the fill-light time of a camera module in the method embodiments of this application, for example, the steps/functions of the embodiments shown in FIG. 4, FIG. 7, and FIG. 9.
The communication interface 803 may use, but is not limited to, a transceiver-type apparatus to implement communication between the apparatus 800 and other devices or a communication network.
The bus 804 may include a path for transmitting information between the components of the apparatus 800 (for example, the memory 801, the processor 802, and the communication interface 803).
It should be understood that the apparatus 800 shown in this embodiment of this application may be a camera module in an autonomous driving system, or a camera module configured in an in-vehicle machine, or an in-vehicle machine (or processor) in an autonomous vehicle, or a chip configured in the in-vehicle machine, to perform the method described in the embodiments of this application.
It should be understood that the processor in the embodiments of this application may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
It should also be understood that the memory in the embodiments of this application may be a volatile memory or a non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example but not limitation, many forms of RAM are available, such as a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), and a direct rambus RAM (DR RAM).
All or some of the above embodiments may be implemented by software, hardware, firmware, or any combination thereof. When software is used, the above embodiments may be implemented completely or partially in the form of a computer program product. The computer program product includes one or more computer instructions or computer programs. When the computer instructions or computer programs are loaded or executed on a computer, the processes or functions according to the embodiments of this application are completely or partially generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless (for example, infrared, microwave) manner. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device, such as a server or a data center, that contains one or more sets of usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium. The semiconductor medium may be a solid state drive.
It should be understood that the term "and/or" in this specification is only an association relationship describing associated objects and indicates that three relationships may exist. For example, A and/or B may indicate three cases: only A exists, both A and B exist, and only B exists, where A and B may be singular or plural. In addition, the character "/" in this specification generally indicates an "or" relationship between the associated objects before and after it, but may also indicate an "and/or" relationship; refer to the context for details.
In this application, "at least one" means one or more, and "a plurality of" means two or more. "At least one of the following items" or a similar expression means any combination of these items, including a single item or any combination of a plurality of items. For example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may be singular or plural.
It should be understood that, in the various embodiments of this application, the sequence numbers of the foregoing processes do not mean an order of execution; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation processes of the embodiments of this application.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed in this specification can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such an implementation shall not be considered beyond the scope of this application.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are only illustrative. For example, the division of the units is only a logical function division; in actual implementation, there may be another division manner. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are only specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (16)

  1. A method for controlling the fill-light time of a camera module, wherein the camera module comprises a camera, the camera comprises a photosensitive chip, and the method comprises:
    determining a first target area in a first image captured by the camera before the current frame, wherein the first target area is an area in the first image that needs fill light;
    determining, according to the first target area, a first exposure period of a first target photosensitive chip row in the current frame, wherein the first target photosensitive chip row refers to the chip rows, in the photosensitive chip, used to generate the image content in the first target area; and
    when the photosensitive chip is exposed in the current frame, instructing an infrared light source to perform fill light according to the first exposure period.
  2. The method according to claim 1, wherein the determining a first target area in a first image captured by the camera before the current frame comprises:
    determining the first target area in the first image according to a preset target object.
  3. The method according to claim 1 or 2, wherein the first target area is a face area in the first image.
  4. The method according to any one of claims 1 to 3, wherein the photosensitive chip comprises a plurality of photosensitive chip rows, and a plurality of pixel rows in the first image correspond to the plurality of photosensitive chip rows;
    wherein the determining, according to the first target area, a first exposure period of a first target photosensitive chip row in the current frame comprises:
    determining the first target photosensitive chip row corresponding to the pixel rows, in the first image, that are located in the first target area; and
    determining the first exposure period of the first target photosensitive chip row in the current frame.
  5. The method according to any one of claims 1 to 4, further comprising:
    determining a second target area in a second image obtained in the current frame, wherein the second target area is an area in the second image that needs fill light;
    determining, according to the second target area, a second exposure period of a second target photosensitive chip row in a subsequent frame, wherein the second target photosensitive chip row refers to the chip rows, in the photosensitive chip, used to generate the image content in the second target area; and
    when the photosensitive chip is exposed in the subsequent frame, instructing the infrared light source to perform fill light according to the second exposure period.
  6. The method according to any one of claims 1 to 4, further comprising:
    determining, according to the first exposure period, a third exposure period of the first target photosensitive chip row in a subsequent frame; and
    when the photosensitive chip is exposed in the subsequent frame, instructing the infrared light source to perform fill light according to the third exposure period.
  7. An apparatus for controlling the fill-light time of a camera module, wherein the camera module comprises a camera, the camera comprises a photosensitive chip, and the apparatus comprises:
    a first determining unit, configured to determine a first target area in a first image captured by the camera before the current frame, wherein the first target area is an area in the first image that needs fill light;
    a second determining unit, configured to determine, according to the first target area, a first exposure period of a first target photosensitive chip row in the current frame, wherein the first target photosensitive chip row refers to the chip rows, in the photosensitive chip, used to generate the image content in the first target area; and
    an instructing unit, configured to: when the photosensitive chip is exposed in the current frame, instruct an infrared light source to perform fill light according to the first exposure period.
  8. The apparatus according to claim 7, wherein the first determining unit is specifically configured to:
    determine the first target area in the first image according to a preset target object.
  9. The apparatus according to claim 7 or 8, wherein the first target area is a face area in the first image.
  10. The apparatus according to any one of claims 7 to 9, wherein the photosensitive chip comprises a plurality of photosensitive chip rows, and a plurality of pixel rows in the first image correspond to the plurality of photosensitive chip rows;
    wherein the second determining unit is specifically configured to:
    determine the first target photosensitive chip row corresponding to the pixel rows, in the first image, that are located in the first target area; and
    determine the first exposure period of the first target photosensitive chip row in the current frame.
  11. The apparatus according to any one of claims 7 to 10, wherein the first determining unit is further configured to determine a second target area in a second image obtained in the current frame, wherein the second target area is an area in the second image that needs fill light;
    the second determining unit is further configured to determine, according to the second target area, a second exposure period of a second target photosensitive chip row in a subsequent frame, wherein the second target photosensitive chip row refers to the chip rows, in the photosensitive chip, used to generate the image content in the second target area; and
    the instructing unit is further configured to: when the photosensitive chip is exposed in the subsequent frame, instruct the infrared light source to perform fill light according to the second exposure period.
  12. The apparatus according to any one of claims 7 to 10, wherein the instructing unit is further configured to:
    determine, according to the first exposure period, a third exposure period of the first target photosensitive chip row in a subsequent frame; and
    when the photosensitive chip is exposed in the subsequent frame, instruct the infrared light source to perform fill light according to the third exposure period.
  13. A camera module, comprising a processor and a memory, wherein the memory is configured to store program instructions, and the processor is configured to invoke the program instructions to perform the method according to any one of claims 1 to 6.
  14. An automobile, comprising the apparatus according to any one of claims 7 to 12 or the camera module according to claim 13.
  15. A computer-readable storage medium, wherein the computer-readable storage medium stores program instructions, and when the program instructions are run by a processor, the method according to any one of claims 1 to 6 is implemented.
  16. A chip, comprising a processor and a data interface, wherein the processor reads, through the data interface, instructions stored in a memory to perform the method according to any one of claims 1 to 6.
PCT/CN2021/106061 WO2022062582A1 (zh) 2020-09-25 2021-07-13 Method and apparatus for controlling the fill-light time of a camera module

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21870960.8A EP4207731A4 (en) 2020-09-25 2021-07-13 METHOD AND APPARATUS FOR CONTROLLING LIGHT SUPPLY TIME OF A CAMERA MODULE
US18/189,362 US20230232113A1 (en) 2020-09-25 2023-03-24 Method and apparatus for controlling light compensation time of camera module

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011020997.9 2020-09-25
CN202011020997.9A CN114257712A (zh) 2020-09-25 2020-09-25 Method and apparatus for controlling the fill-light time of a camera module

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/189,362 Continuation US20230232113A1 (en) 2020-09-25 2023-03-24 Method and apparatus for controlling light compensation time of camera module

Publications (1)

Publication Number Publication Date
WO2022062582A1 true WO2022062582A1 (zh) 2022-03-31

Family

ID=80789044

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/106061 WO2022062582A1 (zh) 2020-09-25 2021-07-13 Method and apparatus for controlling the fill-light time of a camera module

Country Status (4)

Country Link
US (1) US20230232113A1 (zh)
EP (1) EP4207731A4 (zh)
CN (1) CN114257712A (zh)
WO (1) WO2022062582A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11902671B2 (en) * 2021-12-09 2024-02-13 Fotonation Limited Vehicle occupant monitoring system including an image acquisition device with a rolling shutter image sensor
CN115118847A (zh) * 2022-04-27 2022-09-27 FAW Bestune Car Co., Ltd. Miniaturized camera for automobiles

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104853107A (zh) * 2014-02-19 2015-08-19 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device
CN106454044A (zh) * 2016-10-25 2017-02-22 Zhejiang Uniview Technologies Co., Ltd. Strobe fill-light apparatus and method
CN107241558A (zh) * 2017-06-16 2017-10-10 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Exposure processing method, apparatus, and terminal device
CN107846556A (zh) * 2017-11-30 2018-03-27 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Imaging method and apparatus, mobile terminal, and storage medium
JP2018113662A (ja) * 2017-01-13 2018-07-19 Panasonic IP Management Co., Ltd. Imaging apparatus
CN111601046A (zh) * 2020-04-22 2020-08-28 Huizhou Desay SV Automotive Co., Ltd. Driving state monitoring method for low-light environments

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9674465B2 (en) * 2015-06-03 2017-06-06 Omnivision Technologies, Inc. Non-visible illumination scheme
JPWO2017073045A1 (ja) * 2015-10-28 2018-08-02 Kyocera Corporation Imaging apparatus, imaging system, subject monitoring system, and control method for an imaging apparatus
CN106446873B (zh) * 2016-11-03 2021-01-26 Beijing Megvii Technology Co., Ltd. Face detection method and apparatus
CN106572310B (zh) * 2016-11-04 2019-12-13 Zhejiang Uniview Technologies Co., Ltd. Fill-light intensity control method and camera
CN109314751B (zh) * 2018-08-30 2021-01-12 Shenzhen Streamax Technology Co., Ltd. Fill-light method, fill-light apparatus, and electronic device
CN110084207A (zh) * 2019-04-30 2019-08-02 Huizhou Desay SV Intelligent Transportation Technology Research Institute Co., Ltd. Exposure method and apparatus for automatically adjusting face exposure, and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104853107A (zh) * 2014-02-19 2015-08-19 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device
CN106454044A (zh) * 2016-10-25 2017-02-22 Zhejiang Uniview Technologies Co., Ltd. Strobe fill-light apparatus and method
JP2018113662A (ja) * 2017-01-13 2018-07-19 Panasonic IP Management Co., Ltd. Imaging apparatus
CN107241558A (zh) * 2017-06-16 2017-10-10 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Exposure processing method, apparatus, and terminal device
CN107846556A (zh) * 2017-11-30 2018-03-27 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Imaging method and apparatus, mobile terminal, and storage medium
CN111601046A (zh) * 2020-04-22 2020-08-28 Huizhou Desay SV Automotive Co., Ltd. Driving state monitoring method for low-light environments

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4207731A4

Also Published As

Publication number Publication date
CN114257712A (zh) 2022-03-29
US20230232113A1 (en) 2023-07-20
EP4207731A4 (en) 2024-01-17
EP4207731A1 (en) 2023-07-05

Similar Documents

Publication Publication Date Title
US11675050B2 (en) LiDAR detection systems and methods
CN110550029B (zh) 障碍物避让方法及装置
US10962981B2 (en) Assisted perception for autonomous vehicles
CN110379193B (zh) 自动驾驶车辆的行为规划方法及行为规划装置
US10444754B2 (en) Remote assistance for an autonomous vehicle in low confidence situations
WO2022027304A1 (zh) 一种自动驾驶车辆的测试方法及装置
CN112230642B (zh) 道路可行驶区域推理方法及装置
EP4067821A1 (en) Path planning method for vehicle and path planning apparatus for vehicle
WO2021212379A1 (zh) 车道线检测方法及装置
US20220215639A1 (en) Data Presentation Method and Terminal Device
WO2022205211A1 (zh) 控制车辆行驶的方法、装置及车辆
US20230232113A1 (en) Method and apparatus for controlling light compensation time of camera module
WO2022062825A1 (zh) 车辆的控制方法、装置及车辆
US20230048680A1 (en) Method and apparatus for passing through barrier gate crossbar by vehicle
CN112810603B (zh) 定位方法和相关产品
CN114531913A (zh) 车道线检测方法、相关设备及计算机可读存储介质
WO2022061702A1 (zh) 驾驶提醒的方法、装置及系统
WO2021217575A1 (zh) 用户感兴趣对象的识别方法以及识别装置
EP4159564A1 (en) Method and device for planning vehicle longitudinal motion parameters
WO2021159397A1 (zh) 车辆可行驶区域的检测方法以及检测装置
WO2023015510A1 (zh) 防碰撞的方法和控制装置
WO2022001432A1 (zh) 推理车道的方法、训练车道推理模型的方法及装置
US12003894B1 (en) Systems, methods, and apparatus for event detection
WO2022061725A1 (zh) 交通元素的观测方法和装置
WO2022041820A1 (zh) 换道轨迹的规划方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21870960

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021870960

Country of ref document: EP

Effective date: 20230329

NENP Non-entry into the national phase

Ref country code: DE