WO2019232969A1 - Camera and method for fusing captured pictures - Google Patents

Camera and method for fusing captured pictures

Info

Publication number
WO2019232969A1
WO2019232969A1 (PCT/CN2018/105225)
Authority
WIPO (PCT)
Prior art keywords
picture, image sensor, visible light, infrared light, video image
Application number
PCT/CN2018/105225
Other languages
English (en), French (fr)
Inventors
赵国辉 (Zhao Guohui)
李转强 (Li Zhuanqiang)
Original Assignee
杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Application filed by 杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Priority to EP18921360.6A (published as EP3806444A4)
Priority to US15/734,835 (published as US11477369B2)
Publication of WO2019232969A1

Classifications

    • H04N 5/04 — Synchronising (details of television systems)
    • H04N 23/80 — Camera processing pipelines; components thereof
    • H04N 23/45 — Generating image signals from two or more image sensors of different type or operating in different modes
    • H04N 23/11 — Generating image signals from visible and infrared light wavelengths
    • H04N 23/54 — Mounting of electronic image sensors
    • H04N 23/55 — Optical parts specially adapted for electronic image sensors; mounting thereof
    • H04N 23/66 — Remote control of cameras or camera parts
    • H04N 23/665 — Internal camera communication with the image sensor, e.g. synchronising or multiplexing SSIS control signals
    • H04N 23/667 — Camera operation mode switching, e.g. between still and video modes
    • H04N 23/73 — Compensating brightness variation by influencing the exposure time
    • H04N 23/76 — Compensating brightness variation by influencing the image signals

Definitions

  • This application relates to video surveillance technology, and in particular, to a camera and a fusion method of captured pictures.
  • Intelligent traffic cameras mainly use infrared flashes or white light flashes to capture photos through a single sensor.
  • However, pictures obtained by infrared flash capture suffer from color cast or are black and white.
  • White light strobe capture can produce color pictures, but it requires a white-light strobe as a fill light.
  • White-light strobes cause severe light pollution; moreover, a sudden white-light flash on a road at night can temporarily blind drivers, which is dangerous.
  • the present application provides a camera and a method for fusing captured pictures.
  • a camera which is applied to a video surveillance system.
  • the camera includes a lens, a light splitting module, a first image sensor, a second image sensor, and a main processing chip.
  • a first image sensor configured to receive the visible light output by the light splitting module, and perform visible light video image acquisition according to a first shutter and a first gain to obtain a visible light video image;
  • a second image sensor configured to receive the infrared light output by the light splitting module, and perform infrared light video image acquisition according to the first shutter and the first gain to obtain an infrared light video image;
  • a main processing chip configured to output a video image obtained by fusing the visible light video image and the infrared light video image, wherein the fusion processing includes fusing brightness information of the visible light video image and the infrared light video image, or fusing detail information of the visible light video image and the infrared light video image.
  • the main processing chip is further configured to, when a capture instruction is received, transmit the capture instruction to the first image sensor and the second image sensor, respectively;
  • the first image sensor is further configured to, when the capture instruction is received, perform a photo capture according to a second shutter and a second gain to obtain a visible light picture;
  • the second image sensor is further configured to, when the capture instruction is received, perform a photo capture according to the second shutter and the second gain to obtain an infrared light picture;
  • the main processing chip is further configured to output a captured picture obtained by fusing the visible light picture and the infrared light picture.
  • the first image sensor is further configured to interrupt the visible light video image acquisition when the capture instruction is received;
  • the second image sensor is further configured to interrupt the infrared light video image acquisition when the capture instruction is received.
  • the camera further includes: a synchronization processing module; wherein:
  • the synchronization processing module is configured to receive the capture instruction transmitted by the main processing chip, and transmit the capture instruction to the first image sensor and the second image sensor respectively within a preset time;
  • the synchronization processing module is further configured to receive the visible light picture transmitted by the first image sensor and the infrared light picture transmitted by the second image sensor according to a preset timing, and combine the visible light picture and the The infrared light picture is transmitted to the main processing chip.
  • the synchronization processing module is specifically configured to stitch the captured visible light picture and infrared light picture into one frame, and transmit that frame to the main processing chip.
  • the synchronization processing module is further configured to stitch one frame of the visible light picture and one frame of the infrared light picture into a stitched picture, and transmit the stitched picture to the main processing chip;
  • the main processing chip is further configured to split a frame of the stitched picture into a frame of the visible light picture and a frame of the infrared light picture.
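The stitch-and-split step above can be sketched in a few lines of Python. This is an illustrative sketch only (the frame representation and helper names are assumptions, not the patent's implementation): two equally sized frames are placed side by side so they travel to the main processing chip as one frame, then split back apart.

```python
# Sketch of the stitch/split step: two equally sized frames are placed side by
# side so they travel to the main chip as one frame (one data header, one
# interrupt), then split back apart. Frame representation and helper names are
# illustrative assumptions.

def stitch(visible, infrared):
    """Concatenate two equally sized frames row-wise into one double-width frame."""
    assert len(visible) == len(infrared)
    return [v_row + i_row for v_row, i_row in zip(visible, infrared)]

def split(stitched):
    """Recover the two original frames from a stitched frame."""
    half = len(stitched[0]) // 2
    return [row[:half] for row in stitched], [row[half:] for row in stitched]

vis = [[1, 2], [3, 4]]     # toy 2x2 "visible light" frame
ir = [[9, 8], [7, 6]]      # toy 2x2 "infrared" frame
frame = stitch(vis, ir)
assert frame == [[1, 2, 9, 8], [3, 4, 7, 6]]
assert split(frame) == (vis, ir)
```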
  • the main processing chip is further configured to perform at least one of the following processing on the captured picture: image signal processing (ISP), encoding, and compression.
  • the camera further includes a slave processing chip; wherein:
  • the master processing chip is specifically configured to transmit the visible light picture and the infrared light picture to the slave processing chip;
  • the slave processing chip is configured to fuse the visible light picture and the infrared light picture to obtain the captured picture, and transmit the captured picture to the master processing chip.
  • the slave processing chip is further configured to perform ISP processing on the fused image before transmitting the fused image to the master processing chip.
  • the slave processing chip is further configured to perform at least one of the following processing on the captured picture: ISP processing, encoding, and compression.
  • the slave processing chip is further configured to perform vehicle feature recognition on the captured picture based on a deep learning algorithm, and transmit the recognition result to the master processing chip.
  • a method for fusing captured pictures, applied to a camera in a video surveillance system, includes:
  • a light splitting module in the camera splits incident light entering the camera through a lens in the camera into visible light and infrared light;
  • a first image sensor in the camera receives the visible light output by the light splitting module, and performs visible light video image acquisition according to a first shutter and a first gain to obtain a visible light video image;
  • a second image sensor in the camera receives the infrared light output by the light splitting module, and performs infrared light video image acquisition according to the first shutter and the first gain to obtain an infrared light video image;
  • the main processing chip in the camera outputs a video image obtained by fusing the visible light video image and the infrared light video image, wherein the fusion processing includes fusing brightness information of the visible light video image and the infrared light video image, or fusing detail information of the visible light video image and the infrared light video image.
  • when the main processing chip receives a capture instruction, it transmits the capture instruction to the first image sensor and the second image sensor, respectively;
  • when the first image sensor receives the capture instruction, it captures a picture according to a second shutter and a second gain to obtain a visible light picture;
  • when the second image sensor receives the capture instruction, it captures a picture according to the second shutter and the second gain to obtain an infrared light picture;
  • the main processing chip outputs a captured picture obtained by fusing the visible light picture and the infrared light picture.
  • when the capture instruction is received, the visible light video image acquisition is interrupted,
  • and the infrared light video image acquisition is interrupted.
  • transmitting the capture instruction to the first image sensor and the second image sensor separately includes:
  • the main processing chip transmits the capture instruction to a synchronization processing module in the camera;
  • the synchronization processing module transmits the capture instruction to the first image sensor and the second image sensor respectively within a preset time.
  • the first image sensor transmits the visible light picture to the synchronization processing module
  • the second image sensor transmits the infrared light picture to the synchronization processing module
  • the synchronization processing module transmits the visible light picture and the infrared light picture to the main processing chip.
  • transmitting the visible light picture and the infrared light picture to the main processing chip includes:
  • the synchronization processing module stitches one frame of the visible light picture and one frame of the infrared light picture that were captured synchronously into one stitched picture, and
  • the synchronization processing module transmits the stitched picture to the main processing chip.
  • the master processing chip transmits the visible light picture and the infrared light picture to a slave processing chip in the camera;
  • the slave processing chip fuses the visible light picture and the infrared light picture to obtain the captured picture, and transmits the captured picture to the master processing chip.
  • the main processing chip performs at least one of the following processing on the captured picture: image signal processing (ISP), encoding, and compression.
  • the slave processing chip performs at least one of the following processing on the captured picture: ISP processing, encoding, and compression.
  • the slave processing chip performs vehicle feature recognition on the captured picture based on a deep learning algorithm, and transmits the recognition result to the master processing chip.
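The synchronized dispatch in the method above (one capture instruction, both sensors snapping in the same frame period) can be modeled in software with a barrier. This is only a stand-in for the camera's hardware synchronization, and all names here are hypothetical.

```python
import threading

# Sketch of the synchronized dispatch: the main chip sends one capture
# instruction and both sensors snap together. A thread barrier is only a
# software stand-in for hardware synchronization; names are hypothetical.

results = {}

def sensor(name, barrier):
    barrier.wait()                      # both "sensors" start exposing together
    results[name] = name + "_picture"   # pretend exposure result

barrier = threading.Barrier(2)
threads = [threading.Thread(target=sensor, args=(n, barrier))
           for n in ("visible", "infrared")]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert results == {"visible": "visible_picture", "infrared": "infrared_picture"}
```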
  • In summary, when the first image sensor and the second image sensor receive a capture signal and a synchronization command during video image acquisition performed with the first shutter and the first gain, they can take a picture snapshot with the second shutter and the second gain, and transfer the captured visible light picture and infrared light picture to the main processing chip, which outputs the fused snapshot picture. That is, the camera supports picture capture during video image acquisition, and the picture capture can use an independent shutter and gain.
  • Adjusting that shutter and gain ensures the clarity of license plate and vehicle details in the captured picture, prevents the vehicle and license plate from being overexposed, and, by using a shorter shutter, ensures that a fast-moving vehicle is captured without motion trails.
  • FIG. 1 is a schematic structural diagram of a camera according to an exemplary embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a camera according to another exemplary embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a camera according to another exemplary embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a camera according to an exemplary embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a snapped picture fusion method according to an exemplary embodiment of the present application.
  • FIG. 6 is a schematic flowchart of a snapped picture fusion method according to another exemplary embodiment of the present application.
  • FIG. 7 is a schematic flowchart of a snapped picture fusion method according to another exemplary embodiment of the present application.
  • FIG. 1 is a schematic structural diagram of a camera according to an embodiment of the present application.
  • the camera may include a lens 110, a light splitting module 120, a first image sensor 131, a second image sensor 132, and a main processing chip (hereinafter referred to as the main chip) 140, where:
  • the light splitting module 120 may be configured to divide incident light entering the camera through the lens 110 into visible light and infrared light, and output the light to the first image sensor 131 and the second image sensor 132, respectively.
  • the first image sensor 131 may be configured to collect visible light video images according to the first shutter and the first gain, and transmit the collected visible light video images to the main chip 140.
  • the second image sensor 132 may be configured to acquire an infrared light video image according to the first shutter and the first gain, and transmit the acquired infrared light video image to the main chip 140.
  • the main chip 140 is configured to fuse the collected visible light video image and the infrared light video image, and output the fused video image.
  • the incident light entering the camera through the lens is divided into visible light and infrared light.
  • the first image sensor 131 performs visible light video image acquisition according to the first shutter and the first gain, based on the visible light output by the light splitting module 120;
  • the second image sensor 132 performs infrared light video image acquisition according to the first shutter and the first gain, based on the infrared light output by the light splitting module 120;
  • the main chip 140 may perform fusion processing on the visible light video image collected by the first image sensor 131 and the infrared light video image collected by the second image sensor 132 to obtain a preview code stream.
  • The fusion processing of the main chip 140 can apply different fusion algorithm strategies to different scenes: in some cases the brightness information of the visible light video image and the infrared light video image is fused, and in other cases their detail information is fused.
  • In order to further optimize the display effect of video images and reduce the bandwidth required for video transmission, after the main chip 140 fuses the visible light video image with the infrared light video image, it can also perform image signal processing (ISP), encoding, compression, and other processing; the specific implementation is not described in detail here.
  • By deploying a light splitting module in the camera to divide the incident light into visible light and infrared light, and deploying a first image sensor and a second image sensor in the visible light and infrared light output directions of the light splitting module to collect the visible light video image and the infrared light video image respectively, and then having the main chip fuse the two, the camera guarantees not only the color of the video image but also its details and brightness, optimizing the display effect of the video image.
  • During video image acquisition in low-light scenes, a slow shutter and a large gain are generally used to ensure the brightness of the picture. In some special scenes the camera may also need to capture still pictures, and flash lighting is usually used when doing so; with a large gain, the captured picture may then be overexposed and noisy, leaving the picture details unclear.
  • In addition, a slow shutter easily causes motion trails of a vehicle in the picture, in which case the accuracy of vehicle feature recognition based on the video image is poor.
  • Therefore, the cameras provided in the embodiments of the present application can also capture pictures with an independent shutter and gain during video image acquisition. That is, when a capture is required during video image collection, a shutter and gain independent of those used for video image acquisition are used for the picture capture.
  • the main chip 140 may be further configured to transmit the capture instruction to the first image sensor 131 and the second image sensor 132 when the capture instruction is received, so that the first image sensor 131 Synchronized capture with the second image sensor 132.
  • the first image sensor 131 may also be used to capture a picture according to a second shutter and a second gain when a capture instruction is received, and transmit the captured visible light picture to the main chip 140.
  • the second image sensor 132 may also be used to capture a picture according to the second shutter and the second gain when the capture instruction is received, and transmit the captured infrared light picture to the main chip 140.
  • the main chip 140 may also be used for fusing the captured visible light picture and the infrared light picture, and outputting the fused captured picture.
  • the capture instruction may indicate a synchronization operation and a capture operation.
  • the capture instruction may be issued in the form of one instruction, or may be issued in the form of multiple instructions including a capture signal and a synchronization command.
  • the synchronization command includes, but is not limited to, a flash synchronization command and a shutter-and-gain synchronization command for the image sensors (the first image sensor 131 and the second image sensor 132).
  • the capture instruction may be triggered by, for example, an algorithm or an external signal for determining that an image capture is required.
  • For example, a target detection algorithm can detect and classify targets in the monitored scene, correctly identifying targets such as motor vehicles, non-motor vehicles, or pedestrians, while a tracking algorithm tracks the targets.
  • When a capture is needed, a trigger signal is issued, and the corresponding sensors and other modules perform the corresponding picture capture logic through their interfaces.
  • the main chip 140 may transmit the capture instruction to the first image sensor 131 and the second image sensor 132, respectively, so as to control the first image sensor 131 and the second image sensor 132 to perform picture capture in synchronization.
  • When the first image sensor 131 and the second image sensor 132 receive a capture instruction, they can synchronously capture pictures (a visible light picture and an infrared light picture) according to the second shutter and the second gain, and transmit the captured visible light picture and infrared light picture to the main chip 140, which performs the fusion processing.
  • the main chip 140 may fuse the received visible light picture and infrared light picture, and perform ISP processing, encoding, and compression processing on the fused picture.
  • The second shutter is shorter, and the second gain smaller, than the first shutter and the first gain, respectively.
  • the first image sensor 131 may be specifically configured to interrupt the video image acquisition when receiving a capture instruction, perform a picture capture according to the second shutter and the second gain, and transmit the captured visible light picture to the main chip 140 ;
  • the second image sensor 132 may be specifically configured to, when receiving a capture instruction, interrupt video image collection, perform picture capture according to the second shutter and the second gain, and transmit the captured infrared light picture to the main chip 140.
  • In this embodiment, the video image acquisition may be interrupted while a picture snapshot is performed, so that all the resources of the image sensors (the first image sensor 131 and the second image sensor 132) can be devoted either to video image acquisition or to picture capture.
  • The image sensor collects video image data frame by frame. When a capture instruction is received, video image collection is immediately interrupted, the picture capture is performed, and video image collection resumes after the capture is completed. The resources involved include the sensor's storage, its shutter and gain settings, synchronization control of the fill light, and so on; after the capture instruction is received, these resources are used for the picture capture.
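The interrupt-snap-resume behaviour can be sketched as a small state machine. The class name and the shutter/gain values below are illustrative assumptions, not values taken from the patent.

```python
# Minimal sketch of the "interrupt video, snap, resume" behaviour described
# above. Class name and shutter/gain values are illustrative assumptions.

VIDEO = {"shutter_ms": 40.0, "gain_db": 24.0}  # "first" shutter/gain (video)
SNAP = {"shutter_ms": 2.0, "gain_db": 6.0}     # "second" shutter/gain (snapshot)

class Sensor:
    def __init__(self):
        self.params = VIDEO               # normally acquiring video frames

    def expose(self):
        # Produce one frame with the currently active shutter/gain pair.
        return ("frame", self.params["shutter_ms"], self.params["gain_db"])

    def on_capture_instruction(self):
        self.params = SNAP                # interrupt video acquisition
        snapshot = self.expose()          # one short, low-gain exposure
        self.params = VIDEO               # resume video acquisition
        return snapshot

s = Sensor()
assert s.expose()[1:] == (40.0, 24.0)                # video parameters active
assert s.on_capture_instruction()[1:] == (2.0, 6.0)  # snapshot parameters
assert s.expose()[1:] == (40.0, 24.0)                # video parameters restored
```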
  • Alternatively, the first image sensor 131 may be specifically configured to perform video image capture according to the first shutter and the first gain and, when a capture instruction is received, perform picture capture according to the second shutter and the second gain, transmitting the visible light video image and the visible light picture to the main chip 140, respectively.
  • Likewise, the second image sensor 132 may be specifically configured to perform video image capture according to the first shutter and the first gain and, when a capture instruction is received, perform picture capture according to the second shutter and the second gain, transmitting the infrared light video image and the infrared light picture to the main chip 140, respectively.
  • In this embodiment, the image sensors (the first image sensor 131 and the second image sensor 132) can perform video image capture and picture capture at the same time, avoiding interruption of video image collection. That is, each image sensor uses part of its resources for video image capture and reserves another part for picture capture (this part is idle when no capture is needed), ensuring that video image collection and picture capture proceed independently of each other.
  • For example, the image sensor's frames can be divided into odd and even (10101010 ...) frames, with the odd and even frames alternately collected using two different sets of shutter and gain. The data collected in the odd frames is transmitted to the main chip 140 as the video image; when a capture instruction is received, one of the even frames is transmitted to the main chip 140 as the snapshot picture.
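A minimal sketch of this odd/even-frame routing (function and variable names are hypothetical): frames at even indices here play the role of the odd-numbered video frames, and the first subsequent alternate frame after a capture command becomes the snapshot.

```python
# Sketch of the parity-frame scheme described above: odd-numbered frames
# (indices 0, 2, ... here) always go to the video stream; when a capture
# command is pending, the next even-numbered frame is diverted as the snapshot.

def route_frames(frames, capture_at):
    """frames: iterable of frame ids; capture_at: index when the command arrives."""
    video, snaps = [], []
    pending = False
    for i, frame in enumerate(frames):
        if i == capture_at:
            pending = True
        if i % 2 == 0:               # odd-numbered frame (1st, 3rd, ...): video set
            video.append(frame)
        elif pending:                # even-numbered frame: snapshot set
            snaps.append(frame)
            pending = False
    return video, snaps

video, snaps = route_frames(range(8), capture_at=3)
assert video == [0, 2, 4, 6]         # video stream is uninterrupted
assert snaps == [3]                  # one alternate frame became the snapshot
```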
  • Alternatively, the capture may be implemented by directly copying the collected video image: the frame being collected when the capture instruction is received is copied and used as the captured picture, which is transmitted to the main chip 140.
  • the main chip 140 performs subsequent processing in the manner described above, and its specific implementation is not described herein.
  • The specific implementation of fusing the visible light video image with the infrared light video image, and of fusing the visible light picture with the infrared light picture, may refer to related descriptions in existing schemes and is not detailed in the embodiments of the present application.
  • the camera supports picture capture during video image acquisition.
  • The picture capture uses a shutter and gain independent of those used for video image capture. The shutter and gain can be adjusted during picture capture to ensure the sharpness of the license plate and vehicle details, prevent overexposure of the vehicle and license plate, and, with a short shutter, capture a fast-moving vehicle without motion trails.
  • the above camera may further include a synchronization processing module 150.
  • the synchronization processing module 150 may be configured to receive a capture instruction transmitted by the main processing chip 140 and transmit the capture instruction to the first image sensor 131 and the second image sensor 132, respectively.
  • the synchronization processing module 150 may also be configured to receive a visible light picture captured by the first image sensor 131 and an infrared light picture captured by the second image sensor 132, and synchronously transmit the visible light picture and the infrared light picture to the main chip 140.
  • A synchronization processing module 150 may also be deployed in the camera, between the image sensors (including the first image sensor 131 and the second image sensor 132) and the main chip 140.
  • the synchronization processing module 150 may include, but is not limited to, a Field-Programmable Gate Array (FPGA) chip or other chips that support two-way sensor data reception.
  • the main chip 140 may transmit the capture instruction to the synchronization processing module 150, and the synchronization processing module 150 may transmit it to the first image sensor 131 and the second image sensor 132, respectively.
  • the synchronization processing module 150 may transmit the capture command to the first image sensor 131 and the second image sensor 132 within a preset time.
  • In addition, the supplementary light is controlled to light up for a preset exposure time, so that the exposure of the captured picture is synchronized with the flash of the supplementary light.
  • the synchronization processing module 150 is also responsible for receiving data sent by the two sensors according to a preset timing, so as to ensure that the data sent from the sensors are captured pictures.
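The timing relationship above can be sketched with simple arithmetic (all timing values are made-up illustrative numbers): the command is forwarded within a preset window, and the fill light stays on for exactly the snapshot exposure so the flash and the exposure overlap.

```python
# Toy timing sketch: the synchronization module forwards the capture command
# within a preset window and keeps the supplementary light on for exactly the
# snapshot exposure time. All timing values are made-up illustrative numbers.

def schedule(t_cmd, forward_delay_ms, exposure_ms):
    """Return (sensor_start, flash_on, flash_off) timestamps in milliseconds."""
    start = t_cmd + forward_delay_ms    # both sensors are triggered together
    return start, start, start + exposure_ms

start, flash_on, flash_off = schedule(t_cmd=0.0, forward_delay_ms=1.0, exposure_ms=2.0)
assert flash_on == start                # flash begins with the exposure
assert flash_off - flash_on == 2.0      # flash covers the whole exposure
```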
  • the captured pictures may be transmitted to the synchronization processing module 150.
  • When the synchronization processing module 150 receives the captured pictures (the visible light picture and the infrared light picture) transmitted by the first image sensor 131 and the second image sensor 132, it can transmit them to the main chip 140, and the main chip 140 outputs the fused snapshot picture.
  • When the synchronization processing module 150 transmits the visible light picture and the infrared light picture to the main chip 140, it can splice them: one frame of the visible light picture and one frame of the infrared light picture are stitched into one frame and transmitted to the main chip 140. Since a data header must be added for each data transmission, this stitching reduces redundant transmission of data headers and, at the same time, reduces the number of interrupts in the data transmission mechanism, thereby improving the efficiency of transmitting captured pictures to the main chip 140.
  • When the main chip 140 receives the stitched picture transmitted by the synchronization processing module 150, it can split the stitched picture into two independent snapshot pictures (the visible light picture and the infrared light picture) and perform fusion processing on the two frames.
  • For example, the color information of the visible light picture and the detail and brightness information of the infrared light picture are extracted and fused into one frame of color picture.
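As a toy illustration of that fusion rule (pixel format and helper names are assumptions for illustration; real fusion algorithms are considerably more involved), chroma can be taken from the visible-light pixel and luma from the infrared pixel:

```python
# Toy per-pixel fusion following the rule above: chroma (U, V) from the
# visible-light picture, luma (Y) from the infrared picture. Pixel format and
# helper names are assumptions, not the patent's scheme.

def fuse_pixel(visible_yuv, infrared_y):
    """Keep the visible pixel's chroma, replace its luma with the infrared luma."""
    _, u, v = visible_yuv
    return (infrared_y, u, v)

def fuse(visible, infrared):
    return [[fuse_pixel(vp, ip) for vp, ip in zip(vrow, irow)]
            for vrow, irow in zip(visible, infrared)]

vis = [[(80, 100, 140)]]   # dim but colourful visible-light pixel (Y, U, V)
ir = [[200]]               # bright, detailed infrared luma
assert fuse(vis, ir) == [[(200, 100, 140)]]
```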
  • the synchronization processing module 150 may transmit visible light pictures and infrared light pictures to the main chip 140, and the main chip 140 performs fusion processing.
  • In addition, the synchronization processing module 150 may further perform fusion processing on the videos transmitted by the first image sensor 131 and the second image sensor 132.
  • the camera may further include a slave processing chip (hereinafter referred to as a slave chip) 160.
  • the master chip 140 may be specifically configured to transmit a visible light picture and an infrared light picture to the slave chip 160;
  • the slave chip 160 may be used to fuse the visible light picture and the infrared light picture, and transmit the fused picture to the master chip 140;
  • The main chip 140 may also be specifically configured to perform ISP processing, encoding, and compression on the fused picture and then output it.
  • To reduce the workload of the master chip 140 and simplify its implementation, a slave chip 160 can also be deployed in the camera.
  • The slave chip 160 can be connected to the master chip 140 and exchange data with it.
  • When the master chip 140 receives the visible light picture and the infrared light picture, it can transmit them to the slave chip 160.
  • When the slave chip 160 receives the visible light picture and the infrared light picture, it can fuse them and transmit the fused picture back to the main chip 140.
  • The main chip 140 then performs ISP processing, encoding, and compression (such as JPEG (Joint Photographic Experts Group) compression) on the fused picture.
  • In one example, before transmitting the fused picture to the main chip 140, the slave chip 160 may perform ISP processing on it and transmit the processed picture to the main chip 140; when the main chip 140 receives the picture transmitted by the slave chip 160, it may perform secondary ISP processing, encoding, and compression.
  • In another example, the slave chip 160 may perform ISP processing, encoding, and compression on the fused picture, and then transmit the processed picture to the main chip 140; in that case, the main chip 140 need not process the picture further.
  • Since both the master chip 140 and the slave chip 160 can perform ISP processing, encoding, and compression on the picture, processing tasks can be allocated and adjusted in real time according to the load balancing of the camera system's chips.
  • The visible light picture and infrared light picture transmitted from the master chip 140 to the slave chip 160 may be a spliced picture (spliced by the master chip 140, or by the synchronization processing module 150 if one is deployed in the camera) or two independent snapshot frames.
  • When the slave chip 160 receives a spliced picture, it can split it into two independent frames before performing the fusion.
  • After obtaining the fused captured picture, the camera may also perform vehicle feature recognition on it to obtain vehicle feature information, such as one or more of body color, vehicle model, window face recognition, vehicle main brand, and sub-brand identification.
  • When the camera is further provided with a slave chip 160, the slave chip 160 may also be used to perform vehicle feature recognition on the fused picture based on a deep learning algorithm and transmit the recognition result to the main processing chip.
  • A deep learning algorithm may be integrated in the slave chip 160. After the slave chip 160 completes the picture fusion based on the visible light picture and infrared light picture transmitted by the master chip 140, it may also perform vehicle feature recognition on the fused picture based on the deep learning algorithm and transmit the recognition result (that is, one or more of body color, vehicle model, window face recognition, vehicle main brand, and sub-brand identification) to the main chip 140, which processes the received recognition result accordingly.
  • FIG. 4 is a schematic structural diagram of a camera according to an embodiment of the present application.
  • The camera includes a lens 401, a light splitting module 402, a first image sensor 411, a second image sensor 412 (in this embodiment the image sensors are global-exposure image sensors, such as global-exposure Complementary Metal-Oxide-Semiconductor (CMOS) image sensors), an FPGA chip 403 (in this embodiment the synchronization processing module is implemented as an FPGA chip, as an example), a master chip 404, and a slave chip 405 integrated with a Graphics Processing Unit (GPU) and a deep learning algorithm.
  • The incident light that enters the camera through the lens 401 is split into visible light and infrared light by the light splitting module 402.
  • The light splitting module 402 can be implemented by a prism: the visible light portion of the incident light passes through the prism and is transmitted out, while the infrared light portion exits after one reflection inside the prism.
  • The first image sensor 411 and the second image sensor 412 are respectively disposed at the exit positions of the two light beams of the light splitting module 402: the first image sensor 411 at the exit position of the visible light, and the second image sensor 412 at the exit position of the infrared light.
  • The first image sensor 411 and the second image sensor 412 can collect the visible light video image and the infrared light video image, respectively, according to the first shutter and first gain, and feed them into the FPGA chip 403.
  • After fusing the visible light video image and the infrared light video image, the FPGA chip 403 outputs the fused video image to the main chip 404; the main chip 404 performs ISP processing, algorithm analysis, and encoding and compression on the fused video image, and then outputs a preview bitstream.
  • After the master chip 404 obtains the fused video image, it can also transmit it to the slave chip 405, which performs target analysis on the fused video image based on the deep learning algorithm, for example, one or more of target detection, target tracking, and target classification.
  • When the main chip 404 receives a capture instruction, it transmits the instruction to the FPGA chip 403, which transmits it to the first image sensor 411 and the second image sensor 412, respectively.
  • The capture instruction received by the main chip 404 may be triggered, for example, by an algorithm that determines an image capture is required, or by an external signal.
  • When the first image sensor 411 and the second image sensor 412 receive the capture instruction, they may interrupt video image acquisition and perform picture capture according to the second shutter and second gain: the first image sensor 411 captures the visible light picture and the second image sensor 412 captures the infrared light picture.
  • The first image sensor 411 and the second image sensor 412 respectively transmit the captured visible light picture and infrared light picture to the FPGA chip 403, which splices them (one frame of visible light picture and one frame of infrared light picture are spliced into one frame) and outputs the spliced picture to the main chip 404.
  • When the master chip 404 receives the spliced picture transmitted by the FPGA chip 403, it splits it into two independent captured pictures (a visible light picture and an infrared light picture) and transmits them to the slave chip 405.
  • When the slave chip 405 receives the visible light picture and the infrared light picture transmitted by the master chip 404, it fuses them at the pixel level.
  • The slave chip 405 extracts the color information of the visible light picture and the detail and brightness information of the infrared light picture, merges the two into one frame of color picture, and performs ISP processing.
  • Then, on the one hand, the slave chip 405 transmits the processed picture to the master chip 404; on the other hand, it performs vehicle feature recognition on the processed picture to obtain recognition results, such as one or more of vehicle model, body color, vehicle brand (including main brand and sub-brand), window face, whether the driver is making a phone call, and whether a seat belt is worn, and transmits the recognition results to the master chip 404.
  • The master chip 404 and the slave chip 405 can interact through Universal Serial Bus (USB) communication, Peripheral Component Interconnect Express (PCIE) communication, or network communication.
  • The camera can support interrupt capture (interrupting video image acquisition to capture a picture), and the camera can use different shutters and gains for video image acquisition and picture capture.
  • For picture capture, the shutter and gain can be adjusted (to values smaller than those used for video image acquisition) on the principle of keeping the license plate and vehicle details (such as model and brand) clear, preventing the vehicle and license plate from being overexposed, and ensuring that a fast-moving vehicle is captured with a short enough shutter that it leaves no trailing smear.
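The shutter bound implied above can be sanity-checked with simple motion-blur arithmetic. The sketch below is illustrative only; the speed, pixel scale, and blur budget are hypothetical values, not figures from this application:

```python
def max_shutter_s(speed_kmh: float, pixels_per_meter: float,
                  max_blur_px: float = 1.0) -> float:
    """Longest shutter (seconds) keeping a vehicle's motion blur under
    max_blur_px, given how many image pixels span one meter of road."""
    blur_px_per_s = (speed_kmh / 3.6) * pixels_per_meter
    return max_blur_px / blur_px_per_s

# A car at 72 km/h imaged at 50 px/m sweeps 1000 px/s, so a 1-pixel
# blur budget requires a shutter of about 1 ms or faster.
shutter = max_shutter_s(72, 50)
assert abs(shutter - 0.001) < 1e-9
```

Doubling the vehicle speed halves the allowable shutter, which is why the snapshot shutter (the "second shutter") must be independent of the slower video shutter.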
  • FIG. 5 is a schematic flowchart of a snapped picture fusion method according to an embodiment of the present application.
  • The snapped-picture fusion method can be applied to the camera shown in FIG. 1 to FIG. 3. As shown in FIG. 5, the method may include the following steps.
  • Step S500: When the main processing chip receives a capture instruction, it transmits the capture instruction to the first image sensor and the second image sensor, respectively.
  • the capture instruction may include multiple instructions such as a capture signal and a synchronization command.
  • the snap command is triggered by an algorithm or an external signal.
  • The main chip may transmit the capture instruction to the first image sensor and the second image sensor, respectively, so as to perform interrupt capture, that is, to interrupt video image acquisition and capture a picture.
  • Step S510: When the first image sensor receives the capture instruction, it performs picture capture according to a second shutter and a second gain to obtain a visible light picture; when the second image sensor receives the capture instruction, it performs picture capture according to the second shutter and the second gain to obtain an infrared light picture.
  • When the first image sensor and the second image sensor receive the capture instruction, they can capture pictures (a visible light picture and an infrared light picture) synchronously according to the second shutter and second gain, and transmit the captured pictures to the main chip, which performs the fusion processing.
  • Step S520: The main processing chip outputs a captured picture obtained by fusing the visible light picture and the infrared light picture.
  • When the main chip receives the captured visible light picture and infrared light picture, it can fuse them and perform ISP processing, encoding, and compression on the fused picture.
  • the first image sensor captures a picture and transmits the captured visible light picture to the main processing chip; the second image sensor captures a picture and transmits the captured infrared light picture to the main processing chip.
  • In one example, the first image sensor interrupts visible light video image acquisition, captures a picture according to the second shutter and second gain, and transmits the captured visible light picture to the main processing chip; the second image sensor likewise interrupts infrared light video image acquisition, captures a picture according to the second shutter and second gain, and transmits the captured infrared light picture to the main processing chip.
  • In this example, video image acquisition may be interrupted for picture capture, so the image sensor can devote all of its resources to either video image acquisition or picture capture, which can improve the quality of the video image or the captured picture.
  • In another example, the first image sensor collects visible light video images according to the first shutter and first gain, captures pictures according to the second shutter and second gain, and transmits the visible light video image and the visible light picture to the main processing chip, respectively;
  • the second image sensor collects infrared light video images according to the first shutter and first gain, captures pictures according to the second shutter and second gain, and transmits the infrared light video image and the infrared light picture to the main processing chip, respectively.
  • In this example, the image sensors (including the first image sensor and the second image sensor) can perform video image acquisition and picture capture at the same time, so video image acquisition is never interrupted: each image sensor uses part of its resources for video acquisition and reserves another part for picture capture (that part sits idle when no capture is needed), ensuring that video image acquisition and picture capture proceed independently of each other.
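One way to picture this reserved-resource scheme is an interleaved frame schedule in which alternate frame slots carry the two shutter/gain sets. This is a toy model with assumed exposure values; real sensors implement the scheduling in silicon:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Exposure:
    shutter_s: float
    gain_db: float

VIDEO = Exposure(1 / 25, 24.0)   # "first" shutter/gain (hypothetical values)
SNAP = Exposure(1 / 1000, 6.0)   # "second" shutter/gain (hypothetical values)

def frame_schedule(n_frames: int, capture_at: set) -> list:
    """Even slots always carry the video stream; odd slots are reserved
    for snapshots and stay idle unless a capture instruction arrived."""
    schedule = []
    for i in range(n_frames):
        if i % 2 == 0:
            schedule.append(("video", VIDEO))
        elif i in capture_at:
            schedule.append(("snapshot", SNAP))
        else:
            schedule.append(("idle", SNAP))
    return schedule

kinds = [k for k, _ in frame_schedule(6, capture_at={3})]
assert kinds == ["video", "idle", "video", "snapshot", "video", "idle"]
```

The video stream sees a constant frame cadence regardless of snapshots, which is the independence property the text describes; the cost is that half the slots are reserved.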
  • In one example, the above camera may further include a synchronization processing module; transmitting the capture instruction to the first image sensor and the second image sensor, respectively, may include the following steps:
  • Step S501: The main processing chip transmits the capture instruction to the synchronization processing module.
  • Step S502: The synchronization processing module transmits the capture instruction to the first image sensor and the second image sensor, respectively, within a preset time.
  • the first image sensor transmitting the captured visible light image to the main processing chip, and the second image sensor transmitting the captured infrared light image to the main processing chip may include the following steps:
  • Step S511: The first image sensor transmits the captured visible light picture to the synchronization processing module, and the second image sensor transmits the captured infrared light picture to the synchronization processing module;
  • Step S512: The synchronization processing module transmits the visible light picture and the infrared light picture to the main processing chip.
  • In this example, a synchronization processing module may also be deployed in the camera, between the image sensors (including the first image sensor and the second image sensor) and the main chip.
  • When the main chip receives the capture instruction, it may transmit the instruction to the synchronization processing module, which transmits it to the first image sensor and the second image sensor, respectively.
  • After the first image sensor and the second image sensor complete picture capture, the captured pictures may be transmitted to the synchronization processing module.
  • When the synchronization processing module receives the captured pictures (including the visible light picture and the infrared light picture) transmitted by the first image sensor and the second image sensor, it can transmit them to the main chip, and the main chip outputs the fused snapshot picture.
  • The synchronization processing module can splice the visible light picture and the infrared light picture, splicing one frame of each into a single frame, and transmit it to the main processing chip, improving the efficiency of transmitting captured pictures to the main chip.
  • When the main chip receives the spliced picture transmitted by the synchronization processing module, it can split it into two independent snapshot pictures (a visible light picture and an infrared light picture) and fuse the two frames, extracting the color information of the visible light picture and the detail and brightness information of the infrared light picture and merging them into one frame of color picture.
  • the above camera may further include a slave processing chip.
  • Outputting the fused captured picture by the main processing chip may include the following steps:
  • Step S521: The master processing chip transmits the visible light picture and the infrared light picture to the slave processing chip;
  • Step S522: The slave processing chip fuses the visible light picture and the infrared light picture to obtain the captured picture, and transmits the captured picture to the main processing chip;
  • Step S523: The main processing chip performs ISP processing, encoding, and compression on the fused picture and outputs it.
  • In order to reduce the workload of the master chip and simplify its implementation, a slave chip can also be deployed in the camera; the slave chip can be connected to the master chip and exchange data with it.
  • When the master chip receives the captured visible light picture and infrared light picture, it can transmit them to the slave chip.
  • When the slave chip receives the visible light picture and the infrared light picture, it can fuse them and transmit the fused picture back to the main chip.
  • The main chip then performs ISP processing, encoding, and compression (such as JPEG compression) on the fused picture.
  • In one example, before transmitting the fused picture to the master chip, the slave chip may perform ISP processing on it and transmit the processed picture to the master chip; when the master chip receives the picture transmitted by the slave chip, it can perform secondary ISP processing, encoding, and compression.
  • In another example, the slave chip may perform ISP processing, encoding, and compression on the fused picture, and then transmit the processed picture to the master chip; in that case, the master chip need not process the picture further.
  • Since both the master chip and the slave chip can perform ISP processing, encoding, and compression on the picture, processing tasks can be allocated and adjusted in real time according to the load balancing of the camera system's chips.
  • After obtaining the fused captured picture, the camera may also perform vehicle feature recognition on it to obtain vehicle feature information, such as one or more of body color, vehicle model, window face recognition, vehicle main brand, and sub-brand identification.
  • When the camera is further provided with a slave chip, the slave chip may also be used to perform vehicle feature recognition on the fused picture based on a deep learning algorithm and transmit the recognition result to the master processing chip.
  • A deep learning algorithm may be integrated in the slave chip. After the slave chip completes the picture fusion based on the visible light picture and infrared light picture transmitted by the master chip, it may also perform vehicle feature recognition on the fused picture based on the deep learning algorithm and transmit the recognition result (that is, one or more of body color, vehicle model, window face recognition, vehicle main brand, and sub-brand identification) to the master chip, which processes the received recognition result accordingly.
  • In the embodiments of the present application, when the first image sensor and the second image sensor receive a capture instruction while acquiring video images according to the first shutter and first gain, they can perform picture capture according to the second shutter and second gain.
  • The captured visible light picture and infrared light picture are transmitted to the main processing chip, which outputs the fused captured picture. That is, the camera supports picture capture during video image acquisition, and picture capture can have an independent shutter and gain: by adjusting the shutter and gain used for picture capture, the license plate and vehicle details stay clear, the vehicle and license plate are not overexposed, and a fast-moving vehicle is captured with a short enough shutter that it leaves no trailing smear.


Abstract

The present application provides a camera and a snapped-picture fusion method. The camera includes: a lens, a light splitting module, a first image sensor, a second image sensor, and a main processing chip. The light splitting module is configured to split the incident light entering the camera through the lens into visible light and infrared light; the first image sensor is configured to receive the visible light and perform video image acquisition according to a first shutter and a first gain to obtain a visible light video image; the second image sensor is configured to receive the infrared light and perform video image acquisition according to the first shutter and the first gain to obtain an infrared light video image; the main processing chip is configured to output a video image obtained by fusing the visible light video image and the infrared light video image.

Description

Camera and snapped-picture fusion method
Cross-Reference to Related Applications
This patent application claims priority to Chinese Patent Application No. 201810563691.4, filed on June 4, 2018 and entitled "Camera and snapped-photo fusion method", the entire content of which is incorporated herein by reference.
Technical Field
The present application relates to video surveillance technology, and in particular, to a camera and a snapped-picture fusion method.
Background
Intelligent traffic cameras mainly capture snapshots with a single sensor using infrared flash or white-light flash. Images captured with infrared flash are color-cast or black-and-white. White-light flash can capture color pictures but requires a white-light strobe for supplementary lighting. White-light strobes cause fairly serious light pollution; moreover, a sudden white-light flash on a road at night can cause temporary blindness in drivers, which may lead to danger.
Summary
In view of this, the present application provides a camera and a snapped-picture fusion method.
Specifically, the present application is implemented through the following technical solutions:
According to a first aspect of the embodiments of the present application, a camera applied to a video surveillance system is provided, the camera including: a lens, a light splitting module, a first image sensor, a second image sensor, and a main processing chip;
the light splitting module is configured to split the incident light entering the camera through the lens into visible light and infrared light;
the first image sensor is configured to receive the visible light output by the light splitting module and perform visible light video image acquisition according to a first shutter and a first gain to obtain a visible light video image;
the second image sensor is configured to receive the infrared light output by the light splitting module and perform infrared light video image acquisition according to the first shutter and the first gain to obtain an infrared light video image;
the main processing chip is configured to output a video image obtained by fusing the visible light video image and the infrared light video image, where the fusion processing includes fusing the brightness information of the visible light video image and the infrared light video image, or fusing the detail information of the visible light video image and the infrared light video image.
Optionally, the main processing chip is further configured to, upon receiving a capture instruction, transmit the capture instruction to the first image sensor and the second image sensor, respectively;
the first image sensor is further configured to, upon receiving the capture instruction, perform picture capture according to a second shutter and a second gain to obtain a visible light picture;
the second image sensor is further configured to, upon receiving the capture instruction, perform picture capture according to the second shutter and the second gain to obtain an infrared light picture;
the main processing chip is further configured to output a snapshot picture obtained by fusing the visible light picture and the infrared light picture.
Optionally, the first image sensor is further configured to interrupt the visible light video image acquisition upon receiving the capture instruction;
the second image sensor is further configured to interrupt the infrared light video image acquisition upon receiving the capture instruction.
Optionally, the camera further includes: a synchronization processing module, where:
the synchronization processing module is configured to receive the capture instruction transmitted by the main processing chip and transmit the capture instruction to the first image sensor and the second image sensor, respectively, within a preset time;
the synchronization processing module is further configured to receive, according to a preset timing, the visible light picture transmitted by the first image sensor and the infrared light picture transmitted by the second image sensor, and transmit the visible light picture and the infrared light picture to the main processing chip. Optionally, the synchronization processing module is specifically configured to splice the synchronously captured visible light picture and infrared light picture into one frame and transmit it to the main processing chip.
Optionally, the synchronization processing module is further configured to splice one synchronously captured frame of the visible light picture and one frame of the infrared light picture into one spliced picture and transmit the spliced picture to the main processing chip;
the main processing chip is further configured to split one frame of the spliced picture into one frame of the visible light picture and one frame of the infrared light picture.
Optionally, the main processing chip is further configured to perform at least one of the following on the snapshot picture: image signal processing (ISP), encoding, and compression.
Optionally, the camera further includes a slave processing chip, where:
the main processing chip is specifically configured to transmit the visible light picture and the infrared light picture to the slave processing chip;
the slave processing chip is configured to fuse the visible light picture and the infrared light picture to obtain the snapshot picture, and transmit the snapshot picture to the main processing chip. Optionally, the slave processing chip is further configured to perform ISP processing on the fused picture before transmitting it to the main processing chip.
Optionally, the slave processing chip is further configured to perform at least one of the following on the snapshot picture: ISP processing, encoding, and compression.
Optionally, the slave processing chip is further configured to perform vehicle feature recognition on the snapshot picture based on a deep learning algorithm and transmit the recognition result to the main processing chip.
According to a second aspect of the embodiments of the present application, a snapped-picture fusion method applied to a camera in a video surveillance system is provided, including:
the light splitting module in the camera splitting the incident light entering the camera through the lens of the camera into visible light and infrared light;
the first image sensor in the camera receiving the visible light output by the light splitting module and performing visible light video image acquisition according to a first shutter and a first gain to obtain a visible light video image;
the second image sensor in the camera receiving the infrared light output by the light splitting module and performing infrared light video image acquisition according to the first shutter and the first gain to obtain an infrared light video image;
the main processing chip in the camera outputting a video image obtained by fusing the visible light video image and the infrared light video image, where the fusion processing includes fusing the brightness information of the visible light video image and the infrared light video image, or fusing the detail information of the visible light video image and the infrared light video image.
Optionally, when the main processing chip receives a capture instruction, it transmits the capture instruction to the first image sensor and the second image sensor, respectively;
when the first image sensor receives the capture instruction, it performs picture capture according to a second shutter and a second gain to obtain a visible light picture;
when the second image sensor receives the capture instruction, it performs picture capture according to the second shutter and the second gain to obtain an infrared light picture;
the main processing chip outputs a snapshot picture obtained by fusing the visible light picture and the infrared light picture.
Optionally, when the first image sensor receives the capture instruction, it interrupts the visible light video image acquisition;
when the second image sensor receives the capture instruction, it interrupts the infrared light video image acquisition.
Optionally, transmitting the capture instruction to the first image sensor and the second image sensor respectively includes:
the main processing chip transmitting the capture instruction to the synchronization processing module in the camera;
the synchronization processing module transmitting the capture instruction to the first image sensor and the second image sensor, respectively, within a preset time.
Optionally, the first image sensor transmits the visible light picture to the synchronization processing module, and the second image sensor transmits the infrared light picture to the synchronization processing module;
the synchronization processing module transmits the visible light picture and the infrared light picture to the main processing chip.
Optionally, transmitting the visible light picture and the infrared light picture to the main processing chip includes:
the synchronization processing module splicing one synchronously captured frame of the visible light picture and one frame of the infrared light picture into one spliced picture; and
the synchronization processing module transmitting the spliced picture to the main processing chip.
Optionally, the main processing chip transmits the visible light picture and the infrared light picture to the slave processing chip in the camera;
the slave processing chip fuses the visible light picture and the infrared light picture to obtain the snapshot picture, and transmits the snapshot picture to the main processing chip.
Optionally, the main processing chip performs at least one of the following on the snapshot picture: image signal processing (ISP), encoding, and compression.
Optionally, the slave processing chip performs at least one of the following on the snapshot picture: ISP processing, encoding, and compression.
Optionally, the slave processing chip performs vehicle feature recognition on the snapshot picture based on a deep learning algorithm and transmits the recognition result to the main processing chip.
With the camera of the embodiments of the present application, when the first image sensor and the second image sensor receive a capture signal and a synchronization command while acquiring video images according to the first shutter and first gain, they can capture pictures according to the second shutter and second gain and transmit the captured visible light picture and infrared light picture to the main processing chip, which outputs the fused snapshot picture. That is, the camera supports picture capture during video image acquisition, and picture capture can have an independent shutter and gain. By adjusting the shutter and gain used for picture capture, the clarity of the license plate and vehicle details in the captured picture can be guaranteed, overexposure of the vehicle and license plate can be prevented, and a fast-moving vehicle can be captured with a relatively short shutter so that it leaves no trailing smear.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of a camera according to an exemplary embodiment of the present application.
FIG. 2 is a schematic structural diagram of a camera according to another exemplary embodiment of the present application.
FIG. 3 is a schematic structural diagram of a camera according to another exemplary embodiment of the present application.
FIG. 4 is a schematic structural diagram of a camera according to an exemplary embodiment of the present application.
FIG. 5 is a schematic flowchart of a snapped-picture fusion method according to an exemplary embodiment of the present application.
FIG. 6 is a schematic flowchart of a snapped-picture fusion method according to another exemplary embodiment of the present application.
FIG. 7 is a schematic flowchart of a snapped-picture fusion method according to another exemplary embodiment of the present application.
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. When the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application as detailed in the appended claims.
The terms used in the present application are for the purpose of describing particular embodiments only and are not intended to limit the present application. The singular forms "a", "said", and "the" used in the present application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise.
To enable those skilled in the art to better understand the technical solutions provided by the embodiments of the present application, and to make the above objects, features, and advantages of the embodiments of the present application more apparent, the technical solutions in the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
Referring to FIG. 1, a schematic structural diagram of a camera provided by an embodiment of the present application. As shown in FIG. 1, the camera may include: a lens 110, a light splitting module 120, a first image sensor 131, a second image sensor 132, and a main processing chip (hereinafter referred to as the main chip) 140, where:
the light splitting module 120 may be used to split the incident light entering the camera through the lens 110 into visible light and infrared light and output them to the first image sensor 131 and the second image sensor 132, respectively;
the first image sensor 131 may be used to collect visible light video images according to the first shutter and first gain and transmit the collected visible light video images to the main chip 140;
the second image sensor 132 may be used to collect infrared light video images according to the first shutter and first gain and transmit the collected infrared light video images to the main chip 140;
the main chip 140 is used to fuse the collected visible light video image and infrared light video image and output the fused video image.
In the embodiments of the present application, the light splitting module 120 deployed in the camera splits the incident light entering through the lens into visible light and infrared light. Two image sensors (herein referred to as the first image sensor 131 and the second image sensor 132) are deployed in the camera, corresponding respectively to the visible light output direction and the infrared light output direction of the light splitting module 120. The first image sensor 131 acquires visible light video images from the visible light output by the light splitting module 120 according to the first shutter and first gain, and the second image sensor 132 acquires infrared light video images from the infrared light output by the light splitting module 120 according to the first shutter and first gain; the main chip 140 can then fuse the visible light video image collected by the first image sensor 131 and the infrared light video image collected by the second image sensor 132 to obtain a preview bitstream. The fusion processing of the main chip 140 can apply different fusion algorithm strategies for different scenes, fusing the brightness information of the video images in some cases and their detail information in others.
To further optimize the display effect of the video image and reduce the bandwidth required for video transmission, after fusing the visible light video image and the infrared light video image, the main chip 140 can also perform Image Signal Processing (ISP), encoding and compression, and other processing, the specific implementation of which is not described here.
It can be seen that by deploying in the camera a light splitting module that splits the incident light into visible light and infrared light, together with a first image sensor and a second image sensor corresponding respectively to the visible light and infrared light output directions of the light splitting module to collect visible light and infrared light video images, and having the main chip fuse the two, both the color of the video image and its detail and brightness are preserved, optimizing the display effect of the video image.
Further, in the embodiments of the present application, it is considered that when a camera acquires video images, it usually uses a slower shutter and a larger gain to guarantee picture brightness, while in some special scenes the camera may need to capture pictures. Since flash supplementary lighting is usually used during picture capture, a large gain may overexpose the picture and introduce heavy noise, blurring picture details. In addition, when snapping vehicles, if a vehicle moves too fast, a slow shutter easily causes the vehicle to smear in the picture; in that case, the accuracy of vehicle feature recognition based on the video image is poor.
To improve the accuracy of vehicle feature recognition, the camera provided by the embodiments of the present application can also capture pictures with an independent shutter and gain while acquiring video images; that is, when a snapshot is needed during video image acquisition, picture capture uses a shutter and gain independent of those used for video image acquisition.
Accordingly, in the embodiments of the present application, the main chip 140 can also be used to, upon receiving a capture instruction, transmit the capture instruction to the first image sensor 131 and the second image sensor 132, respectively, so that the two sensors capture synchronously.
The first image sensor 131 can also be used to, upon receiving the capture instruction, capture a picture according to the second shutter and second gain and transmit the captured visible light picture to the main chip 140.
The second image sensor 132 can also be used to, upon receiving the capture instruction, capture a picture according to the second shutter and second gain and transmit the captured infrared light picture to the main chip 140.
The main chip 140 can also be used to fuse the captured visible light picture and infrared light picture and output the fused snapshot picture.
In the embodiments of the present application, the capture instruction can indicate a synchronization operation and a capture operation; it can be issued as a single instruction or as multiple instructions including a capture signal and a synchronization command, where the synchronization command includes, but is not limited to, a flash synchronization command and a shutter-and-gain synchronization command for the image sensors (the first image sensor 131 and the second image sensor 132). The capture instruction can be triggered, for example, by an algorithm that determines an image capture is required, or by an external signal. In one example, a target detection algorithm can monitor the detection and classification of targets in the scene, correctly identifying targets such as motor vehicles, non-motor vehicles, and pedestrians, while a tracking algorithm tracks the targets. When a detected target reaches a preset trigger line, a capture trigger signal is issued and passed through an interface to the corresponding sensors and other modules to run the corresponding picture capture logic. In another example, an external device such as a vehicle detector or radar can detect a passing vehicle target and notify the capture device according to a set communication protocol; after parsing the protocol, the capture device notifies the corresponding sensors and other modules through an interface to capture the corresponding pictures.
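The trigger-line example above can be sketched as a simple crossing test between consecutive tracked positions. The function name and the horizontal-line representation are illustrative assumptions, not part of the application:

```python
def crossed_trigger_line(prev_y: float, curr_y: float, line_y: float) -> bool:
    """True when a tracked target's reference point moves across the
    preset trigger line between two consecutive frames."""
    return (prev_y - line_y) * (curr_y - line_y) <= 0 and prev_y != curr_y

# A tracked vehicle whose bottom edge moves from y=690 to y=710
# crosses a trigger line at y=700 and fires exactly one capture signal.
track = [650, 670, 690, 710, 730]
fires = [crossed_trigger_line(a, b, 700) for a, b in zip(track, track[1:])]
assert fires == [False, False, True, False]
```

Firing on the crossing (a sign change of the distance to the line) rather than on mere proximity guarantees one capture signal per pass, even for fast targets that skip many pixels per frame.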
When the main chip 140 receives a capture instruction, it can transmit the instruction to the first image sensor 131 and the second image sensor 132, respectively, to control the two sensors to capture pictures synchronously.
When the first image sensor 131 and the second image sensor 132 receive the capture instruction, they can synchronously capture pictures (a visible light picture and an infrared light picture) according to the second shutter and second gain, and transmit the captured visible light picture and infrared light picture to the main chip 140, which performs the fusion processing.
When the main chip 140 receives the captured visible light picture and infrared light picture, it can fuse them and perform ISP processing, encoding, compression, and other processing on the fused picture.
In one example, the second shutter and the second gain are smaller than the first shutter and the first gain.
In one example, the first image sensor 131 can specifically be used to, upon receiving the capture instruction, interrupt video image acquisition, capture a picture according to the second shutter and second gain, and transmit the captured visible light picture to the main chip 140;
the second image sensor 132 can specifically be used to, upon receiving the capture instruction, interrupt video image acquisition, capture a picture according to the second shutter and second gain, and transmit the captured infrared light picture to the main chip 140.
In this example, when the first image sensor 131 and the second image sensor 132 receive a capture instruction during video image acquisition, they can interrupt the video image acquisition and capture a picture, so that all of the resources of the image sensors (the first image sensor 131 and the second image sensor 132) can be devoted to either video image acquisition or picture capture, which can improve the quality of the video image or the captured picture. In this example, the image sensor acquires the video image frame by frame; upon receiving a capture command, it immediately interrupts video image acquisition, captures the picture, and resumes video image acquisition after the capture ends. The resources include the sensor's storage, shutter and gain, fill-light synchronization control, and so on; after a capture command is received, all of the sensor's resources are used for snapshot capture.
In this example, after the first image sensor 131 and the second image sensor 132 complete picture capture, they can continue video image acquisition; for the specific implementation, reference can be made to the related descriptions in the above method embodiments, which are not repeated here.
In another example, the first image sensor 131 can specifically be used to, upon receiving the capture instruction, acquire video images according to the first shutter and first gain while capturing pictures according to the second shutter and second gain, and transmit the visible light video image and the visible light picture to the main chip 140, respectively.
The second image sensor 132 can specifically be used to, upon receiving the capture instruction, acquire video images according to the first shutter and first gain while capturing pictures according to the second shutter and second gain, and transmit the infrared light video image and the infrared light picture to the main chip 140, respectively.
In this example, the image sensors (including the first image sensor 131 and the second image sensor 132) can perform video image acquisition and picture capture at the same time, so video image acquisition is not interrupted; that is, an image sensor can use part of its resources for video image acquisition and another part for picture capture (when no capture is needed, that part of the resources is idle, i.e., the image sensor reserves some resources for picture capture), ensuring that video image acquisition and picture capture proceed independently of each other. In this example, the image sensor can divide the frames into odd and even frames (10101010...), with the odd and even frames alternately acquiring data using two different sets of shutter and gain. For example, the data acquired in the odd frames is transmitted to the main chip 140 as the video image; when a capture command is received, one of the even frames is transmitted to the main chip 140 as the snapshot picture.
It should be noted that, in yet another example of the present application, when the first image sensor 131 and the second image sensor 132 receive a capture instruction, the snapshot can also be achieved directly by copying the acquired video image; that is, a frame of the video image being acquired when the capture instruction is received is directly copied as the snapshot picture and transmitted to the main chip 140, which performs subsequent processing in the manner described above. The specific implementation is not repeated here.
In addition, for the specific implementation of fusing the visible light video image with the infrared light video image, and of fusing the visible light picture with the infrared light picture, reference can be made to the related descriptions in existing related solutions, which are not repeated in the embodiments of the present application.
It can be seen that the camera shown in FIG. 1 supports picture capture during video image acquisition, with a shutter and gain independent of those used for video image acquisition; by adjusting the shutter and gain used for picture capture, the clarity of the license plate and vehicle details in the captured picture can be guaranteed, overexposure of the vehicle and license plate can be prevented, and a fast-moving vehicle can be captured with a relatively short shutter so that it leaves no trailing smear.
Further, as shown in FIG. 2, in one embodiment of the present application, the above camera may further include: a synchronization processing module 150.
The synchronization processing module 150 can be used to receive the capture instruction transmitted by the main processing chip 140 and transmit it to the first image sensor 131 and the second image sensor 132, respectively.
The synchronization processing module 150 can also be used to receive the visible light picture captured by the first image sensor 131 and the infrared light picture captured by the second image sensor 132, and transmit the visible light picture and the infrared light picture synchronously to the main chip 140.
In this embodiment, to guarantee the timing requirements of snapshot capture, a synchronization processing module 150 can also be deployed in the camera, between the image sensors (including the first image sensor 131 and the second image sensor 132) and the main chip 140.
The synchronization processing module 150 may include, but is not limited to, a Field-Programmable Gate Array (FPGA) chip or another chip that supports receiving two channels of sensor data.
In this embodiment, when the main chip 140 receives a capture instruction, it can transmit the instruction to the synchronization processing module 150, which transmits it to the first image sensor 131 and the second image sensor 132, respectively. After a capture command is issued, the synchronization processing module 150 can transmit it to the first image sensor 131 and the second image sensor 132 within a preset time, and, while the sensors are capturing, control the fill light to stay lit during the preset exposure time, so that the exposure of the snapshot and the flash of the fill light are synchronized. In addition, the synchronization processing module 150 is also responsible for receiving the data sent by the two sensors according to a preset timing, to guarantee that the data sent from the sensors are the captured pictures.
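The timing duties described above (forwarding the command to both sensors within a preset time and lighting the fill light only during the exposure window) can be summarized in a toy software model. Class and log names are invented for illustration; the real module is an FPGA, not software:

```python
import time

class SyncModule:
    """Toy model of the synchronization processing module: forward the
    capture command to both sensors within a deadline, and raise the
    fill light only for the exposure window so that flash and exposure
    stay synchronized."""

    def __init__(self, deadline_s: float = 0.001):
        self.deadline_s = deadline_s
        self.log = []

    def on_capture_command(self, exposure_s: float) -> float:
        start = time.monotonic()
        self.log.append("cmd->sensor1")   # forward to first image sensor
        self.log.append("cmd->sensor2")   # forward to second image sensor
        forward_latency = time.monotonic() - start
        self.log.append(f"fill_light_on {exposure_s:.4f}s")  # lit during exposure only
        self.log.append("fill_light_off")
        return forward_latency

m = SyncModule()
latency = m.on_capture_command(exposure_s=0.001)
assert m.log == ["cmd->sensor1", "cmd->sensor2",
                 "fill_light_on 0.0010s", "fill_light_off"]
```

An FPGA is used precisely because this forwarding latency and the fill-light window must be deterministic to within microseconds, which a general-purpose OS cannot guarantee.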
After the first image sensor 131 and the second image sensor 132 complete picture capture, they can transmit the captured pictures to the synchronization processing module 150.
When the synchronization processing module 150 receives the captured pictures (including the visible light picture and the infrared light picture) transmitted by the first image sensor 131 and the second image sensor 132, it can transmit them to the main chip 140, and the main chip 140 outputs the fused snapshot picture.
In one example, when transmitting the visible light picture and the infrared light picture to the main chip 140, the synchronization processing module 150 can splice them, splicing one frame of visible light picture and one frame of infrared light picture into a single frame, and transmit it to the main chip 140. Since a data header must be added for each data transmission, the splicing reduces redundant header transmission and the number of interrupts generated in the data transmission mechanism, thereby improving the efficiency of transmitting captured pictures to the main chip 140.
When the main chip 140 receives the spliced picture transmitted by the synchronization processing module 150, it can split it into two independent snapshot frames (a visible light picture and an infrared light picture) and fuse the two frames, extracting the color information of the visible light picture and the detail and brightness information of the infrared light picture and merging them into one frame of color picture.
In another example, when the bandwidth between the synchronization processing module 150 and the main chip 140 is small, the synchronization processing module 150 can transmit the visible light picture and the infrared light picture to the main chip 140 separately, and the main chip 140 performs the fusion processing.
In yet another example, to reduce the workload of the main chip, the synchronization processing module 150 can also fuse the videos transmitted by the first image sensor 131 and the second image sensor 132.
Further, referring to FIG. 3, in one embodiment of the present application, the above camera may further include a slave processing chip (hereinafter referred to as the slave chip) 160.
The master chip 140 can specifically be used to transmit the visible light picture and the infrared light picture to the slave chip 160;
the slave chip 160 can be used to fuse the visible light picture and the infrared light picture and transmit the fused picture to the master chip 140;
the master chip 140 can also specifically be used to perform ISP processing, encoding, and compression on the fused picture and then output it.
In this embodiment, to reduce the workload of the master chip 140 and simplify its implementation, a slave chip 160 can also be deployed in the camera; the slave chip 160 can be connected to the master chip 140 and exchange data with it.
In this embodiment, when the master chip 140 receives the visible light picture and the infrared light picture, it can transmit them to the slave chip 160.
When the slave chip 160 receives the visible light picture and the infrared light picture, it can fuse them and transmit the fused picture back to the master chip 140, and the master chip 140 performs ISP processing, encoding, and compression (such as JPEG (Joint Photographic Experts Group) compression) on the fused picture.
In one example, to further optimize the picture display effect and reduce the workload of the master chip 140, before transmitting the fused picture to the master chip 140, the slave chip 160 can also perform ISP processing on the fused picture and transmit the processed picture to the master chip 140; when the master chip 140 receives the picture transmitted by the slave chip 160, it can perform secondary ISP processing on it, as well as encoding, compression, and other processing.
In another example, to further reduce the workload of the master chip 140, before transmitting the fused picture to the master chip 140, the slave chip 160 can also perform ISP processing, encoding, compression, and other processing on the fused picture, and then transmit the processed picture to the master chip 140. When the master chip 140 receives the picture transmitted by the slave chip 160, it need not process it further. Since both the master chip 140 and the slave chip 160 can perform ISP processing, encoding, compression, and other processing on the picture, processing tasks can be allocated and adjusted in real time according to the load balancing of the camera system's chips.
It should be noted that, in this embodiment, the visible light picture and the infrared light picture transmitted by the main chip 140 to the slave chip 160 may be a stitched picture (stitched by the main chip 140, or by the synchronization processing module 150 if one is deployed in the camera) or two independent snapped frames. When the slave chip 160 receives a stitched picture, it may split it into two independent frames before performing the fusion processing.
Further, in the embodiments of the present application, after the camera obtains the fused snapped picture, it may also perform vehicle feature recognition on the fused snapped picture to obtain vehicle feature information, such as one or more of body color, vehicle model, in-window face recognition, and recognition of the vehicle's main brand and sub-brand.
Accordingly, in one embodiment of the present application, when the camera is further provided with the slave chip 160, the slave chip 160 may also be configured to perform vehicle feature recognition on the fused picture based on a deep learning algorithm and transmit the recognition result to the main processing chip.
In this embodiment, a deep learning algorithm may be integrated into the slave chip 160. After the slave chip 160 completes the picture fusion based on the visible light picture and the infrared light picture transmitted by the main chip 140, it may also perform vehicle feature recognition on the fused picture based on the deep learning algorithm and transmit the recognition result (i.e., one or more of body color, vehicle model, in-window face recognition, and main-brand and sub-brand recognition) to the main chip 140, and the main chip 140 performs corresponding processing according to the received recognition result.
To enable those skilled in the art to better understand the technical solutions provided by the embodiments of the present application, the working principle of the camera provided by the embodiments of the present application is briefly described below with reference to a specific example (taking interrupting snapping as an example).
Referring to FIG. 4, which is a schematic structural diagram of a camera provided by an embodiment of the present application, in this embodiment the camera includes a lens 401, a light splitting module 402, a first image sensor 411 and a second image sensor 412 (in this embodiment the image sensors are global-exposure image sensors, such as global-exposure Complementary Metal-Oxide-Semiconductor (CMOS) image sensors), an FPGA chip 403 (this embodiment takes an FPGA chip as the synchronization processing module, as an example), a main chip 404, and a slave chip 405 (integrated with a Graphics Processing Unit (GPU) and a deep learning algorithm).
In this embodiment, the incident light entering the camera through the lens 401 is split by the light splitting module into visible light and infrared light.
The light splitting module 402 may be implemented with a prism: the visible light portion of the incident light passes through the prism and exits by transmission, while the infrared light portion exits after one reflection inside the prism.
In this embodiment, the first image sensor 411 and the second image sensor 412 are respectively deployed at the exit positions of the two light beams of the light splitting module 402: the first image sensor 411 at the exit position of the visible light, and the second image sensor 412 at the exit position of the infrared light.
The first image sensor 411 and the second image sensor 412 may respectively capture visible light video images and infrared light video images according to a first shutter and a first gain and feed them into the FPGA chip 403; the FPGA chip 403 fuses the visible light video images and the infrared light video images and outputs the fused video images to the main chip 404; the main chip 404 performs ISP processing, algorithm analysis, encoding and compression on the fused video images, and then outputs a preview stream.
After obtaining the fused video image, the main chip 404 may also transmit the fused video image to the slave chip 405, and the slave chip 405 performs target analysis on the fused video image based on a deep learning algorithm, for example one or more of target detection, target tracking and target classification.
In this embodiment, when the main chip 404 receives a snap instruction, the main chip 404 transmits the snap instruction to the FPGA chip 403, and the FPGA chip 403 transmits the snap instruction to the first image sensor 411 and the second image sensor 412 respectively.
The snap instruction received by the main chip 404 may be triggered, for example, by an algorithm that determines that an image needs to be snapped, or by an external signal.
In this embodiment, when the first image sensor 411 and the second image sensor 412 receive the snap instruction, they may interrupt video image capture and snap pictures according to a second shutter and a second gain, the first image sensor 411 snapping a visible light picture and the second image sensor 412 snapping an infrared light picture.
The first image sensor 411 and the second image sensor 412 respectively transmit the snapped visible light picture and infrared light picture to the FPGA chip 403; the FPGA chip 403 stitches the visible light picture and the infrared light picture (one frame of visible light picture and one frame of infrared light picture stitched into one stitched frame) and outputs the stitched picture to the main chip 404.
When the main chip 404 receives the stitched picture transmitted by the FPGA chip 403, it splits the stitched picture into two independent snapped frames (the visible light picture and the infrared light picture) and transmits them to the slave chip 405.
When the slave chip 405 receives the visible light picture and the infrared light picture transmitted by the main chip 404, it performs pixel-level fusion on them: the slave chip 405 extracts the color information of the visible light picture and the detail and luminance information of the infrared light picture, fuses the two pictures into one color picture, and performs ISP processing. Then, on the one hand, the slave chip 405 transmits the processed picture to the main chip 404; on the other hand, it performs vehicle feature recognition on the processed picture to obtain a recognition result, for example one or more of vehicle model, body color, vehicle brand (including main brand and sub-brand), in-window face, whether the driver is using a phone, and whether the seat belt is fastened, and transmits the recognition result to the main chip 404.
The main chip 404 and the slave chip 405 may interact via Universal Serial Bus (USB) communication, Peripheral Component Interconnect Express (PCIE) communication, or network communication.
It can be seen that, in this embodiment, the camera can support the technique of interrupting (video image capture for) snapping, that is, the camera can use different shutters and gains for video image capture and picture snapping. For video image capture, a larger shutter and gain can be set to guarantee the brightness of the preview image; for picture snapping, the shutter and gain (smaller than those used for video image capture) can be adjusted on the principle of guaranteeing the clarity of the license plate and vehicle details (such as model and brand), preventing overexposure of the vehicle and the license plate, and ensuring that a fast-moving vehicle can be snapped with a short shutter so that it leaves no motion smear.
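The dual shutter/gain scheme above can be sketched as a simple mode switch; the class and the concrete shutter/gain values are illustrative assumptions, not figures from the patent:

```python
# First shutter/gain: video preview favors brightness (longer shutter, higher gain).
VIDEO_MODE = {"shutter_ms": 33.0, "gain_db": 24.0}
# Second shutter/gain: snapshots favor clarity (shorter shutter, lower gain),
# so plates are not overexposed and fast vehicles do not smear.
SNAP_MODE = {"shutter_ms": 2.0, "gain_db": 6.0}

class Sensor:
    def __init__(self):
        self.mode = dict(VIDEO_MODE)

    def on_snap_command(self):
        # Interrupt video capture: switch to the snapshot parameters.
        self.mode = dict(SNAP_MODE)

    def on_snap_done(self):
        # Resume video capture with the first shutter and gain.
        self.mode = dict(VIDEO_MODE)

s = Sensor()
s.on_snap_command()
snap_shutter = s.mode["shutter_ms"]   # shorter than the video shutter
s.on_snap_done()
```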
Referring to FIG. 5, which is a schematic flowchart of a method for fusing snapped pictures provided by an embodiment of the present application, the method may be applied to the camera shown in any one of FIG. 1 to FIG. 3. As shown in FIG. 5, the method may include the following steps.
Step S500: when the main processing chip receives a snap instruction, the snap instruction is transmitted to the first image sensor and the second image sensor respectively.
In the embodiments of the present application, the snap instruction may include multiple commands such as a snap signal and a synchronization command. The snap instruction is triggered by an algorithm or by an external signal. When the main chip receives the snap instruction, it may transmit the snap instruction to the first image sensor and the second image sensor respectively to implement interrupting snapping, i.e., interrupting video image capture and snapping pictures.
Step S510: when the first image sensor receives the snap instruction, it snaps a picture according to a second shutter and a second gain to obtain a visible light picture; when the second image sensor receives the snap instruction, it snaps a picture according to the second shutter and the second gain to obtain an infrared light picture.
In the embodiments of the present application, when the first image sensor and the second image sensor receive the snap instruction, they may synchronously snap pictures (a visible light picture and an infrared light picture) according to the second shutter and the second gain and respectively transmit the visible light picture and the infrared light picture to the main chip, and the main chip performs the fusion processing.
Step S520: the main processing chip outputs the snapped picture obtained by fusing the visible light picture and the infrared light picture.
In the embodiments of the present application, when the main chip receives the snapped visible light picture and infrared light picture, it may fuse them, and then perform ISP processing, encoding and compression on the fused picture.
In one embodiment of the present application, the first image sensor snapping a picture and transmitting the snapped visible light picture to the main processing chip, and the second image sensor snapping a picture and transmitting the snapped infrared light picture to the main processing chip, may include:
the first image sensor interrupting visible light video image capture, snapping a picture according to the second shutter and the second gain, and transmitting the snapped visible light picture to the main processing chip; and the second image sensor interrupting infrared light video image capture, snapping a picture according to the second shutter and the second gain, and transmitting the snapped infrared light picture to the main processing chip.
In this embodiment, when the first image sensor and the second image sensor receive the snap instruction during video image capture, they may interrupt the video image capture and snap pictures. An image sensor can thus devote all of its resources to either video image capture or picture snapping, which improves the quality of the video images or the snapped pictures.
In another embodiment of the present application, the first image sensor snapping a picture and transmitting the snapped visible light picture to the main processing chip, and the second image sensor snapping a picture and transmitting the snapped infrared light picture to the main processing chip, may include:
the first image sensor capturing visible light video images according to the first shutter and the first gain, snapping pictures according to the second shutter and the second gain, and transmitting the visible light video images and the visible light pictures to the main processing chip respectively; and the second image sensor capturing infrared light video images according to the first shutter and the first gain, snapping pictures according to the second shutter and the second gain, and transmitting the infrared light video images and the infrared light pictures to the main processing chip respectively.
In this embodiment, the image sensors (including the first image sensor and the second image sensor) can perform video image capture and picture snapping at the same time, so that video image capture is not interrupted. That is, an image sensor can use part of its resources for video image capture and another part for picture snapping (when no picture snapping is needed, that part of the resources is idle, i.e., the image sensor reserves part of its resources for picture snapping), so that video image capture and picture snapping proceed independently of each other.
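The reserved-resource idea in this non-interrupting variant can be modeled abstractly as a pool of capture slots; the class and slot counts are purely illustrative assumptions about how such partitioning might be reasoned about, not the sensor's actual mechanism:

```python
class PartitionedSensor:
    """Toy model: video slots stay busy with video; snap slots sit idle
    until a snap command arrives, so snapping never steals video capacity."""

    def __init__(self, video_slots: int, snap_slots: int):
        self.video_slots = video_slots   # always dedicated to video frames
        self.snap_slots = snap_slots     # reserved, idle between snaps
        self.snap_busy = 0

    def snap(self) -> bool:
        if self.snap_busy < self.snap_slots:
            self.snap_busy += 1
            return True      # snapshot runs on the reserved capacity
        return False         # reserved capacity exhausted; video is unaffected

    def snap_done(self):
        self.snap_busy = max(0, self.snap_busy - 1)

sensor = PartitionedSensor(video_slots=3, snap_slots=1)
first = sensor.snap()    # uses the reserved slot
second = sensor.snap()   # refused: no free reserved slot, video continues
```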
Further, in one embodiment of the present application, the camera may further include a synchronization processing module.
Accordingly, referring to FIG. 6, in step S500, transmitting the snap instruction to the first image sensor and the second image sensor respectively may include the following steps:
Step S501: the main processing chip transmits the snap instruction to the synchronization processing module;
Step S502: the synchronization processing module transmits the snap instruction to the first image sensor and the second image sensor respectively within a preset time.
The first image sensor transmitting the snapped visible light picture to the main processing chip and the second image sensor transmitting the snapped infrared light picture to the main processing chip may include the following steps:
Step S511: the first image sensor transmits the snapped visible light picture to the synchronization processing module, and the second image sensor transmits the snapped infrared light picture to the synchronization processing module;
Step S512: the synchronization processing module transmits the visible light picture and the infrared light picture to the main processing chip.
In this embodiment, in order to reduce the workload of the main chip and simplify its implementation, a synchronization processing module may further be deployed in the camera, between the image sensors (including the first image sensor and the second image sensor) and the main chip.
In this embodiment, when the main chip receives the snap instruction, it may transmit the snap instruction to the synchronization processing module, which transmits it to the first image sensor and the second image sensor respectively.
After the first image sensor and the second image sensor complete the picture snapping, they may transmit the snapped pictures to the synchronization processing module.
When the synchronization processing module receives the snapped pictures (including the visible light picture and the infrared light picture) transmitted by the first image sensor and the second image sensor, it may transmit the visible light picture and the infrared light picture to the main chip, and the main chip outputs the fused snapped picture.
In one example, the synchronization processing module may stitch the visible light picture and the infrared light picture, that is, stitch one frame of visible light picture and one frame of infrared light picture into a single frame, and transmit it to the main processing chip, to improve the efficiency of transmitting snapped pictures to the main chip.
When the main chip receives the stitched picture transmitted by the synchronization processing module, it may split the stitched picture into two independent snapped frames (the visible light picture and the infrared light picture), and fuse the two frames by extracting the color information of the visible light picture and the detail and luminance information of the infrared light picture, fusing the visible light picture and the infrared light picture into one color picture.
Further, in one embodiment of the present application, the camera may further include a slave processing chip.
Accordingly, referring also to FIG. 7, in step S520, the main processing chip outputting the fused snapped picture may include the following steps:
Step S521: the main processing chip transmits the visible light picture and the infrared light picture to the slave processing chip;
Step S522: the slave processing chip fuses the visible light picture and the infrared light picture to obtain the snapped picture, and transmits the snapped picture to the main processing chip;
Step S523: the main processing chip performs ISP processing, encoding and compression on the fused picture before output.
In this embodiment, in order to reduce the workload of the main chip and simplify its implementation, a slave chip may further be deployed in the camera; the slave chip may be connected to the main chip and exchange data with it.
In this embodiment, when the main chip receives the snapped visible light picture and infrared light picture, it may transmit the visible light picture and the infrared light picture to the slave chip.
When the slave chip receives the visible light picture and the infrared light picture, it may fuse them and transmit the fused picture back to the main chip, and the main chip performs ISP processing, encoding and compression (such as JPEG compression) on the fused picture.
In one example, in order to further optimize the display effect of the picture, before transmitting the fused picture to the main chip, the slave chip may also perform ISP processing on the fused picture and transmit the processed picture to the main chip; when the main chip receives the picture transmitted by the slave chip, it may perform secondary ISP processing on it, followed by encoding and compression.
In another example, in order to further reduce the workload of the main chip, before transmitting the fused picture to the main chip, the slave chip may also perform ISP processing, encoding and compression on the fused picture, and then transmit the processed image to the main chip; when the main chip receives the picture transmitted by the slave chip, it may perform no further processing on it. Since both the main chip and the slave chip can perform ISP processing, encoding and compression on images, the allocation of these processing tasks can be adjusted in real time with a view to load balancing among the chips of the camera system.
Further, in the embodiments of the present application, after the camera obtains the fused snapped picture, it may also perform vehicle feature recognition on the fused snapped picture to obtain vehicle feature information, such as one or more of body color, vehicle model, in-window face recognition, and recognition of the vehicle's main brand and sub-brand.
Accordingly, in one embodiment of the present application, when the camera is further provided with the slave chip, the slave chip may also be configured to perform vehicle feature recognition on the fused picture based on a deep learning algorithm and transmit the recognition result to the main processing chip.
In this embodiment, a deep learning algorithm may be integrated into the slave chip. After the slave chip completes the picture fusion based on the visible light picture and the infrared light picture transmitted by the main chip, it may also perform vehicle feature recognition on the fused picture based on the deep learning algorithm and transmit the recognition result (i.e., one or more of body color, vehicle model, in-window face recognition, and main-brand and sub-brand recognition) to the main chip, and the main chip performs corresponding processing according to the received recognition result.
In the embodiments of the present application, when the first image sensor and the second image sensor receive a snap instruction during video image capture performed according to the first shutter and the first gain, they may snap pictures according to the second shutter and the second gain and respectively transmit the snapped visible light picture and infrared light picture to the main processing chip, and the main processing chip outputs the fused snapped picture. In other words, the camera supports picture snapping during video image capture, and picture snapping can have its own independent shutter and gain. By adjusting the shutter and gain used for picture snapping, the clarity of the license plate and the vehicle detail information can be guaranteed, overexposure of the vehicle and the license plate can be prevented, and a fast-moving vehicle can be snapped with a relatively short shutter so that it leaves no motion smear.
It should be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include" and any variants thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or device that includes that element.
The above are merely preferred embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall fall within the scope of protection of the present application.

Claims (19)

  1. A camera, comprising:
    a lens;
    a light splitting module configured to split incident light entering the camera through the lens into visible light and infrared light;
    a first image sensor configured to receive the visible light output by the light splitting module and capture visible light video images according to a first shutter and a first gain to obtain a visible light video image;
    a second image sensor configured to receive the infrared light output by the light splitting module and capture infrared light video images according to the first shutter and the first gain to obtain an infrared light video image; and
    a main processing chip configured to output a video image obtained by fusing the visible light video image and the infrared light video image, wherein the fusion processing comprises fusing luminance information of the visible light video image and the infrared light video image, or fusing detail information of the visible light video image and the infrared light video image.
  2. The camera according to claim 1, wherein:
    the main processing chip is further configured to, when receiving a snap instruction, transmit the snap instruction to the first image sensor and the second image sensor respectively;
    the first image sensor is further configured to, when receiving the snap instruction, snap a picture according to a second shutter and a second gain to obtain a visible light picture;
    the second image sensor is further configured to, when receiving the snap instruction, snap a picture according to the second shutter and the second gain to obtain an infrared light picture; and
    the main processing chip is further configured to output a snapped picture obtained by fusing the visible light picture and the infrared light picture.
  3. The camera according to claim 2, wherein:
    the first image sensor is further configured to interrupt the visible light video image capture when receiving the snap instruction; and
    the second image sensor is further configured to interrupt the infrared light video image capture when receiving the snap instruction.
  4. The camera according to claim 2, wherein the camera further comprises a synchronization processing module;
    the synchronization processing module is configured to receive the snap instruction transmitted by the main processing chip and transmit the snap instruction to the first image sensor and the second image sensor respectively within a preset time; and
    the synchronization processing module is further configured to receive, according to a preset timing, the visible light picture transmitted by the first image sensor and the infrared light picture transmitted by the second image sensor, and transmit the visible light picture and the infrared light picture to the main processing chip.
  5. The camera according to claim 4, wherein:
    the synchronization processing module is further configured to stitch one frame of the visible light picture and one frame of the infrared light picture snapped synchronously into one stitched frame and transmit the stitched picture to the main processing chip; and
    the main processing chip is further configured to split one frame of the stitched picture into one frame of the visible light picture and one frame of the infrared light picture.
  6. The camera according to claim 2, wherein:
    the main processing chip is further configured to perform at least one of the following on the snapped picture: image signal processing (ISP), encoding, and compression.
  7. The camera according to claim 2, wherein the camera further comprises a slave processing chip;
    the main processing chip is specifically configured to transmit the visible light picture and the infrared light picture to the slave processing chip; and
    the slave processing chip is configured to fuse the visible light picture and the infrared light picture to obtain the snapped picture, and transmit the snapped picture to the main processing chip.
  8. The camera according to claim 7, wherein:
    the slave processing chip is further configured to perform at least one of the following on the snapped picture: ISP processing, encoding, and compression.
  9. The camera according to claim 7, wherein:
    the slave processing chip is further configured to perform vehicle feature recognition on the snapped picture based on a deep learning algorithm and transmit a recognition result to the main processing chip.
  10. A method for fusing snapped pictures, applied to a camera in a video surveillance system, comprising:
    splitting, by a light splitting module in the camera, incident light entering the camera through a lens in the camera into visible light and infrared light;
    receiving, by a first image sensor in the camera, the visible light output by the light splitting module, and capturing visible light video images according to a first shutter and a first gain to obtain a visible light video image;
    receiving, by a second image sensor in the camera, the infrared light output by the light splitting module, and capturing infrared light video images according to the first shutter and the first gain to obtain an infrared light video image; and
    outputting, by a main processing chip in the camera, a video image obtained by fusing the visible light video image and the infrared light video image, wherein the fusion processing comprises fusing luminance information of the visible light video image and the infrared light video image, or fusing detail information of the visible light video image and the infrared light video image.
  11. The method according to claim 10, wherein the method further comprises:
    when the main processing chip receives a snap instruction, transmitting the snap instruction to the first image sensor and the second image sensor respectively;
    when the first image sensor receives the snap instruction, snapping a picture according to a second shutter and a second gain to obtain a visible light picture;
    when the second image sensor receives the snap instruction, snapping a picture according to the second shutter and the second gain to obtain an infrared light picture; and
    outputting, by the main processing chip, a snapped picture obtained by fusing the visible light picture and the infrared light picture.
  12. The method according to claim 10, wherein the method further comprises:
    when the first image sensor receives the snap instruction, interrupting the visible light video image capture; and
    when the second image sensor receives the snap instruction, interrupting the infrared light video image capture.
  13. The method according to claim 11, wherein transmitting the snap instruction to the first image sensor and the second image sensor respectively comprises:
    transmitting, by the main processing chip, the snap instruction to a synchronization processing module in the camera; and
    transmitting, by the synchronization processing module, the snap instruction to the first image sensor and the second image sensor respectively within a preset time.
  14. The method according to claim 13, wherein the method further comprises:
    transmitting, by the first image sensor, the visible light picture to the synchronization processing module;
    transmitting, by the second image sensor, the infrared light picture to the synchronization processing module; and
    transmitting, by the synchronization processing module, the visible light picture and the infrared light picture to the main processing chip.
  15. The method according to claim 14, wherein transmitting the visible light picture and the infrared light picture to the main processing chip comprises:
    stitching, by the synchronization processing module, one frame of the visible light picture and one frame of the infrared light picture snapped synchronously into one stitched frame; and
    transmitting, by the synchronization processing module, the stitched picture to the main processing chip.
  16. The method according to claim 11, wherein the method further comprises:
    transmitting, by the main processing chip, the visible light picture and the infrared light picture to a slave processing chip in the camera; and
    fusing, by the slave processing chip, the visible light picture and the infrared light picture to obtain the snapped picture, and transmitting the snapped picture to the main processing chip.
  17. The method according to claim 11, wherein the method further comprises:
    performing, by the main processing chip, at least one of the following on the snapped picture: image signal processing (ISP), encoding, and compression.
  18. The method according to claim 16, wherein the method further comprises:
    performing, by the slave processing chip, at least one of the following on the snapped picture: ISP processing, encoding, and compression.
  19. The method according to claim 16, wherein the method further comprises:
    performing, by the slave processing chip, vehicle feature recognition on the snapped picture based on a deep learning algorithm, and transmitting a recognition result to the main processing chip.
PCT/CN2018/105225 2018-06-04 2018-09-12 Camera and method for fusing snapped images WO2019232969A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18921360.6A EP3806444A4 (en) 2018-06-04 2018-09-12 CAMERA AND PROCESS FOR MERGING CAPTURED IMAGES
US15/734,835 US11477369B2 (en) 2018-06-04 2018-09-12 Camera and method for fusing snapped images

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810563691.4A CN110557527B (zh) 2018-06-04 2018-06-04 一种摄像机及抓拍图片融合方法
CN201810563691.4 2018-06-04

Publications (1)

Publication Number Publication Date
WO2019232969A1 true WO2019232969A1 (zh) 2019-12-12

Family

ID=68735764

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/105225 WO2019232969A1 (zh) 2018-06-04 2018-09-12 一种摄像机及抓拍图片融合方法

Country Status (4)

Country Link
US (1) US11477369B2 (zh)
EP (1) EP3806444A4 (zh)
CN (1) CN110557527B (zh)
WO (1) WO2019232969A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10958830B2 (en) 2018-05-24 2021-03-23 Magna Electronics Inc. Vehicle vision system with infrared LED synchronization
CN111523401B (zh) * 2020-03-31 2022-10-04 河北工业大学 一种识别车型的方法
CN112258592A (zh) * 2020-09-17 2021-01-22 深圳市捷顺科技实业股份有限公司 一种人脸可见光图的生成方法及相关装置
CN112270639B (zh) * 2020-09-21 2024-04-19 浙江大华技术股份有限公司 一种图像处理方法、图像处理装置以及存储介质
CN112529973B (zh) * 2020-10-13 2023-06-02 重庆英卡电子有限公司 野外自供能动物抓拍图片动物识别方法
CN112995515B (zh) * 2021-03-05 2023-04-07 浙江大华技术股份有限公司 数据处理方法及装置、存储介质、电子装置
CN113596395A (zh) * 2021-07-26 2021-11-02 浙江大华技术股份有限公司 一种图像获取的方法及监控设备
CN114463792B (zh) * 2022-02-10 2023-04-07 厦门熵基科技有限公司 一种多光谱识别方法、装置、设备及可读存储介质
CN114966733B (zh) * 2022-04-21 2023-04-18 北京福通互联科技集团有限公司 基于激光阵列和单目摄像机的肉牛立体深度图像采集系统
CN115484409A (zh) * 2022-09-09 2022-12-16 成都微光集电科技有限公司 多图像传感器协同工作方法及系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130235163A1 (en) * 2010-07-05 2013-09-12 Hoon Joo Camera system for three-dimensional thermal imaging
CN203632765U (zh) * 2013-12-17 2014-06-04 天津鑫彤科技有限公司 多源图像信息采集融合系统
CN104270570A (zh) * 2014-10-17 2015-01-07 北京英泰智软件技术发展有限公司 双目摄像机及其图像处理方法
CN204948210U (zh) * 2015-09-24 2016-01-06 广州市巽腾信息科技有限公司 一种图像信息采集装置
CN205249392U (zh) * 2015-12-10 2016-05-18 莱阳市百盛科技有限公司 一种辅助驾驶设备的视频采集系统
CN106385530A (zh) * 2015-07-28 2017-02-08 杭州海康威视数字技术股份有限公司 一种双光谱摄像机

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1994006247A1 (en) * 1992-09-08 1994-03-17 Paul Howard Mayeaux Machine vision camera and video preprocessing system
US6781127B1 (en) * 2000-06-08 2004-08-24 Equinox Corporation Common aperture fused reflective/thermal emitted sensor and system
US20110102616A1 (en) * 2009-08-28 2011-05-05 Nikon Corporation Data structure for still image file, image file generation device, image reproduction device, and electronic camera
CN101893804B (zh) * 2010-05-13 2012-02-29 杭州海康威视软件有限公司 曝光控制方法及装置
US9930316B2 (en) * 2013-08-16 2018-03-27 University Of New Brunswick Camera imaging systems and methods
JP5820120B2 (ja) * 2011-01-28 2015-11-24 キヤノン株式会社 撮像装置およびその制御方法
KR101774591B1 (ko) 2011-09-29 2017-09-05 삼성전자주식회사 디지털 영상 촬영 방법 및 장치
CN103856764B (zh) * 2012-11-30 2016-07-06 浙江大华技术股份有限公司 一种利用双快门进行监控的装置
KR101858646B1 (ko) * 2012-12-14 2018-05-17 한화에어로스페이스 주식회사 영상 융합 장치 및 방법
KR102035355B1 (ko) * 2013-07-22 2019-11-18 현대모비스 주식회사 영상 처리 방법 및 이를 위한 위한 장치
JP2017011634A (ja) * 2015-06-26 2017-01-12 キヤノン株式会社 撮像装置およびその制御方法、並びにプログラム
CN105678727A (zh) 2016-01-12 2016-06-15 四川大学 基于异构多核构架的红外光与可见光图像实时融合系统
CN106060364A (zh) * 2016-07-28 2016-10-26 浙江宇视科技有限公司 光学透雾彩色图像采集方法及摄像机
CN111028188B (zh) * 2016-09-19 2023-05-02 杭州海康威视数字技术股份有限公司 分光融合的图像采集设备
JP2018170656A (ja) * 2017-03-30 2018-11-01 ソニーセミコンダクタソリューションズ株式会社 撮像装置、撮像モジュール、撮像システムおよび撮像装置の制御方法
US20180309919A1 (en) * 2017-04-19 2018-10-25 Qualcomm Incorporated Methods and apparatus for controlling exposure and synchronization of image sensors
CN107679531A (zh) * 2017-06-23 2018-02-09 平安科技(深圳)有限公司 基于深度学习的车牌识别方法、装置、设备及存储介质
US20200329195A1 (en) * 2019-04-11 2020-10-15 Qualcomm Incorporated Synchronizing application of settings for one or more cameras


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3806444A4 *

Also Published As

Publication number Publication date
CN110557527A (zh) 2019-12-10
US20210235011A1 (en) 2021-07-29
US11477369B2 (en) 2022-10-18
EP3806444A1 (en) 2021-04-14
CN110557527B (zh) 2021-03-23
EP3806444A4 (en) 2021-05-19


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18921360

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018921360

Country of ref document: EP

Effective date: 20210111