CN110798623A - Monocular camera, image processing system, and image processing method - Google Patents


Info

Publication number
CN110798623A
Authority
CN
China
Prior art keywords
infrared
image frame
color image
image
light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910979447.0A
Other languages
Chinese (zh)
Inventor
陈开�
黄普发
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201910979447.0A priority Critical patent/CN110798623A/en
Publication of CN110798623A publication Critical patent/CN110798623A/en
Priority to PCT/CN2020/097704 priority patent/WO2021073140A1/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/74 Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/10 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

The embodiments of this application relate to the field of image processing, and in particular provide a monocular camera, an image processing system, and an image processing method. The monocular camera comprises a lens, a double-pass filter, an image sensor, and an image fusion processing unit. The double-pass filter filters the light received by the lens so that only optical signals in a visible-light band and optical signals in an infrared band remain after filtering; the image sensor photoelectrically converts the visible-light signals into color image frames and the infrared signals into infrared image frames; the image fusion processing unit fuses the color image frames and the infrared image frames generated by the image sensor to produce fused color image frames.

Description

Monocular camera, image processing system, and image processing method
Technical Field
The present application relates to the field of image processing, and more particularly to a monocular camera, an image processing system, and an image processing method.
Background
In the field of video security, color attributes play an important role in identifying objects. During daytime operation, visible light in the environment is abundant, so a camera can work in color mode; at night, visible light is scarce, and it is difficult for the camera to obtain a high-quality color image.
Existing cameras can generate an infrared image frame and a color image frame at night and fuse the two to improve the display effect of the color image frame. However, such cameras are relatively complex and costly.
Disclosure of Invention
The present invention provides, through various embodiments, a monocular camera, an image processing system, and an image processing method, so as to overcome one or more of the defects in the prior art.
In a first aspect, an embodiment of the present invention provides a monocular camera, including: a lens for receiving light from the subject; a double-pass filter for filtering the light received by the lens so that only optical signals in a visible-light band and optical signals in an infrared band remain after filtering; an image sensor for photoelectrically converting the two filtered bands, where the visible-light signals produce color image frames and the infrared signals produce infrared image frames; and an image fusion processing unit for fusing the color image frames and the infrared image frames generated by the image sensor to produce fused color image frames.
This scheme provides a monocular camera in which a double-pass filter is combined with image fusion, so the fused image quality is better. Because a single lens and a single image sensor are used, cost is also saved.
In a first possible implementation of the first aspect, the monocular camera further includes an infrared fill-light control circuit for controlling an infrared fill light to provide infrared illumination of the subject. This improves image quality at night.
In a second possible implementation of the first aspect, based on the first aspect or its first possible implementation, the image sensor is configured to generate a first color image frame at a first time, a first infrared image frame at a second time, and a second color image frame at a third time, the three times being adjacent in time, and the image fusion processing unit is specifically configured to: estimate a predicted color image frame at the second time using the first color image frame and the second color image frame; and fuse the predicted color image frame with the first infrared image frame to generate a fused color image frame. This implementation describes a concrete process for fusing a color image frame and an infrared image frame into a fused color image frame.
In a third possible implementation of the first aspect, the monocular camera further includes an infrared fill light, which is controlled by the infrared fill-light control circuit and provides infrared illumination of the subject. This improves image quality at night.
In a fourth possible implementation of the first aspect, the image fusion processing unit specifically includes: a first image signal processor, which receives the color image frames from the image sensor, performs image signal processing on them, and sends the processed color image frames to a digital signal processor; a second image signal processor, which receives the infrared image frames from the image sensor, performs image signal processing on them, and sends the processed infrared image frames to the digital signal processor; and the digital signal processor, which fuses the processed infrared image frames and the processed color image frames to generate fused color image frames. This implementation describes a specific internal structure of the image fusion processing unit.
In a fifth possible implementation of the first aspect, when the digital signal processor receives a color image frame, a start control signal is sent to the infrared fill light. This implementation specifies when the infrared fill light is turned on.
In a sixth possible implementation of the first aspect, the operating wavelength range of the infrared fill light is [840 nm, 1040 nm]. This wavelength range avoids affecting human eyes and reduces light pollution.
In a seventh possible implementation of the first aspect, the infrared fill light is a 930 nm fill light, i.e. the wavelength of the light it emits is close to 930 nm. This wavelength range avoids affecting human eyes and reduces light pollution.
In a second aspect, the present invention provides an embodiment of an image processing system, which includes the monocular camera of the first aspect or any possible implementation thereof, and further includes a network and a server. The network is configured to transmit the fused color image frames generated by the monocular camera; the server is configured to store the fused color image frames received over the network. This aspect introduces an image processing system built around the monocular camera embodiment, and the system can store the fused image frames.
In a first possible implementation of the second aspect, the stored images are further processed by the server or another device, for example: performing face recognition on the fused image frames, or performing license plate recognition on the fused image frames.
In a third aspect, an image processing method is provided, including the steps of: filtering light received by a lens so that only optical signals in a visible-light band and optical signals in an infrared band remain after filtering; photoelectrically converting the two filtered bands, where the visible-light signals produce color image frames and the infrared signals produce infrared image frames; and fusing the color image frames and the infrared image frames generated by the image sensor to produce fused color image frames. Because the method fuses frames generated from the two filtered bands, the fused image quality is better. In addition, cost can be saved by using a single lens and a single image sensor.
The third aspect also provides various possible implementations corresponding to the first aspect, with corresponding technical effects.
In a fourth aspect, there is provided a computer program product comprising computer-readable code instructions which, when executed by a computer, enable the computer to configure a monocular camera such that the monocular camera can perform the method of the third aspect or any of its possible implementations.
In a fifth aspect, there is provided a non-transitory computer-readable storage medium containing computer program code instructions which, when executed by a computer, enable the computer to configure a monocular camera such that the monocular camera can perform the method of the third aspect or any of its possible implementations. The non-transitory computer-readable storage medium comprises one or more of: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), flash memory, electrically erasable PROM (EEPROM), and hard disk drive.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly described below. The following drawings show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them.
FIG. 1 is an architecture diagram of a monocular camera embodiment of the present invention;
FIG. 2 is a schematic diagram of an image fusion embodiment of the present invention;
FIG. 3 is an architecture diagram of an image processing system embodiment of the present invention; and
FIG. 4 is a flowchart of an image processing method embodiment of the present invention.
Detailed Description
The terms "including" and "having," and any variations thereof, in the description and claims of the invention and the above-described drawings are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or acts is not limited to those listed but may alternatively include other steps or acts not listed or inherent to such process, method, article, or apparatus. The terms "first", "second", "third" and "fourth", etc. are used to distinguish between different objects and are not used to describe a particular order.
The embodiments of the invention provide a camera with a double-pass filter and a single image sensor; by fusing infrared and color image frames with this new architecture, a good color image frame can be obtained at night. To match this structure, the embodiments use a single (monocular) lens, so the design is simple and cost is saved.
In video surveillance applications such as warehouse management, unattended operation, and safe-city deployments, images or videos often have to be captured in low-illumination scenes, and information such as the outline of the subject is partially lost due to insufficient light. If a 750 nm infrared lamp is used for fill light, the imaging quality can be improved to a certain extent, but 750 nm light is still within the range perceivable by human eyes, so it causes visual stimulation to the people being photographed, i.e., so-called light pollution.
To take color pictures that are as clear as possible in low-illumination scenes such as at night, in the embodiments of the invention a 930 nm fill light provides infrared illumination of the subject, and the monocular camera uses a double-pass filter to filter the light entering the lens so that only 380-650 nm visible light and 930 ± 15 nm infrared light remain. Infrared image frames and color image frames are then generated and fused by a system on chip (SoC); the fused image is clearer and carries both color and black-and-white (luminance) information, i.e., low-illumination color capability. In addition, because the fill light used in the embodiments emits at 930 nm, human eyes cannot perceive it, and no visual stimulation is caused to the pedestrians being photographed.
It should be noted that in practice the 930 nm value mentioned in the embodiments is not exact; there is a certain tolerance, so it actually refers to a wavelength range around 930 nm, such as 930 ± 100 nm. In other words, an infrared fill light with an operating wavelength anywhere within 840 nm to 1040 nm falls within the scope of the embodiments of the invention.
The following describes an embodiment of the present invention in detail with reference to fig. 1.
As shown in FIG. 1, the camera includes: an infrared fill light 12 (optional), a lens 13, a double-pass filter 14, an image sensor 15, a system on chip (SoC) 16, and a fill-light control circuit 17. The SoC 16 specifically includes an image signal processor (ISP) 161, an image signal processor 162, a digital signal processor 163, and an encoder 164. Depending on the design, the fill light 12 may also be independent of the camera and thus not part of it. Alternatively, the image signal processor 161, the image signal processor 162, the digital signal processor 163, and the encoder 164 may be separate chips rather than being integrated in the SoC.
The lens 13 collects light from the subject 11. The lens 13 is generally a lens group composed of one or more pieces of optical glass (or plastic), and may be a single lens or a combination of lenses such as a concave lens, a convex lens, an M-type lens, and the like. The lens 13 may be spherical or aspherical. The embodiment of the invention is a monocular camera with only one lens, so the structure is simple and the cost is low.
In the embodiment of the invention, the camera only has 1 lens (the lens 13), so the camera belongs to a monocular camera.
The double-pass filter 14 has a coating on its surface and uses this coating to filter the light from the lens 13, selecting the optical signals of the desired bands. In the embodiment of the present invention, the light allowed to pass through the double-pass filter 14 covers two bands: a visible-light band (the exact limits of the visible band vary across the industry, but the ranges are similar, e.g., 380-650 nm, 400-760 nm, or 380-780 nm) and an infrared band of 930 ± 5 nm (or 930 ± 15 nm). The double-pass filter 14 is not a prism; compared with a prism it reduces the light attenuation caused by scattering, and it is estimated that the light energy passing through the double-pass filter 14 is more than 40% higher than that passing through a prism.
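For illustration only (this sketch is not part of the patent text), the two passbands described above can be modeled as a simple wavelength check in Python; the band limits are taken from this paragraph, and the constant and function names are our own.

```python
# Illustrative sketch: models the two passbands of the double-pass filter
# described above (380-650 nm visible, 930 +/- 15 nm infrared). Limits in nm.
VISIBLE_BAND = (380.0, 650.0)
INFRARED_BAND = (915.0, 945.0)  # 930 nm +/- 15 nm

def passes_dual_band_filter(wavelength_nm: float) -> bool:
    """Return True if the wavelength falls inside either retained band."""
    in_visible = VISIBLE_BAND[0] <= wavelength_nm <= VISIBLE_BAND[1]
    in_infrared = INFRARED_BAND[0] <= wavelength_nm <= INFRARED_BAND[1]
    return in_visible or in_infrared

# Example: 550 nm (green) and 930 nm pass; a 750 nm fill light would be rejected.
assert passes_dual_band_filter(550) and passes_dual_band_filter(930)
assert not passes_dual_band_filter(750)
```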
The image sensor 15 receives the optical signals filtered by the double-pass filter 14 and performs exposures to generate infrared image frames and color image frames. The sensor alternately exposes for visible light and for infrared light, thereby alternately generating color image frames and infrared image frames. Taking a video frame rate of 25 frames/second as an example, the total exposure time available for one infrared image frame plus one color image frame is 1/25 second, i.e., 1/50 second per frame on average. A longer visible-light exposure produces a better color image frame, while, thanks to the infrared fill light, good infrared image frames can be generated even with a somewhat shorter exposure. Therefore, in the embodiment of the present invention, the exposure time of the color image frame is longer than that of the infrared image frame; for example, the color image frame is exposed for 30 ms and the infrared image frame for 10 ms.
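As a rough check of the timing budget described above (our own illustration, using the 25 frames/second and 30 ms / 10 ms figures from this paragraph), a few lines of Python confirm that one color exposure plus one infrared exposure fit within a 1/25-second frame pair:

```python
# Exposure-budget illustration (not part of the patent text): at 25 fused
# frames per second, one color exposure plus one infrared exposure must fit
# into 1/25 s = 40 ms, i.e. 20 ms per raw frame on average.
FUSED_FRAME_RATE_HZ = 25
COLOR_EXPOSURE_MS = 30   # longer, to gather more visible light
IR_EXPOSURE_MS = 10      # shorter, thanks to the infrared fill light

pair_budget_ms = 1000 / FUSED_FRAME_RATE_HZ    # 40 ms per color+IR pair
average_per_raw_frame_ms = pair_budget_ms / 2  # 20 ms = 1/50 s
assert COLOR_EXPOSURE_MS + IR_EXPOSURE_MS <= pair_budget_ms
print(pair_budget_ms, average_per_raw_frame_ms)  # 40.0 20.0
```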
The sensor contains control logic, according to which visible-light and infrared exposures alternate continuously, so that infrared image frames and color image frames are generated continuously. The infrared image frames provide brightness information, while the visible-light images provide both color and brightness information.
The image signal processor 161 receives the infrared image frames sent by the sensor 15 and performs image optimization on them to improve image quality. An infrared image frame reflects the spatial distribution of infrared radiation from the target and background, and its radiance distribution mainly depends on the temperature and emissivity of the observed object. The image optimization may include, for example, one or more of the following: removing dark-current noise, linearizing the data to address data nonlinearity, removing dead pixels, removing noise, adjusting white balance, focusing and exposure, adjusting image sharpness, and color-space conversion (converting to a different color space for processing).
The image signal processor 162 receives the color image frames transmitted by the sensor 15 and performs image optimization on them to improve image quality. Compared with an infrared image, a visible-light image provides more detailed information about the target or scene and is better suited to observation by human eyes. The image optimization may include, for example, one or more of the following: removing dark-current noise, linearizing the data to address data nonlinearity, removing dead pixels, removing noise, adjusting white balance, focusing and exposure, adjusting image sharpness, color-space conversion (converting to a different color space for processing), color enhancement, and skin-tone optimization for portraits.
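As a loose illustration of a few of the optimization steps listed above, the following Python sketch applies gray-world white balance, denoising, and sharpening with OpenCV; these particular algorithms and function names are our own choices and are not prescribed by the patent.

```python
import cv2
import numpy as np

def gray_world_white_balance(bgr: np.ndarray) -> np.ndarray:
    """Simple gray-world white balance: scale each channel toward a common mean."""
    means = bgr.reshape(-1, 3).mean(axis=0)
    scale = means.mean() / np.maximum(means, 1e-6)
    return np.clip(bgr.astype(np.float32) * scale, 0, 255).astype(np.uint8)

def unsharp_mask(bgr: np.ndarray, amount: float = 0.5) -> np.ndarray:
    """Sharpen by adding back the difference to a Gaussian-blurred copy."""
    blurred = cv2.GaussianBlur(bgr, (0, 0), sigmaX=2)
    return cv2.addWeighted(bgr, 1 + amount, blurred, -amount, 0)

def isp_optimize_color_frame(bgr: np.ndarray) -> np.ndarray:
    """Toy stand-in for the color ISP path: denoise, white-balance, then sharpen."""
    denoised = cv2.fastNlMeansDenoisingColored(bgr, None, 10, 10, 7, 21)
    return unsharp_mask(gray_world_white_balance(denoised))
```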
In addition, the image signal processor 162 may send a control signal to the fill-light control circuit 17 to control the operation of the infrared fill light (e.g., a 930 nm fill light) 12. Specifically, the image signal processor 162 may issue the control signal after receiving a color image frame. Since the image signal processor 162 is located inside the SoC 16, the control signal is output through the pins of the SoC 16 and then carried to the fill-light control circuit 17 by wiring inside the camera. The SoC 16 serves as the image fusion processing unit.
The image signal processor 161 and the image signal processor 162 may also adjust image parameters such as contrast, saturation, gain, etc. by reading the configuration of the sensor 15.
The digital signal processor 163 performs image fusion on the infrared image frame processed by the image signal processor 161 and the color image frame processed by the image signal processor 162, and generates a fused color image frame.
Image fusion uses algorithms to combine multiple images of the same scene, acquired by image sensors operating in different wavelength ranges or with different imaging mechanisms, into a new image. The fused image contains more of the information to which the human visual system is sensitive and is therefore easier for human eyes to observe.
In the embodiment of the present invention, assume that the exposure time of a color image frame is 30 ms and that of an infrared image frame is 10 ms. The interval between two successive color image frames is then 10 ms. Using a motion-estimation algorithm, the color image frame corresponding to the exposure window of an infrared image frame can be estimated from the two adjacent color image frames, and the estimated color image frame is then fused with the infrared image frame.
Referring to FIG. 2, the sensor 15 alternately outputs color image frames and infrared image frames; among these, color image frame 21, infrared image frame 22, and color image frame 23 are three adjacent frames. The digital signal processor 163 estimates color image frame 24 from color image frame 21 and color image frame 23, with the exposure window of color image frame 24 coinciding with that of infrared image frame 22. The digital signal processor 163 then fuses infrared image frame 22 and color image frame 24 to obtain fused frame 25. By applying this method to successive frames, fused frames can be generated continuously.
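The patent does not fix a particular motion-estimation or fusion formula, so the following Python sketch is only one plausible reading of FIG. 2: the predicted color image frame 24 is approximated by a simple temporal average of frames 21 and 23 (a crude stand-in for a real motion-estimation step), and the fused frame 25 takes most of its luminance from the infrared frame while keeping the chrominance of the predicted color frame. The YCrCb split, the blending weights, and the function names are assumptions made for illustration.

```python
import cv2
import numpy as np

def predict_mid_color_frame(color_prev: np.ndarray, color_next: np.ndarray) -> np.ndarray:
    """Crude stand-in for motion estimation: average the two adjacent color frames
    (frames 21 and 23 in FIG. 2) to approximate the color frame at the infrared
    frame's exposure time (frame 24)."""
    return ((color_prev.astype(np.uint16) + color_next.astype(np.uint16)) // 2).astype(np.uint8)

def fuse_color_and_ir(predicted_color_bgr: np.ndarray, ir_gray: np.ndarray) -> np.ndarray:
    """One possible fusion: keep chrominance from the predicted color frame and
    blend its luminance with the infrared frame, which carries the night-time detail."""
    ycrcb = cv2.cvtColor(predicted_color_bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    fused_y = cv2.addWeighted(y, 0.4, ir_gray, 0.6, 0)  # blend weights are arbitrary here
    fused = cv2.merge([fused_y, cr, cb])
    return cv2.cvtColor(fused, cv2.COLOR_YCrCb2BGR)

# Usage with dummy data (real input would come from the sensor via the two ISPs):
color_21 = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
color_23 = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
ir_22 = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
fused_25 = fuse_color_and_ir(predict_mid_color_frame(color_21, color_23), ir_22)
```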
The encoder 164 encodes the image frames for playback; for example, the YUV data is encoded into H.264 or H.265 frames. The encoded frames may be played on a display or stored, and playing a sequence of frames continuously forms a video.
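As a rough illustration of this encoding step (not the encoder 164 itself), the sketch below writes fused BGR frames into an H.264 stream with OpenCV's VideoWriter; whether the 'avc1' (H.264) codec is actually available depends on the local OpenCV/FFmpeg build, and the file name, frame size, and frame count are placeholders.

```python
import cv2
import numpy as np

# Illustrative only: encode a short sequence of fused frames into an H.264 video
# at 25 fps. If 'avc1' is unavailable in the local build, 'mp4v' is a common fallback.
fourcc = cv2.VideoWriter_fourcc(*"avc1")
writer = cv2.VideoWriter("fused_output.mp4", fourcc, 25.0, (640, 480))

for _ in range(50):  # stand-in for a stream of fused color frames
    fused_frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    writer.write(fused_frame)

writer.release()
```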
The fill-light control circuit 17 receives the control signal sent by the SoC and controls the 930 nm fill light, triggering it to emit 930 nm infrared light that illuminates the subject 11; the illumination duration of the infrared light may be longer than the exposure duration of the infrared image frame.
The fill-light control circuit 17 may switch the 930 nm fill light on during the exposure window of each infrared image frame and off during the exposure window of each color image frame. To do so, the SoC intermittently sends control signals to the fill-light control circuit 17, which in turn intermittently turns the 930 nm fill light on.
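A minimal sketch of this alternating control, assuming a simple software loop that enables the 930 nm fill light only during the infrared exposure window; the set_fill_light() callback and the sleep-based timing are placeholders for the real fill-light control circuit and sensor timing signals, not an interface defined by the patent.

```python
import time

COLOR_EXPOSURE_S = 0.030   # 30 ms color exposure, fill light off
IR_EXPOSURE_S = 0.010      # 10 ms infrared exposure, fill light on

def set_fill_light(on: bool) -> None:
    """Placeholder for the fill-light control circuit (e.g. a GPIO line)."""
    print("fill light", "ON" if on else "OFF")

def run_exposure_cycle(num_pairs: int) -> None:
    """Alternate color and infrared exposures, enabling the 930 nm fill light
    only while an infrared frame is being exposed."""
    for _ in range(num_pairs):
        set_fill_light(False)
        time.sleep(COLOR_EXPOSURE_S)   # color frame exposure
        set_fill_light(True)
        time.sleep(IR_EXPOSURE_S)      # infrared frame exposure
    set_fill_light(False)

run_exposure_cycle(3)
```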
It should be noted that the infrared fill light 12 (in this embodiment, a fill light with a wavelength of 930 nm) is controlled by the fill-light control circuit 17 and provides infrared illumination of the subject. In practice, the infrared fill light 12 may be integrated into the camera as a single device, or it may be sold separately from the camera; in the latter case, the camera needs a corresponding interface for connecting and controlling the external (standalone) infrared fill light.
Referring to FIG. 3, the present invention also provides an embodiment of an image processing system. The image processing system includes monocular cameras 31 and 32, a network 33, and a server 34. The internal structure of the monocular cameras 31 and 32 is described above with reference to FIG. 1 and is only summarized briefly below. The number of monocular cameras in the image processing system may be one or more (two are shown in FIG. 3), and the number of servers 34 may likewise be one or more (one is shown in FIG. 3).
The monocular camera 31 (and likewise the monocular camera 32) includes a lens 13, a double-pass filter 14, an image sensor 15, and an image fusion processing unit 16. The lens 13 receives light from the subject; the double-pass filter 14 filters the light received by the lens so that only optical signals in a visible-light band and optical signals in an infrared band remain; the image sensor 15 photoelectrically converts the two filtered bands, where the visible-light signals produce color image frames and the infrared signals produce infrared image frames; and the image fusion processing unit 16 fuses the color image frames and the infrared image frames generated by the image sensor to produce fused color image frames.
The network 33 provides communication between the monocular cameras 31 and 32 and the server 34. It may be a wired network, a wireless network, or a combination of both, and may contain network devices such as routers and switches.
The server 34 receives the fused color image frames via the network 33 and stores and/or analyzes them. Optionally, the server 34 (or another device, such as a video analysis server or an image analysis server) may further process the stored images, for example: recognizing people in the fused color image frames and matching them, based on their features, against a person list in a database; or performing license plate recognition on vehicles to identify license plate numbers.
Referring to fig. 4, the present invention also provides an embodiment of an image processing method, which may be performed by the monocular camera described above.
Step S41: filter the light received by the lens so that only optical signals in the visible-light band and optical signals in the infrared band remain. This step can be performed by a double-pass filter or by a prism.
Step S42: photoelectrically convert the two filtered bands, where the visible-light signals produce color image frames and the infrared signals produce infrared image frames. This step may be performed by a sensor capable of handling optical signals in both the visible and infrared bands.
Step S43: fuse the color image frames and the infrared image frames generated by the image sensor to produce fused color image frames. This step may be performed by an image fusion processing unit (e.g., an SoC).
Fusing the generated color image frame and infrared image frame may specifically include: receiving the color image frame from the image sensor, performing image signal processing on it, and sending the processed color image frame to a digital signal processor; receiving the infrared image frame from the image sensor, performing image signal processing on it, and sending the processed infrared image frame to the digital signal processor; and fusing the processed infrared image frame and the processed color image frame to generate a fused color image frame.
The fusion algorithm is described below by way of a specific example.
Referring to FIG. 2, a first color image frame (color image frame 21) is generated at a first time, a first infrared image frame (infrared image frame 22) at a second time, and a second color image frame (color image frame 23) at a third time, the three times being adjacent in time. Fusing the generated color image frames and the infrared image frame then specifically includes: estimating a predicted color image frame 24 at the second time from the first color image frame 21 and the second color image frame 23; and fusing the predicted color image frame 24 with the first infrared image frame 22 to generate a fused color image frame.
Optionally, the method embodiment further includes: when the digital signal processor receives a color image frame, sending a start control signal to the 930 nm fill light. After receiving the start control signal, the 930 nm fill light emits 930 nm infrared light to illuminate the subject during the exposure of the infrared image frame. After the exposure of the infrared image frame is completed, a stop control signal can be sent to the 930 nm fill light to end the infrared illumination. When the digital signal processor receives the next color image frame, it again sends a start control signal to the 930 nm fill light; when the exposure of the next infrared image frame finishes, a stop control signal is sent again, and the process repeats in a cycle.
The present invention also provides a computer program product embodiment, which comprises computer-readable code instructions that, when executed by a computer, enable the computer to configure a monocular camera such that the monocular camera can perform the method of the third aspect or any of its possible implementations.
The present invention also provides an embodiment of a non-transitory computer-readable storage medium containing computer program code instructions which, when executed by a computer, enable the computer to configure a monocular camera such that the monocular camera can perform the method of the third aspect or any of its possible implementations. The non-transitory computer-readable storage medium comprises one or more of: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), flash memory, electrically erasable PROM (EEPROM), and hard disk drive.
The above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing examples, the technical solutions described in those embodiments may still be modified, and some technical features may be replaced by equivalents; such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (16)

1. A monocular camera comprising:
a lens for receiving light from the subject;
a double-pass filter, configured to filter the light received by the lens and retain, after filtering, optical signals in a visible-light band and optical signals in an infrared band;
an image sensor, configured to photoelectrically convert the optical signals of the two filtered bands, wherein the optical signals in the visible-light band produce color image frames after photoelectric conversion, and the optical signals in the infrared band produce infrared image frames after photoelectric conversion; and
an image fusion processing unit, configured to fuse the color image frames and the infrared image frames generated by the image sensor to generate fused color image frames.
2. The monocular camera of claim 1, wherein the monocular camera further comprises:
an infrared light supplement lamp control circuit, configured to control an infrared light supplement lamp to provide infrared fill light for the subject.
3. The monocular camera of claim 1 or 2, wherein the image sensor is configured to generate a first color image frame at a first time, a first infrared image frame at a second time, and a second color image frame at a third time, the first time, the second time, and the third time being adjacent in time, and the image fusion processing unit is specifically configured to:
estimate a predicted color image frame at the second time using the first color image frame and the second color image frame; and fuse the predicted color image frame and the first infrared image frame to generate a fused color image frame.
4. The monocular camera of claim 2 or 3, further comprising:
an infrared light supplement lamp, configured to be controlled by the infrared light supplement lamp control circuit and to provide infrared fill light for the subject.
5. The monocular camera according to any one of claims 1 to 4, wherein the image fusion processing unit specifically includes:
a first image signal processor, configured to receive the color image frames from the image sensor, perform image signal processing on the color image frames, and send the processed color image frames to a digital signal processor;
a second image signal processor, configured to receive the infrared image frames from the image sensor, perform image signal processing on the infrared image frames, and send the processed infrared image frames to the digital signal processor; and
the digital signal processor, configured to fuse the processed infrared image frames and the processed color image frames to generate the fused color image frames.
6. The monocular camera of claim 5, wherein:
the exposure time of the color image frame is longer than that of the infrared image frame.
7. The monocular camera of claim 5, wherein the infrared light supplement lamp control circuit is specifically configured to:
send a start control signal to the infrared light supplement lamp when the digital signal processor receives the color image frame.
8. The monocular camera of any one of claims 2-7, wherein the operating wavelength range of the infrared light supplement lamp is [840 nm, 1040 nm].
9. An image processing system comprising the monocular camera of any one of claims 1-8, a network, and a server;
wherein the network is configured to transmit the fused color image frames generated by the monocular camera;
the server is configured to receive the fused color image frames through the network, and to store and/or analyze the fused color image frames.
10. An image processing method applied to a monocular camera, comprising:
receiving light from the subject through a lens, and filtering the received light by using a double-pass filter to obtain optical signals in a visible-light band and optical signals in an infrared band;
performing photoelectric conversion, by using an image sensor, on the optical signals of the two filtered bands, wherein the optical signals in the visible-light band produce color image frames after photoelectric conversion, and the optical signals in the infrared band produce infrared image frames after photoelectric conversion; and
fusing, by using an image fusion processing unit, the color image frame and the infrared image frame generated by the photoelectric conversion to generate a fused color image frame.
11. The method of claim 10, wherein, prior to receiving the light from the subject, the method further comprises:
sending a control signal to an infrared light supplement lamp, wherein the control signal is used to control the infrared light supplement lamp to provide infrared fill light for the subject.
12. The method according to claim 10 or 11, wherein the step of performing photoelectric conversion on the filtered two wavelength bands of optical signals by using the image sensor comprises:
generating a first color image frame at a first time, generating a first infrared image frame at a second time, and generating a second color image frame at a third time, wherein the first time, the second time and the third time are adjacent in time;
the fusing the color image frame and the infrared image frame generated after the photoelectric conversion by using the image fusion processing unit specifically comprises:
estimating to obtain a predicted color image frame at a second time using the first color image frame and the second color image frame;
and fusing the predicted color image frame and the first infrared image frame to generate a fused color image frame.
13. The method according to claim 11 or 12, wherein, after sending the control signal to the infrared light supplement lamp, the method further comprises:
turning on the infrared light supplement lamp to provide infrared fill light for the subject.
14. The method according to any one of claims 10 to 13, wherein fusing the color image frame and the infrared image frame generated after photoelectric conversion specifically comprises:
performing image signal processing on the color image frame, and sending the processed color image frame to a digital signal processor;
receiving the infrared image frame from the image sensor, performing image signal processing on the infrared image frame, and sending the processed infrared image frame to the digital signal processor; and
fusing the processed infrared image frame and the processed color image frame to generate a fused color image frame.
15. The method of claim 14, further comprising:
sending a start control signal to the infrared light supplement lamp when the digital signal processor receives the color image frame.
16. The method according to any of claims 11-15, wherein the operating wavelength range of the infrared light supplement lamp is [840 nm, 1040 nm].
CN201910979447.0A 2019-10-15 2019-10-15 Monocular camera, image processing system, and image processing method Pending CN110798623A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910979447.0A CN110798623A (en) 2019-10-15 2019-10-15 Monocular camera, image processing system, and image processing method
PCT/CN2020/097704 WO2021073140A1 (en) 2019-10-15 2020-06-23 Monocular camera, and image processing system and image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910979447.0A CN110798623A (en) 2019-10-15 2019-10-15 Monocular camera, image processing system, and image processing method

Publications (1)

Publication Number Publication Date
CN110798623A true CN110798623A (en) 2020-02-14

Family

ID=69440452

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910979447.0A Pending CN110798623A (en) 2019-10-15 2019-10-15 Monocular camera, image processing system, and image processing method

Country Status (2)

Country Link
CN (1) CN110798623A (en)
WO (1) WO2021073140A1 (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110798623A (en) * 2019-10-15 2020-02-14 华为技术有限公司 Monocular camera, image processing system, and image processing method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170237887A1 (en) * 2014-11-13 2017-08-17 Panasonic Intellectual Property Management Co. Ltd. Imaging device and imaging method
CN107005639A (en) * 2014-12-10 2017-08-01 索尼公司 Image pick up equipment, image pickup method, program and image processing equipment
CN105988215A (en) * 2015-02-15 2016-10-05 宁波舜宇光电信息有限公司 Multispectral module set imaging system, manufacturing method thereof and application of multispectral module set imaging system
CN107534761A (en) * 2015-05-07 2018-01-02 索尼半导体解决方案公司 Imaging device, imaging method, program and image processing apparatus
CN109963086A (en) * 2018-07-30 2019-07-02 华为技术有限公司 Be time-multiplexed light filling imaging device and method

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021073140A1 (en) * 2019-10-15 2021-04-22 华为技术有限公司 Monocular camera, and image processing system and image processing method
CN113711584A (en) * 2020-03-20 2021-11-26 华为技术有限公司 Camera device
CN113711584B (en) * 2020-03-20 2023-03-03 华为技术有限公司 Camera device
WO2021217521A1 (en) * 2020-04-29 2021-11-04 华为技术有限公司 Camera and image acquisition method
CN114257707A (en) * 2020-09-21 2022-03-29 安霸国际有限合伙企业 Intelligent IP camera with colored night mode
CN114374776A (en) * 2020-10-15 2022-04-19 华为技术有限公司 Camera and camera control method
WO2022078036A1 (en) * 2020-10-15 2022-04-21 华为技术有限公司 Camera and control method therefor
CN114374776B (en) * 2020-10-15 2023-06-23 华为技术有限公司 Camera and control method of camera
CN113542546A (en) * 2021-05-31 2021-10-22 浙江大华技术股份有限公司 Image pickup device, image pickup method, equipment and device
CN113489865A (en) * 2021-06-11 2021-10-08 浙江大华技术股份有限公司 Monocular camera and image processing system
WO2023125087A1 (en) * 2021-12-30 2023-07-06 华为技术有限公司 Image processing method and related apparatus

Also Published As

Publication number Publication date
WO2021073140A1 (en) 2021-04-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2020-02-14)