CN115314629A - Imaging method, system and camera

Imaging method, system and camera

Info

Publication number
CN115314629A
Authority
CN
China
Prior art keywords
image
gain
exposure time
ratio
weight map
Prior art date
Legal status
Granted
Application number
CN202110502434.1A
Other languages
Chinese (zh)
Other versions
CN115314629B (en)
Inventor
谢建磊
范蒙
於敏杰
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202110502434.1A
Publication of CN115314629A
Application granted
Publication of CN115314629B
Legal status: Active


Abstract

The application provides an imaging method, an imaging system, and a camera. The imaging method includes: acquiring an output image of a single image sensor in a first working mode, where the output image includes a first image and a second image, and the image sensor uses different exposure times and different gains when generating the first image and the second image; processing the first image and the second image to obtain a third image, where the processing includes at least synthesis processing; and, when the image sensor switches from the first working mode to a second working mode, acquiring an output image of the image sensor in the second working mode, the output image being a fourth image. The imaging method can improve the working flexibility of the image sensor and the imaging quality.

Description

Imaging method, system and camera
Technical Field
The present application relates to the field of data processing technologies, and in particular, to an imaging method, an imaging system, and a camera.
Background
With the development of image sensor technology and image processing technology, mainstream image processing systems can obtain high-quality images in scenes with sufficient illumination. In a low-illumination environment, however, the obtained image tends to suffer from low brightness and heavy noise. A mainstream image processing system increases the exposure time to raise the light input, improve image brightness, and reduce image noise, but a longer exposure time aggravates the smear of motion areas in the image.
How to improve image quality in a low-illumination environment has therefore become a technical problem to be solved urgently.
Disclosure of Invention
In view of the above, the present application provides an imaging method, an imaging system and a camera.
Specifically, the method is realized through the following technical scheme:
according to a first aspect of embodiments of the present application, there is provided an imaging method including:
acquiring an output image of a single image sensor in a first working mode, wherein the output image comprises a first image and a second image, and when the image sensor generates the first image and the second image, the exposure time is different and the gain is different;
processing the first image and the second image to obtain a third image, the processing including at least synthesis processing;
when the image sensor is switched from the first working mode to the second working mode, acquiring an output image of the image sensor in the second working mode, wherein the output image is a fourth image.
According to a second aspect of embodiments of the present application, there is provided an imaging system comprising: a control unit, an image sensor and a processing unit; wherein:
a control unit for determining exposure parameters, the exposure parameters including an exposure time and a gain;
the image sensor is used for outputting an image in a first working mode, the output image comprises a first image and a second image, and when the image sensor generates the first image and the second image, the exposure time is different and the gain is different;
the processing unit is used for processing the first image and the second image to obtain a third image, and the processing at least comprises synthesis processing;
the image sensor is further used for outputting a fourth image when the first working mode is switched to the second working mode.
According to a third aspect of embodiments of the present application, there is provided a camera including: a lens, an image sensor and a processor; wherein:
the lens is used for processing incident light into a light signal incident to the image sensor;
the image sensor is used for outputting an image according to the optical signal in a first working mode, the output image comprises a first image and a second image, and when the image sensor generates the first image and the second image, the exposure time is different and the gain is different;
the processor is used for processing the first image and the second image to obtain a third image, and the processing at least comprises synthesis processing;
the image sensor is further used for outputting a fourth image when the first working mode is switched to the second working mode.
According to the imaging method provided in the embodiments of the present application, a plurality of different working modes are set for the image sensor, so that the image sensor can output different images in different working modes, which improves the working flexibility of the image sensor. When the image sensor is in the first working mode, the single image sensor is controlled to generate and output the first image and the second image with different exposure times and gains, and the first image and the second image are processed to obtain the third image, which improves the imaging quality.
Drawings
FIG. 1 is a schematic flow chart diagram of an imaging method shown in an exemplary embodiment of the present application;
FIG. 2A is a block diagram of an imaging system in accordance with an exemplary embodiment of the present application;
FIGS. 2B-2E are schematic structural diagrams of imaging systems with different gain calculation modes according to exemplary embodiments of the present application;
FIGS. 2F-2H are schematic structural diagrams of imaging systems of different weight map determination approaches shown in exemplary embodiments of the present application;
FIGS. 3A-3D are schematic diagrams illustrating a gain calculation unit calculating a gain according to an exemplary embodiment of the present application;
FIGS. 4A-4H are schematic diagrams illustrating a weight calculating unit performing filtering processing according to pixel value differences to obtain a weight map according to an exemplary embodiment of the present application;
FIG. 5 is a diagram illustrating a weight calculation unit using a motion detection model to obtain a weight map according to an exemplary embodiment of the present application;
FIGS. 6A-6C are schematic diagrams of a weight calculation unit obtaining a weight map by using a target detection model according to an exemplary embodiment of the present application;
FIG. 7 is a schematic block diagram of an imaging system shown in an exemplary embodiment of the present application;
FIG. 8 is a schematic diagram of an imaging device according to an exemplary embodiment of the present application;
FIG. 9 is a schematic structural diagram of another imaging apparatus according to another exemplary embodiment of the present application;
FIG. 10 is a diagram illustrating a hardware configuration of an electronic device according to an exemplary embodiment of the present application;
FIG. 11 is a schematic diagram illustrating a structure of a camera according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatuses and methods consistent with certain aspects of the application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In order to make the technical solutions provided in the embodiments of the present application better understood and make the above objects, features, and advantages of the embodiments of the present application more obvious and understandable by those skilled in the art, the technical solutions in the embodiments of the present application are further described in detail below with reference to the accompanying drawings.
Referring to FIG. 1, which is a schematic flowchart of an imaging method according to an embodiment of the present application, the imaging method may include the following steps:
step S100, obtaining an output image of a single image sensor in a first working mode, where the output image includes a first image and a second image, and when the image sensor generates the first image and the second image, the exposure time is different and the gain is different.
Step S110, processing the first image and the second image to obtain a third image, where the processing includes at least a synthesizing process.
In the embodiment of the application, the condition that the image sensor generates the image meeting the quality requirement can be different according to different scenes.
For example, in a scene with strong ambient light, an image with proper brightness can be obtained within a short exposure time; in a scene with weak ambient light, the exposure time needs to be increased, or a larger gain needs to be set, to obtain an image with proper brightness.
In order to meet the requirements of different scenes, a plurality of different working modes can be set for the image sensor, so that the image sensor can generate images according to different strategies in different working modes, which improves the working flexibility of the image sensor while ensuring image quality.
For example, when the image sensor is in the first operating mode, a single image sensor may be controlled to generate and output the first image and the second image according to different exposure parameters, and the third image may be obtained by performing a synthesizing process on the first image and the second image.
Illustratively, the exposure parameters may include exposure time and sensor gain (gain for short).
It should be noted that, unless otherwise specified, all the gains mentioned below refer to sensor gains.
For example, when the image sensor generates the first image and the second image with different exposure times and different gains, the first image and the second image can respectively ensure the quality of the target region (for example, by reducing the degree of smear) and the quality of the non-target region (for example, by improving its signal-to-noise ratio). Combining the first image and the second image into the third image therefore yields a combined image in which both the target region and the non-target region have good quality.
Step S120, when the image sensor is switched from the first working mode to the second working mode, acquiring an output image of the image sensor in the second working mode, where the output image is a fourth image.
In this embodiment, when the image sensor is switched from the first operation mode to the second operation mode, the image sensor may output a fourth image.
For example, when the image sensor operates in the second working mode, it may be unnecessary to synthesize two or more consecutive frames of the fourth image output by the image sensor, which reduces the power consumption and computational complexity of the imaging system.
In the method flow shown in FIG. 1, a plurality of different working modes are set for the image sensor, so that the image sensor can output different images in different working modes, which improves the flexibility of the image sensor. When the image sensor is in the first working mode, the single image sensor is controlled to generate and output the first image and the second image with different exposure times and gains, and the first image and the second image are processed to obtain the third image, which improves the imaging quality.
In addition, in the embodiments of the present application, different exposure parameters are set for a single image sensor, so that in the first working mode the single image sensor generates and outputs the first image and the second image according to the respective exposure parameters. That is, a single image sensor realizes image generation under different exposure parameters, which simplifies the structure of the imaging system.
In some embodiments, the exposure time when the image sensor generates the first image is a first exposure time, the exposure time when the image sensor generates the second image is a second exposure time, and the first exposure time is less than the second exposure time; the gain when the image sensor generates the first image is a first gain, the gain when the image sensor generates the second image is a second gain, and the first gain is greater than the second gain.
For example, in a low-illumination environment, it may be considered to increase the exposure time to increase the light-entering amount, thereby increasing the image brightness and improving the image quality.
However, when there is a moving object in the scene, such as a pedestrian, a vehicle, or another moving object, a long exposure time degrades image quality; for example, when there is a running vehicle in the scene, heavy smear in the target region degrades the image quality of the vehicle's license plate.
If, when the long- and short-frame images are collected, the gain is calculated from the brightness of the long frame so that the gain of the short frame equals the gain of the long frame, and the short frame is then multiplied by a digital gain in the algorithm processing flow to reach proper brightness, this scheme does not make full use of the fact that the sensor gain amplifies noise less than a digital gain additionally applied in the algorithm processing flow.
Based on the above consideration, when the image sensor collects the long- and short-frame images, the first image and the second image are generated with different exposure times and different physical sensor gains. This imaging approach can both improve the signal-to-noise ratio of the non-target area and avoid smear in the target area, thereby separately ensuring the imaging quality of the target area and the non-target area of the collected images.
For example, the exposure time when the image sensor generates the first image (referred to herein as the first exposure time) may be less than the exposure time when the image sensor generates the second image (referred to herein as the second exposure time) to reduce the degree of smearing of the target region of the first image and to improve the signal-to-noise ratio of the non-target region.
Illustratively, if the long-frame gain equals the short-frame gain when the long- and short-frame images are acquired, the short frame must be multiplied by a large digital gain in subsequent algorithm processing to reach proper brightness, which makes the noise of the target area of the short-frame image too large. Therefore, to reduce the noise of the target area of the first image (the first image may be referred to as the short frame), the gain when the image sensor generates the first image (referred to herein as the first gain) may be larger than the gain when the image sensor generates the second image (the second image may be referred to as the long frame; this gain is referred to herein as the second gain). In this way, the short frame needs no additional digital gain, or only a small digital gain, in subsequent algorithm processing to reach proper brightness, which makes full use of the fact that the sensor gain amplifies noise less than a digital gain additionally applied in the algorithm processing flow, and thereby reduces the noise of the target area of the first image.
Therefore, when the first image and the second image are processed to obtain the third image, both the quality of the target area and the quality of the non-target area can be taken into account: the smear in the target area is reduced, the signal-to-noise ratio of the non-target area is improved, and the image quality in a low-illumination environment is improved.
Illustratively, the gain of the image sensor may include an analog gain and/or a digital gain. Considering that the analog gain amplifies noise less than the digital gain, the analog gain may be used preferentially when setting the gain of the image sensor.
Illustratively, the first gain may include a first analog gain and a first digital gain, the first digital gain being greater than or equal to 0; the second gain may include a second analog gain and a second digital gain, the second digital gain being greater than or equal to 0.
As an example, the first gain being greater than the second gain may include:
the first analog gain is greater than the second analog gain, i.e., neither of the image sensors sets the digital gain when the first image and the second image are generated.
As another example, the first gain being greater than the second gain may include:
the sum of the first analog gain and the first digital gain is greater than the second analog gain; the first digital gain is greater than 0, i.e. the image sensor does not set the digital gain when generating the second image sequence.
As another example, the first gain being greater than the second gain may include:
the sum of the first analog gain and the first digital gain is greater than the sum of the second analog gain and the second digital gain; the first digital gain is greater than 0 and the second digital gain is greater than 0.
In one example, the ratio of the first exposure time to the second exposure time is a first ratio, the ratio of the second gain to the first gain is a second ratio, and the first ratio is equal to the second ratio.
For example, the ratio of the gains for the first image and the second image may be determined according to the ratio of their exposure times, and the ratio of the second gain to the first gain (referred to herein as the second ratio) is controlled to be equal to the ratio of the first exposure time to the second exposure time (referred to herein as the first ratio). The first gain is therefore greater than the second gain, so a larger sensor gain is set for the first image; correspondingly, the digital gain that must be additionally applied in the subsequent algorithm processing flow can be reduced, and the noise of the target region of the first image can be reduced.
In one example, the ratio of the first exposure time to the second exposure time is a first ratio, the ratio of the second gain to the first gain is a second ratio, and the difference between the first ratio and the second ratio is less than 10%.
For example, the gain ratio of the first image to the second image may be determined according to their exposure time ratio such that the difference between the ratio of the first exposure time to the second exposure time (referred to herein as the first ratio) and the ratio of the second gain to the first gain (referred to herein as the second ratio) is less than 10%, so that the first image and the second image have approximately the same brightness before synthesis, which improves the quality of the synthesized image.
For example, the first ratio may be smaller than the second ratio.
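As a non-authoritative illustration of the ratio relationship above, the following Python sketch checks whether a candidate pair of exposure parameters keeps the first ratio and the second ratio matched, or within the 10% difference mentioned above, which in turn keeps the exposure-time and gain products, and hence the approximate brightness, of the two images balanced. The function name, the example values, and the reading of "difference" as an absolute difference between the two ratios are assumptions.

def ratios_balanced(exp_time_1, exp_time_2, gain_1, gain_2, max_diff=0.10):
    # First ratio: first exposure time to second exposure time (< 1, since exp_time_1 < exp_time_2).
    first_ratio = exp_time_1 / exp_time_2
    # Second ratio: second gain to first gain (< 1, since gain_1 > gain_2).
    second_ratio = gain_2 / gain_1
    # Equal ratios imply exp_time_1 * gain_1 == exp_time_2 * gain_2,
    # i.e. roughly equal brightness of the first and second images.
    return abs(first_ratio - second_ratio) <= max_diff

# Hypothetical values: 4 ms at gain 16 for the first image, 16 ms at gain 4 for the second image.
print(ratios_balanced(4.0, 16.0, 16.0, 4.0))  # True: both ratios equal 0.25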
In some embodiments, the switching between the first mode of operation and the second mode of operation is based on the intensity of ambient light.
For example, the image sensor needs to generate images with different exposure parameters at different ambient light intensities to ensure image quality. When the ambient light is strong, the quality of the image generated by the image sensor can generally meet the requirement; when the ambient light is weak, the quality of the image generated by the image sensor may be poor, and additional processing is needed to improve it, as in the manner described in the above embodiments.
Illustratively, the switching between the first operation mode and the second operation mode may be performed based on the intensity of the ambient light.
In one example, the image sensor may be controlled to operate in a first operating mode when ambient light is weak; and when the ambient light is strong, controlling the image sensor to work in a second working mode.
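A minimal sketch of how this switching might be driven, assuming an ambient light reading (for example, in lux) is available to the control unit; the threshold value and the names used here are illustrative assumptions and not part of this application.

LOW_LIGHT_THRESHOLD_LUX = 50.0  # assumed threshold, tuned per deployment

def select_working_mode(ambient_light_lux):
    # Weak ambient light: first working mode (first/second images plus synthesis).
    # Strong ambient light: second working mode (fourth image, no synthesis).
    if ambient_light_lux < LOW_LIGHT_THRESHOLD_LUX:
        return "first working mode"
    return "second working mode"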
In some embodiments, the synthesizing the first image and the second image to obtain the third image in step S110 may include:
synthesizing the first image and the second image based on the weight map to obtain a third image; the weight map is determined according to the first image sequence or the second image sequence; the first image sequence comprises at least one frame of a first image and the second image sequence comprises at least one frame of a second image, the first image sequence and the second image sequence being determined from output images of the single image sensor in the first mode of operation.
For example, to improve the image quality improvement effect of image synthesis, a first image sequence may be determined according to a first image output by the image sensor in the first operating mode, and a weight map for performing synthesis processing on the first image and the second image may be determined according to the first image sequence.
Alternatively, the second image sequence may be determined according to a second image output by the image sensor in the first operation mode, and the weight map for performing the synthesizing process on the first image and the second image may be determined according to the second image sequence.
Illustratively, the first image sequence includes at least one frame of the first image; the second image sequence comprises at least one frame of the second image.
For example, the first image and the second image may be synthesized to obtain the third image according to the determined weight map.
For example, the image sensor may output the first image and the second image alternately, that is, the image sensor outputs a frame of the first image, then a frame of the second image, then a frame of the first image, and so on.
It should be appreciated that the above embodiments describe only one output mode for the first image and the second image generated by the image sensor, and do not limit the scope of the present application.
In one example, determining a weight map from a first sequence of images may include:
the first image sequence comprises a current frame first image and a historical frame first image, and a weight map is determined according to the difference of pixel values of the current frame first image and the historical frame first image.
For example, the history frame first image may refer to a first image whose output time is prior to the output time of the current frame first image.
For example, the historical frame first image may include one or more frames of first images adjacent to the current frame first image.
For example, a long- and short-frame fusion scheme that calculates the weights from the difference between the long and short frames and from the brightness of the long and short frames cannot resolve oscillation phenomena, such as strobing, caused by the different exposure times.
In view of the above problems, in the embodiment of the present application, when at least two frames of first images exist in the first image sequence, the weight map may be determined according to a difference between pixel values of the current frame first image and the historical frame first image, and since exposure times of the current frame first image and the historical frame first image are the same, oscillation phenomena such as stroboscopic light caused by different exposure times may be avoided, and accuracy and a detection rate of object motion information detection in a low-illumination environment may be improved.
For example, the current frame first image and the historical frame first image may be as close in time as possible to improve the quality of the composite image.
In one example, the current frame first image and the historical frame first image are adjacent frames in the first image sequence.
In one example, determining the weight map from the second sequence of images may include:
the second image sequence comprises a current frame second image and a historical frame second image, and the weight map is determined according to the difference of pixel values of the current frame second image and the historical frame second image. For example, the history frame second image may refer to a second image whose output time is prior to the output time of the current frame second image.
For example, the history frame second image may include one or more frame second images adjacent to the current frame second image.
For example, a long- and short-frame fusion scheme that calculates the weights from the difference between the long and short frames and from the brightness of the long and short frames cannot resolve oscillation phenomena, such as strobing, caused by the different exposure times.
In view of the above problems, in the embodiment of the present application, when at least two frames of second images exist in the second image sequence, the weight map may be determined according to a difference between pixel values of the current frame of second image and the historical frame of second image, and since exposure times of the current frame of second image and the historical frame of second image are the same, oscillation phenomena such as stroboscopic light caused by different exposure times may be avoided, and accuracy and detection rate of object motion information detection in a low-illumination environment may be improved.
For example, the current frame second image and the historical frame second image may be as close in time as possible to improve the quality of the composite image.
In one example, the current frame second image and the historical frame second image are adjacent frames in the second image sequence.
It should be noted that, in the embodiments of the present application, in addition to determining the weight map from the pixel value difference between the current frame first image and the historical frame first image, or from the pixel value difference between the current frame second image and the historical frame second image, the weight map may also be determined from both differences together, that is, the pixel value difference between the current frame first image and the historical frame first image together with the pixel value difference between the current frame second image and the historical frame second image.
For example, a corresponding weight map (assumed as weight map 1) may be determined according to the pixel value difference between the current frame first image and the historical frame first image, a corresponding weight map (assumed as weight map 2) may be determined according to the pixel value difference between the current frame second image and the historical frame second image, and a final weight map may be determined according to the weight maps 1 and 2.
For example, the weight map 1 and the weight map 2 may be fused, e.g., weighted, to determine a final weight map.
In addition, the current frame first image and the current frame second image can be weighted to obtain a current frame weighted image, the historical frame first image and the historical frame second image can be weighted to obtain a historical frame weighted image, and the weight map can then be determined according to the pixel value difference between the current frame weighted image and the historical frame weighted image.
In one example, determining the weight map from pixel value differences may include:
and filtering the pixel value difference to obtain a weight map.
Illustratively, the weight map may be obtained by performing a filtering process on the pixel value difference.
For example, taking the case in which the weight map is determined from the pixel value difference between the current frame first image and the historical frame first image in the first image sequence, the current frame first image and the historical frame first image may be subtracted by a subtraction unit to obtain a residual image, the residual image is then mean-filtered within a neighborhood of a specified size, and the residual mean of each pixel is converted into the weight map by thresholding.
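The following numpy sketch illustrates this route (subtraction, mean filtering within a neighborhood, thresholding) for two first images with the same exposure. The neighborhood size, the thresholds, and the weight range are assumptions chosen for illustration only.

import numpy as np
from scipy.ndimage import uniform_filter

def weight_map_from_difference(cur_img, hist_img, kernel=5, t_low=4.0, t_high=16.0, n=64):
    # Residual image between the current frame and the historical frame (same exposure).
    residual = np.abs(cur_img.astype(np.float32) - hist_img.astype(np.float32))
    # Mean filtering of the residual within a kernel x kernel neighborhood.
    residual_mean = uniform_filter(residual, size=kernel)
    # Thresholding: map the residual mean to [0, 1], then scale to the normalized weight n.
    alpha = np.clip((residual_mean - t_low) / (t_high - t_low), 0.0, 1.0)
    return alpha * n  # larger weight where motion is likely, favoring the first image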
In another example, determining the weight map from pixel value differences may include:
and sending the pixel value difference into a pre-trained convolutional neural network to obtain a weight map, wherein the pre-trained convolutional neural network is used for carrying out motion detection on the input pixel value difference and obtaining the weight map.
For example, in order to improve the efficiency and accuracy of motion detection, a convolutional neural network model (which may be referred to as a motion detection model) may be trained in advance to perform motion detection on an input pixel value difference and output a weight map. After the pixel value difference is determined in the above manner, it may be input into the pre-trained motion detection model to obtain the corresponding weight map.
For example, taking the example of determining the weight map according to the pixel value difference between the current frame second image and the historical frame second image included in the second image sequence, the pixel value difference between the current frame second image and the historical frame second image may be input into a pre-trained motion detection model to obtain a corresponding weight map.
For another example, taking the case in which the weight map is determined from the pixel value difference between the current frame first image and the historical frame first image and the pixel value difference between the current frame second image and the historical frame second image, the former difference may be input into a pre-trained motion detection model to obtain a corresponding weight map (assumed to be weight map 1), the latter difference may be input into a pre-trained motion detection model to obtain a corresponding weight map (assumed to be weight map 2), and the final weight map may be obtained from weight map 1 and weight map 2.
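As a hedged sketch only, the model below shows one plausible shape for such a motion detection model: a few convolution layers that map a pixel value difference to a per-pixel weight in [0, 1]. The architecture, layer sizes, and training procedure are assumptions; this application does not specify them.

import torch
import torch.nn as nn

class MotionDetectionNet(nn.Module):
    # Assumed architecture: pixel value difference in, per-pixel weight map out.
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1), nn.Sigmoid(),  # weights in [0, 1]
        )

    def forward(self, pixel_diff):
        # pixel_diff: tensor of shape (N, 1, H, W), e.g. |current frame - historical frame|.
        return self.body(pixel_diff)

After training, the output of such a network can be scaled by the normalization weight n to match the synthesis formula used later.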
In some embodiments, the position information of the key target in the image can be determined by performing target detection on the image, and the weight map can be determined according to the position information of the key target in the image, so that the synthesis weight can be obtained through at least one frame of image, and the consumption of cache and calculation amount is saved.
By way of example, key targets may include, but are not limited to, vehicles, pedestrians, animals, or signal lights, etc.
In one example, determining the weight map from the first sequence of images may include:
the first image sequence is a first image, and the weight map is determined according to the position information of the key target in the first image.
For example, object detection may be performed on the first image, position information of a key object in the first image may be determined, and the weight map may be determined according to the position information of the key object in the first image.
In one example, determining the weight map from the second sequence of images may include:
the second image sequence is a second image, and the weight map is determined according to the position information of the key target in the second image.
For example, the target detection may be performed on the second image, the position information of the key target in the second image may be determined, and the weight map may be determined according to the position information of the key target in the second image.
It should be noted that, in the embodiments of the present application, in addition to determining the weight map according to the position information of the key target in the first image as described above, or according to the position information of the key target in the second image, the weight map may also be determined according to the position information of the key target in the first image together with the position information of the key target in the second image.
For example, target detection may be performed on the first image to determine the position information of the key target in the first image, target detection may be performed on the second image to determine the position information of the key target in the second image, and the weight map may then be determined according to the position information of the key target in both images.
In one example, determining the weight map from the location information of the key target may include:
and obtaining a weight map by utilizing a pre-trained convolutional neural network, wherein the pre-trained convolutional neural network is used for carrying out target detection on an input image and obtaining the weight map.
For example, in order to improve the accuracy of target detection, a convolutional neural network model (which may be referred to as a target detection model) for performing target detection on an input image and obtaining a weight map may be trained in advance, and the trained target detection model is used to determine the weight map.
For example, taking the determination of the weight map according to the position information of the key target in the first image as an example, the first image may be input into a pre-trained target detection model to obtain a corresponding weight map.
For another example, taking the case in which the weight map is determined according to the position information of the key target in the first image and the position information of the key target in the second image, the first image may be input into a pre-trained target detection model to obtain a corresponding weight map (assumed to be weight map 1), the second image may be input into a pre-trained target detection model to obtain a corresponding weight map (assumed to be weight map 2), and the final weight map may then be determined from weight map 1 and weight map 2.
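A rough sketch of turning detected key-target positions into a weight map: bounding boxes reported by a pre-trained detector are rasterized so that key-target regions favor the first (short-exposure) image. The detector interface, the box format, and the in-box weight are assumptions.

import numpy as np

def weight_map_from_detections(image_shape, boxes, n=64):
    # image_shape: (height, width); boxes: iterable of (x0, y0, x1, y1) key-target
    # bounding boxes (pedestrians, vehicles, animals, signal lights, ...) from a detector.
    h, w = image_shape
    alpha = np.zeros((h, w), dtype=np.float32)
    for x0, y0, x1, y1 in boxes:
        # Pixels inside a key-target box take the full weight n (favoring the first image);
        # all other pixels keep weight 0 (favoring the second image).
        alpha[int(y0):int(y1), int(x0):int(x1)] = n
    return alpha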
In some embodiments, processing the first image and the second image to obtain a third image may include:
according to the configuration weight of each pixel position in the weight map, weighting processing is carried out on each pixel value of the first image and the second image to obtain a third image; the configuration weight of any pixel position is used for determining the weighted weight of the first image and the second image at the pixel position.
For example, when the weight map for combining the first image and the second image is obtained in the above manner, the third image may be obtained by performing weighting processing on each pixel value of the first image and the second image according to the arrangement weight of each pixel position in the weight map.
For example, the first image and the second image may be processed according to the weight map to obtain a third image according to the following formula:
img_fus=(img_1*alpha+img_2*(n-alpha))/n
where img_fus denotes the composite image (i.e., the third image), img_1 denotes the first image, img_2 denotes the second image, alpha denotes the weight map, and n denotes the normalization weight.
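A direct numpy transcription of this formula, under the assumption that both images and the weight map have the same resolution and that n is the same normalization weight used when the weight map was built:

import numpy as np

def fuse(img_1, img_2, alpha, n=64):
    # img_fus = (img_1 * alpha + img_2 * (n - alpha)) / n
    # img_1: first image; img_2: second image; alpha: per-pixel weight map in [0, n].
    img_1 = img_1.astype(np.float32)
    img_2 = img_2.astype(np.float32)
    return (img_1 * alpha + img_2 * (n - alpha)) / n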
In one example, processing the first image and the second image to obtain a third image may include:
when the average luminance of the first image is different from the average luminance of the second image, the average luminance of the first image is adjusted to be the same as the average luminance of the second image before the combining process is performed.
Illustratively, in order to ensure that the dynamic range of the synthesized image is higher than that of the images before synthesis, a wide-dynamic synthesis scheme must ensure that the brightness of the long and short frames differs, which limits the scenes to which such a scheme is applicable.
For example, in order to improve the quality of the composite image, it may be possible to ensure that the average brightness of the first image is the same as that of the second image.
It should be noted that the same average brightness mentioned in the embodiments of the present application does not require that the average brightness values be strictly equal; a tolerable deviation is allowed. That is, if the difference between the average brightness of the first image and that of the second image is within a preset difference range, the two average brightness values can be considered the same; if the difference is not within the preset difference range, they can be considered different.
For example, when the average luminance of the first image is different from the average luminance of the second image, the average luminance of the first image may be adjusted to be the same as the average luminance of the second image before the first image and the second image are subjected to the combining process.
As an example, the average luminance of the first image may be the same as the average luminance of the second image by multiplying the first image by the ratio of the average luminance of the second image to the average luminance of the first image.
For example, assuming that the average luminance of the first image is L1 and the average luminance of the second image is L2, the average luminance of the first image may be made the same as the average luminance of the second image by multiplying the first image by L2/L1.
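The ratio-based adjustment above can be sketched as follows; clipping to the valid pixel range and per-channel handling are left out for brevity.

import numpy as np

def match_average_brightness(img_1, img_2):
    # Multiply the first image by L2 / L1 so its average brightness matches the second image's.
    l1 = float(np.mean(img_1))
    l2 = float(np.mean(img_2))
    return img_1.astype(np.float32) * (l2 / l1)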
In some embodiments, the first gain and the second gain may be determined by:
determining a first gain according to the brightness information of the first image;
and determining a second gain according to the first gain and the ratio of the second exposure time to the first exposure time.
For example, in order to keep the brightness of the target area in the synthesized image at a proper level, when performing the gain calculation, the corresponding gain (i.e., the first gain) may be determined according to the brightness information of the first image output by the image sensor, and the corresponding gain of the second image (i.e., the second gain) may be determined according to the first gain and the ratio of the second exposure time to the first exposure time.
It should be noted that, in the embodiment of the present application, the second gain may also be calculated according to the brightness information of the second image output by the image sensor, and the first gain is determined according to the second gain and the ratio of the second exposure time to the first exposure time.
In some embodiments, the first gain and the second gain may be determined by:
determining a second gain according to the brightness information of the third image;
and determining the first gain according to the second gain and the ratio of the second exposure time to the first exposure time.
For example, to improve the signal-to-noise ratio of the non-target region, a second gain may be calculated according to the brightness information of the third image, and the first gain may be determined according to the second gain and the ratio of the second exposure time to the first exposure time.
In some embodiments, the first gain and the second gain may be determined by:
determining a first gain according to the brightness information of the first image; and determining a second gain according to the brightness information of the second image.
For example, in order to improve the accuracy of the gain control, the first gain may be determined according to the luminance information of the first image sequence, and the second gain may be determined according to the luminance information of the second image sequence, respectively.
In order to enable those skilled in the art to better understand the technical solutions provided by the embodiments of the present application, the technical solutions provided by the embodiments of the present application are described below with reference to specific examples.
Referring to fig. 2A, an imaging system provided in an embodiment of the present application may include a control unit, an image sensor, a weight calculation unit, and a processing unit.
In practical applications, the weight calculation unit may be a functional sub-unit of the processing unit; that is, the weight calculation may also be a function implemented by the processing unit.
As shown in fig. 2A, the control unit may perform gain calculation according to the luminance information of the first image, or perform gain calculation according to the luminance information of the second image, or perform gain calculation according to the luminance information of the third image, or perform gain calculation according to the luminance information of the first image and the luminance information of the second image, respectively.
The weight calculation unit may perform weight map determination according to the first image, or may perform weight map determination according to the second image, or may perform weight map determination according to the first image and the second image.
It should be noted that various different implementations of the gain calculation performed by the control unit may be flexibly combined with various different implementations of the weight calculation unit determining the weight map.
For example, the control unit may perform a gain calculation (i.e., obtain a first gain) according to the brightness information of the first image, and determine a second gain according to the first gain and the exposure time ratio; the weight calculating unit determines a weight map according to the first image.
As shown in FIG. 2A, in order to improve image quality under low-illumination conditions, the control unit obtains the exposure parameters, which may include the exposure time and the gain. When the image sensor is in the first working mode, it may alternately generate and output the first image and the second image according to the exposure parameters, using different exposure times and different gains for the first image and the second image. The weight calculation unit may perform weight calculation according to at least one frame of the first image and the second image to obtain a weight map, and the processing unit may synthesize the first image and the second image according to the weight map and output a third image.
For example, the weight calculation unit may perform weight calculation according to a pixel difference between the current frame first image and the historical frame first image to obtain a weight map; or, carrying out weight calculation according to the pixel difference between the current frame second image and the historical frame second image to obtain a weight map.
When the image sensor is switched from the first operation mode to the second operation mode, the image sensor may output a fourth image.
The method can greatly alleviate the problem of high image noise under low-illumination conditions and improve image quality under such conditions.
The technical solution provided by the embodiment of the present application is described in detail below based on fig. 2A.
1. Main technical characteristics
1. Control unit
1.1, a control unit: exposure parameters are generated, the exposure parameters including gain and exposure time.
1.2, a control unit: when the image sensor is in the first working mode, the exposure time comprises a first exposure time and a second exposure time.
1.3, a control unit: when the image sensor is in the first working mode, the gain comprises a first gain and a second gain.
1.4, a control unit: the first exposure time is less than the second exposure time.
1.5, a control unit: the first gain is greater than the second gain.
2. Image sensor
2.1, image sensor: and in the first working mode, generating a first image and a second image according to the exposure parameters generated by the control unit.
3. Weight calculation unit
3.1, weight calculation unit: and performing weight calculation according to at least one frame of image in the first image and the second image to obtain a weight map.
4. Processing unit
4.1, a processing unit: including at least synthetic processing.
4.2, a processing unit: the synthesizing process includes synthesizing the first image and the second image based on the weight map calculated by the weight calculating unit, and outputting a synthesized image (i.e., a third image).
2. Other technical features
1. Control unit
1.6, a control unit: calculating a first gain according to the first image, and calculating a second gain according to the first gain and the ratio of the second exposure time to the first exposure time;
or, calculating a second gain according to the second image, and calculating a first gain according to the second gain and the ratio of the second exposure time to the first exposure time;
or, calculating a second gain according to the third image, and calculating a first gain according to the second gain and the ratio of the second exposure time to the first exposure time;
or, calculating a first gain according to the first image, and calculating a second gain according to the second image.
1.7, a control unit: the first gain comprises a first analog gain and a first digital gain; the second gain includes a second analog gain and a second digital gain.
Illustratively, the first digital gain is greater than or equal to 0 and the second digital gain is greater than or equal to 0.
1.8, a control unit: the first analog gain is greater than or equal to the second analog gain.
2. Image sensor
2.2, image sensor: in the first working mode, the first image and the second image are generated and output alternately.
2.3, image sensor: and generating a first image sequence according to the first exposure time and the first gain, and generating a second image sequence according to the second exposure time and the second gain.
3. Weight calculation unit
3.2, a weight calculation unit: calculating a weight map according to the pixel value difference between the current frame image and the historical frame image;
3.3, weight calculation unit: obtaining a weight map according to the pixel value difference of the current frame first image and the historical frame first image;
or, obtaining a weight map according to the pixel value difference between the current frame second image and the historical frame second image;
or weighting the two weight maps to obtain a weight map;
or weighting the current frame first image and the current frame second image to obtain a current frame weighted image, weighting the historical frame first image and the historical frame second image to obtain a historical frame weighted image, and obtaining a weight map according to the pixel value difference value of the current frame weighted image and the historical frame weighted image.
3.4, weight calculation unit: the current frame image and the historical frame image are as close in time as possible.
3.5, weight calculating unit: the image used for determining the weight map at least comprises any one of the current frame first image and the current frame second image;
3.6, weight calculation unit: carrying out target detection according to the first image to obtain a weight map;
or, carrying out target detection according to the second image to obtain a weight map.
Or, the two weight maps are weighted to obtain the weight map.
4. Processing unit
4.3, a processing unit: the average brightness of the first image and the second image subjected to the combining process is the same.
4.4, a processing unit: and performing weighting processing on the first image and the second image according to the weight map to obtain a third image, and finally outputting the third image.
4.5, a processing unit: the target area of the third image preferably comes from the first image.
4.6, a processing unit: the target area is at least one of an object motion area in the image and a key target area in the image, such as a pedestrian, a vehicle, an animal, or a signal light.
The above features are explained below with reference to the embodiments.
Example one
The control unit determines an exposure parameter.
Illustratively, when the image sensor is in the first operating mode, the exposure parameters include a first exposure time and a second exposure time, a first gain and a second gain, the first exposure time being less than the second exposure time; the first gain is greater than the second gain.
When the image sensor is in a first working mode, outputting a first image and a second image; when the image sensor generates the first image and the second image, the exposure time is different and the gain is different.
Illustratively, a first image is generated and output according to a first exposure time and a first gain; and generating and outputting a second image according to the second exposure time and the second gain.
The processing unit processes the first image and the second image to obtain a third image, wherein the processing at least comprises synthesis processing.
Example two
The control unit controls the image sensor to switch between the first working mode and the second working mode according to the intensity of the ambient light.
Illustratively, when the ambient light is weak, the image sensor is controlled to switch to the first working mode; when the ambient light is strong, the image sensor is controlled to switch to the second working mode.
Outputting a first image and a second image when the image sensor is in a first working mode; when the image sensor generates the first image and the second image, the exposure time is different and the gain is different;
illustratively, a first image is generated and output according to a first exposure time and a first gain; and generating and outputting a second image according to the second exposure time and the second gain.
The processing unit processes the first image and the second image to obtain a third image.
And outputting a fourth image when the image sensor is in the second working mode.
The processing unit outputs the fourth image.
The respective units will be described below.
1. Control unit
The control unit functions to determine the exposure parameters.
Illustratively, the exposure parameters may include gain and exposure time.
For example, when the image sensor is in the first operating mode, the gain may include a first gain and a second gain, and the exposure time may include a first exposure time and a second exposure time.
Illustratively, the control unit controls the first exposure time to be less than the second exposure time.
Illustratively, the control unit calculates the first gain and the second gain through the gain calculation unit.
Illustratively, the gain calculating unit may calculate the first gain and the second gain according to one or more of the first image, the second image, and the third image, optionally together with the ratio of the second exposure time to the first exposure time.
Illustratively, the first gain is greater than the second gain.
Illustratively, the first gain and the second gain may include an analog gain and a digital gain, respectively; wherein the analog gain amplifies noise less than the digital gain.
It should be noted that, in existing wide-dynamic imaging schemes, in order to ensure a high dynamic range of the final image, the long- and short-exposure images are required to use the same gain, so the advantage of the analog gain is not fully utilized.
Possible embodiments of the present application that utilize this advantage include:
EXAMPLE III
Referring to FIG. 2B and FIG. 3A, the gain calculating unit calculates a first gain gain1 according to the luminance information of the first image, and calculates a second gain gain2 according to the ratio of the second exposure time exp_tim2 to the first exposure time exp_tim1 by the following formula:
gain2=gain1/(exp_tim2/exp_tim1)
example four
Referring to FIG. 2C and FIG. 3B, the gain calculating unit calculates a second gain gain2 according to the luminance information of the second image, and calculates a first gain gain1 according to the ratio of the second exposure time exp_tim2 to the first exposure time exp_tim1 by the following formulas:
gain1=gain2*(exp_tim2/exp_tim1)
gain1=max(gain1,max_sensor_gain)
where max_sensor_gain represents the maximum gain of the sensor, and max(x, y) is the maximum of x and y.
EXAMPLE five
Referring to FIG. 2D and FIG. 3C, the gain calculating unit calculates a second gain gain2 according to the luminance information of the synthesized image (i.e., the third image), and calculates a first gain gain1 according to the ratio of the second exposure time exp_tim2 to the first exposure time exp_tim1 by the following formulas:
gain1=gain2*(exp_tim2/exp_tim1)
gain1=max(gain1,max_sensor_gain)
where max_sensor_gain represents the maximum gain of the sensor, and max(x, y) is the maximum of x and y.
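A small sketch of the two directions of this gain calculation as used in examples three to five. The limiting step is written here as a cap at the sensor's maximum supported gain, which is an assumption about the intent of the max_sensor_gain formula above.

def gain2_from_gain1(gain1, exp_tim1, exp_tim2):
    # Example three: gain2 = gain1 / (exp_tim2 / exp_tim1)
    return gain1 / (exp_tim2 / exp_tim1)

def gain1_from_gain2(gain2, exp_tim1, exp_tim2, max_sensor_gain):
    # Examples four and five: gain1 = gain2 * (exp_tim2 / exp_tim1),
    # assumed to be capped at the sensor's maximum supported gain.
    gain1 = gain2 * (exp_tim2 / exp_tim1)
    return min(gain1, max_sensor_gain)

# Hypothetical values: exp_tim1 = 4 ms, exp_tim2 = 16 ms.
# gain2_from_gain1(16.0, 4.0, 16.0) -> 4.0
# gain1_from_gain2(4.0, 4.0, 16.0, 32.0) -> 16.0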
EXAMPLE six
Referring to FIG. 2E and FIG. 3D, the gain calculating unit calculates a first gain gain1 according to the luminance information of the first image, and calculates a second gain gain2 according to the luminance information of the second image.
2. Image sensor
The image sensor is used to generate a sequence of images.
For example, the image sensor may generate a first image according to a first exposure time and a first gain and generate a second image according to a second exposure time and a second gain when in the first operating mode.
The image sensor may generate a fourth image while in the second mode of operation.
Example seven
When the image sensor is in a first working mode, a first image and a second image are generated by a time-sharing exposure method.
For example, the image sensor may alternately output a first image and a second image, resulting in a first image sequence and a second image sequence, the first image sequence being composed of odd frame images generated by the image sensor, and the second image sequence being composed of even frame images generated by the image sensor.
Illustratively, the exposure time of the odd frame is equal to the first exposure time, and the gain is equal to the first gain; the exposure time of the even frame is equal to the second exposure time, and the gain is equal to the second gain.
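A minimal sketch of splitting the alternating output into the two sequences, assuming frames arrive as a Python list ordered by capture time and that frame counting starts at 1 (so odd frames come first); both assumptions are made only for this illustration.

def split_alternating_frames(frames):
    # Odd frames (first exposure time, first gain) form the first image sequence;
    # even frames (second exposure time, second gain) form the second image sequence.
    first_sequence = frames[0::2]   # frames 1, 3, 5, ...
    second_sequence = frames[1::2]  # frames 2, 4, 6, ...
    return first_sequence, second_sequence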
3. Weight calculation unit
The main role of the weight calculation unit is to obtain a weight map by using at least one of motion detection and object detection.
Illustratively, the weight map represents at least one of object motion information and position information of key targets such as pedestrians, vehicles, animals or signal lights.
Example eight
Referring to fig. 2G and fig. 4A, the weight calculating unit performs motion detection on the difference between at least two frames captured at different times in the first image sequence to obtain a weight map.
Illustratively, taking two frames as an example, as shown in fig. 4A, the current frame first image is a current time image in the first image sequence (i.e., a first image currently participating in image synthesis in the first image sequence), and the historical frame first image (which may be referred to as a fifth image) is an earlier time image in the first image sequence.
For example, as shown in fig. 4A, the first image and the fifth image may be subtracted by a subtraction unit to obtain a residual image, and then the residual image is subjected to mean filtering processing within a specified neighborhood size.
Illustratively, the size of the neighborhood is not limited in the embodiments of the present application.
The residual mean value of each pixel point is obtained through the method, and the residual mean value is converted into a weight graph through a thresholding means.
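A minimal Python/NumPy sketch of this subtraction, mean filtering and thresholding chain follows; the neighborhood size, the threshold and the binary weight values are illustrative assumptions, since the embodiment does not fix them.

import numpy as np
from scipy.ndimage import uniform_filter

def motion_weight_map(img_current, img_history, neighborhood=5, threshold=8.0, max_weight=255):
    # Subtract the current-frame image and the historical-frame image to obtain a residual image.
    residual = np.abs(img_current.astype(np.float32) - img_history.astype(np.float32))
    # Mean filtering within the specified neighborhood yields the residual mean of each pixel.
    residual_mean = uniform_filter(residual, size=neighborhood)
    # Thresholding converts the residual mean into a weight map: moving pixels get max_weight.
    return np.where(residual_mean > threshold, max_weight, 0).astype(np.uint8)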
Example nine
Referring to fig. 2H and fig. 4B, the weight calculating unit performs motion detection on the difference between at least two frames captured at different times in the second image sequence to obtain a weight map.
Illustratively, taking two frames as an example, as shown in fig. 4B, the current-frame second image is the current-time image in the second image sequence (i.e., the second image currently participating in image synthesis in the second image sequence), and the historical-frame second image (which may be referred to as a sixth image) is an earlier-time image in the second image sequence.
For example, as shown in fig. 4B, the second image and the sixth image may be subtracted by a subtraction unit to obtain a residual image, and then the residual image is subjected to mean filtering processing in a specified neighborhood size.
Illustratively, the size of the neighborhood is not limited in the embodiments of the present application.
The residual mean value of each pixel point is obtained through the method, and the residual mean value is converted into a weight graph through a thresholding means.
Example ten
Referring to fig. 2H and fig. 4C, the weight calculating unit performs motion detection on the difference between the first image and the second image to obtain a weight map.
For example, as shown in fig. 4C, the first image and the second image may be subtracted by a subtraction unit to obtain a residual image, and then the residual image is subjected to mean filtering processing within a specified neighborhood size.
Illustratively, the size of the neighborhood is not limited in the embodiments of the present application.
The residual mean value of each pixel point is obtained through the method, and the residual mean value is converted into a weight graph through a thresholding means.
The weight maps obtained in the above manners may also be fused, so that the final weight map (i.e., the fusion weight map) covers the total set of the object motion information contained in the weight maps involved in the fusion.
Example eleven
Embodiment eight and embodiment nine are combined: the weight maps determined in embodiment eight and embodiment nine are fused, and the total set of the object motion information from the two is used as the final weight map.
One combination is as follows:
referring to fig. 2H and 4D, on one hand, the weight calculating unit may subtract the first image and the fifth image by the subtracting unit to obtain a residual image, then perform mean filtering processing on the residual image in a specified neighborhood size, obtain a residual mean value of each pixel point by the above method, and convert the residual mean value into the weight map 1 by a thresholding means.
On the other hand, the weight calculation unit may perform subtraction on the second image and the sixth image through the subtraction unit to obtain a residual image, then perform mean filtering on the residual image in a specified neighborhood size, obtain a residual mean value of each pixel point through the above method, and convert the residual mean value into the weight map 2 through a thresholding means.
The weight calculation unit may fuse the weight map 1 and the weight map 2 to obtain a final weight map.
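The embodiments do not prescribe a specific fusion operator; a simple sketch that makes the final weight map cover the union of the detected motion regions is a per-pixel maximum, which is an assumption made only for illustration.

import numpy as np

def fuse_weight_maps(*weight_maps):
    # The per-pixel maximum keeps a pixel marked as moving if any input weight map marks it,
    # i.e. the fused map covers the total set of the object motion information.
    return np.maximum.reduce([np.asarray(w) for w in weight_maps])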
Example twelve
Embodiment eight and embodiment nine are combined: the weight maps determined in embodiment eight and embodiment nine are fused, and the total set of the object motion information from the two is used as the final weight map.
Another combination is as follows:
referring to fig. 2H and 4E, the weight calculating unit may perform weighting according to the first image and the second image to obtain a seventh image (i.e., the current frame weighted image), and perform weighting according to the fifth image and the sixth image to obtain an eighth image (i.e., the history frame weighted image).
The weight calculation unit may take the difference between the seventh image and the eighth image through the subtraction unit to obtain a residual image, then perform mean filtering on the residual image within a specified neighborhood size, obtain the residual mean value of each pixel point by the above method, and convert the residual mean value into the weight map by thresholding.
Example thirteen
Embodiment eight and embodiment ten are combined: the weight maps determined in embodiment eight and embodiment ten are fused, and the total set of the object motion information from the two is used as the final weight map.
Referring to fig. 2H and 4F, on one hand, the weight calculating unit may subtract the first image and the fifth image by the subtracting unit to obtain a residual image, then perform mean filtering processing on the residual image in a specified neighborhood size, obtain a residual mean value of each pixel point by the above method, and convert the residual mean value into the weight map 1 by a thresholding means.
On the other hand, the weight calculation unit may perform subtraction on the first image and the second image through the subtraction unit to obtain a residual image, then perform mean filtering on the residual image in a specified neighborhood size, obtain a residual mean value of each pixel point through the above method, and convert the residual mean value into the weight map 2 through a thresholding means.
The weight calculation unit may fuse the weight map 1 and the weight map 2 to obtain a final weight map.
Example fourteen
Embodiment nine and embodiment ten are combined: the weight maps determined in embodiment nine and embodiment ten are fused, and the total set of the object motion information from the two is used as the final weight map.
Referring to fig. 2H and 4G, on one hand, the weight calculating unit may subtract the second image and the sixth image by the subtracting unit to obtain a residual image, then perform mean filtering processing on the residual image in a specified neighborhood size, obtain a residual mean value of each pixel point by the above method, and convert the residual mean value into the weight map 1 by a thresholding means.
On the other hand, the weight calculation unit may perform subtraction on the first image and the second image through the subtraction unit to obtain a residual image, then perform mean filtering on the residual image in a specified neighborhood size, obtain a residual mean value of each pixel point through the above method, and convert the residual mean value into the weight map 2 through a thresholding means.
The weight calculation unit may fuse the weight map 1 and the weight map 2 to obtain a final weight map.
Example fifteen
Embodiments eight to ten are combined: the weight maps determined in embodiments eight to ten are fused, and the total set of the object motion information from the three is used as the final weight map.
Referring to fig. 2H and 4H, on one hand, the weight calculation unit may subtract the first image and the fifth image by the subtraction unit to obtain a residual image, then perform mean filtering on the residual image in a specified neighborhood size, obtain a residual mean value of each pixel point by the above method, and convert the residual mean value into the weight map 1 by thresholding.
On the other hand, the weight calculation unit may perform subtraction on the second image and the sixth image through the subtraction unit to obtain a residual image, then perform mean filtering on the residual image in a specified neighborhood size, obtain a residual mean value of each pixel point through the above method, and convert the residual mean value into the weight map 2 through a thresholding means.
Furthermore, the weight calculation unit may perform subtraction on the first image and the second image through a subtraction unit to obtain a residual image, then perform mean filtering on the residual image in a specified neighborhood size, obtain a residual mean value of each pixel point through the above method, and convert the residual mean value into the weight map 3 through a thresholding means.
The weight calculation unit may fuse the weight map 1, the weight map 2, and the weight map 3 to obtain a final weight map.
Example sixteen
For any one of embodiments eight to fifteen, the weight map may instead be estimated by a convolutional neural network (i.e., a motion detection model) in place of the subtraction unit and the residual accumulation unit, where the convolutional neural network is used to perform motion detection on the input images and obtain the weight map.
Taking the weight calculation shown in the eighth embodiment as an example, the first image and the fifth image may be input to a pre-trained motion detection model, and a weight map may be obtained through estimation by the pre-trained motion detection model, and a schematic diagram thereof may be as shown in fig. 5.
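The embodiment only requires a convolutional neural network trained for motion detection; the PyTorch sketch below uses assumed layer widths and a sigmoid output in [0, 1], and merely illustrates the input/output interface of such a motion detection model (two stacked input images in, one per-pixel weight map out).

import torch
import torch.nn as nn

class MotionDetectionNet(nn.Module):
    # Takes the current-frame image and the historical-frame image stacked along the
    # channel dimension and regresses a per-pixel weight map.
    def __init__(self, channels_per_image=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels_per_image, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, img_current, img_history):
        return self.net(torch.cat([img_current, img_history], dim=1))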
Example seventeen
Target detection is performed on at least one of the first image and the second image by using a pre-trained convolutional neural network model (i.e., a target detection model) that performs target detection on an input image and outputs a weight map; the obtained position information of the key target is used as the weight map.
Taking the target detection of the first image as an example, please refer to fig. 2F and fig. 6A, the first image may be input into a pre-trained target detection model, and the target detection model is used to perform target detection on the first image, so as to obtain the position information of the key target in the first image, which is used as the weight map.
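A minimal sketch of converting the detector output into a weight map follows; it assumes the pre-trained target detection model returns axis-aligned integer boxes (x1, y1, x2, y2) for the key targets, which is an assumption about the model's output format rather than part of the embodiment.

import numpy as np

def detection_weight_map(image_shape, boxes, weight=255):
    # Rasterize the key-target positions (pedestrians, vehicles, animals, signal lights)
    # into a weight map: pixels inside a detected box receive the maximum weight.
    weight_map = np.zeros(image_shape[:2], dtype=np.uint8)
    for x1, y1, x2, y2 in boxes:
        weight_map[y1:y2, x1:x2] = weight
    return weight_map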
Example eighteen
Target detection is performed on at least one of the first image and the second image by using a pre-trained convolutional neural network model (i.e., a target detection model) that performs target detection on an input image and outputs a weight map; the obtained position information of the key target is used as the weight map.
Taking the example of performing the target detection on the second image, please refer to fig. 2G and fig. 6B, the second image may be input into a pre-trained target detection model, and the target detection model is used to perform the target detection on the second image, so as to obtain the position information of the key target in the second image, which is used as the weight map.
Example nineteen
Target detection is performed on at least one of the first image and the second image by using a pre-trained convolutional neural network model (i.e., a target detection model) that performs target detection on an input image and outputs a weight map; the obtained target position information is used as the weight map.
Taking target detection on the first image and the second image as an example, please refer to fig. 2H and fig. 6C, the first image may be input into a pre-trained target detection model, and the target detection model is used to perform target detection on the first image, so as to obtain position information of a key target in the first image, which is used as the weight map 1; and inputting the second image into a pre-trained target detection model, and performing target detection on the second image by using the target detection model to obtain the position information of the key target in the second image as a weight map 2.
The weight calculation unit may fuse the weight map 1 and the weight map 2 to obtain a final weight map.
4. Processing unit
The processing unit is mainly used for synthesizing the first image sequence and the second image sequence according to the weight graph output by the weight calculation unit and outputting a synthesized image sequence.
Illustratively, the target region of the synthesized image sequence preferentially takes the first image sequence, while the non-target region is obtained by weighting the first image sequence and the second image sequence, so that the target region of the image is clear and free of smear and the signal-to-noise ratio of the non-target region is obviously improved.
For example, the target area may refer to a moving area of an object in the image, or may be key information in the image, such as a pedestrian, a vehicle, an animal, or a signal light.
For example, before the first image and the second image are subjected to the synthesis processing, the processing unit needs to ensure that the average brightness of the first image and the average brightness of the second image are the same.
Example twenty
When the average brightness of the first image is the same as the average brightness of the second image:
the processing unit synthesizes the first image and the second image according to the following formula:
img_fus=(img_1*alpha+img_2*(n-alpha))/n
where img_fus represents the synthesized image, img_1 represents the first image, img_2 represents the second image, alpha represents the weight map, and n represents the normalization weight.
Example twenty one
When the average brightness of the first image is different from the average brightness of the second image:
the processing unit adjusts the average brightness of the first image to be the same as the average brightness of the second image.
For example, the processing unit may multiply the first image by a ratio of the average luminance of the second image to the average luminance of the first image such that the average luminance of the first image is the same as the average luminance of the second image.
Then, the processing unit synthesizes the first image and the second image according to the following formula:
img_fus=(img_1*alpha+img_2*(n-alpha))/n
where img_fus represents the synthesized image, img_1 represents the first image, img_2 represents the second image, alpha represents the weight map, and n represents the normalization weight.
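Examples twenty and twenty-one can be illustrated together with a short Python/NumPy sketch; the function name and the default 8-bit-style value for n are assumptions, while the brightness matching step and the synthesis formula follow the description above.

import numpy as np

def synthesize(img_1, img_2, alpha, n=255.0):
    # alpha is the weight map with per-pixel values in [0, n].
    img_1 = img_1.astype(np.float32)
    img_2 = img_2.astype(np.float32)
    mean_1, mean_2 = img_1.mean(), img_2.mean()
    if not np.isclose(mean_1, mean_2):
        # Example twenty-one: scale the first image so its average brightness matches the second.
        img_1 = img_1 * (mean_2 / mean_1)
    # Example twenty: weighted synthesis with the weight map alpha and normalization weight n.
    return (img_1 * alpha + img_2 * (n - alpha)) / n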
It should be noted that the foregoing embodiments are merely specific examples of implementations of the embodiments of the present application, and do not limit the scope of the present application, and based on the foregoing embodiments, new embodiments may be obtained through combination between the embodiments or modification of the embodiments, which all belong to the scope of the present application.
The methods provided herein are described above. The following describes the apparatus and image processing system provided in the present application:
referring to fig. 7, a schematic structural diagram of an imaging system according to an embodiment of the present disclosure is shown in fig. 7, where the imaging system may include: a control unit 710, an image sensor 720, and a processing unit 730; wherein:
a control unit 710 for determining exposure parameters, the exposure parameters including an exposure time and a gain;
an image sensor 720, configured to output an image in a first operation mode, where the output image includes a first image and a second image, and the image sensor generates the first image and the second image with different exposure times and different gains;
a processing unit 730, configured to use the first image and the second image for processing to obtain a third image, where the processing at least includes synthesis processing;
the image sensor 720 is further configured to output a fourth image when the first operation mode is switched to the second operation mode.
In some embodiments, the exposure time for the image sensor 720 to generate the first image is a first exposure time, the exposure time for the image sensor to generate the second image is a second exposure time, and the first image exposure time is less than the second image exposure time;
the gain of the image sensor 720 when generating the first image is a first gain, and the gain of the image sensor when generating the second image is a second gain, and the first gain is greater than the second gain.
In some embodiments, the ratio of the first exposure time to the second exposure time is a first ratio, the ratio of the second gain to the first gain is a second ratio, and the first ratio is equal to the second ratio.
In some embodiments, the ratio of the first exposure time to the second exposure time is a first ratio, the ratio of the second gain to the first gain is a second ratio, and the difference between the first ratio and the second ratio is less than 10%.
In some embodiments, the control unit 710 is further configured to control the image sensor to switch between the first operating mode and the second operating mode based on the intensity of the ambient light.
In some embodiments, the processing unit 730 uses the first image and the second image for processing to obtain a third image, including:
according to the configuration weight of each pixel position in the weight map, weighting processing is carried out on each pixel value of the first image and the second image to obtain a third image; the weight map is determined according to the first image sequence or the second image sequence; the first image sequence comprises at least one frame of a first image, the second image sequence comprises at least one frame of a second image, and the first image sequence and the second image sequence are determined according to an output image of the image sensor in a first working mode; the configured weight of any pixel location is used to determine the weighted weight of the first image and the second image at that pixel location.
In some embodiments, the processing unit 730 is further configured to determine a weight map according to the first image sequence, including:
the first image sequence comprises a current frame first image and a historical frame first image, and the weight map is determined according to the difference of pixel values of the current frame first image and the historical frame first image.
In some embodiments, the processing unit 730 is configured to determine a weight map according to the second image sequence, and includes:
the second image sequence comprises a current frame second image and a historical frame second image, and the weight map is determined according to the pixel value difference of the current frame second image and the historical frame second image.
In some embodiments, the processing unit 730 determines the weight map according to the pixel value difference, including:
filtering by using the pixel value difference to obtain a weight map;
and/or sending the pixel value difference into a pre-trained convolutional neural network to obtain a weight map, wherein the pre-trained convolutional neural network is used for carrying out motion detection on the input pixel value difference and obtaining the weight map.
In some embodiments, the processing unit 730 determines the weight map according to the first image sequence or the second image sequence, including:
the first image sequence is a first image, and the weight map is determined according to the position information of the key target in the first image;
and/or the second image sequence is a second image, and the weight map is determined according to the position information of the key target in the second image.
In some embodiments, the processing unit 730 determines the weight map according to the location information of the key target, including:
and obtaining a weight map by utilizing a pre-trained convolutional neural network, wherein the pre-trained convolutional neural network is used for carrying out target detection on an input image and obtaining the weight map.
In some embodiments, the processing unit 730 uses the first image and the second image for processing to obtain a third image, including:
when the average luminance of the first image is different from the average luminance of the second image, the average luminance of the first image is adjusted to be the same as the average luminance of the second image before the combining process is performed.
In some embodiments, the processing unit 730 is specifically configured to multiply the first image by a ratio of the average luminance of the second image to the average luminance of the first image, so that the average luminance of the first image is the same as the average luminance of the second image.
In some embodiments, the control unit 710 determines the first and second gains by:
determining the first gain according to the brightness information of the first image;
and determining the second gain according to the first gain and the ratio of the second exposure time to the first exposure time.
Referring to fig. 8, a schematic structural diagram of an imaging device provided in an embodiment of the present application is shown in fig. 8, where the imaging device may include:
an obtaining unit 810, configured to obtain an output image of a single image sensor in a first operating mode, where the output image includes a first image and a second image, and when the image sensor generates the first image and the second image, exposure time is different and gain is different;
a processing unit 820, configured to use the first image and the second image for processing to obtain a third image, where the processing at least includes a synthesis processing;
the obtaining unit 810 is further configured to obtain an output image of the image sensor in the second operating mode when the image sensor is switched from the first operating mode to the second operating mode, where the output image is a fourth image.
In some embodiments, the exposure time for the image sensor to generate the first image is a first exposure time, the exposure time for the image sensor to generate the second image is a second exposure time, and the first image exposure time is less than the second image exposure time;
the gain when the image sensor generates the first image is a first gain, the gain when the image sensor generates the second image is a second gain, and the first gain is larger than the second gain.
In some embodiments, the ratio of the first exposure time to the second exposure time is a first ratio, the ratio of the second gain to the first gain is a second ratio, and the first ratio is equal to the second ratio.
In some embodiments, the ratio of the first exposure time to the second exposure time is a first ratio, the ratio of the second gain to the first gain is a second ratio, and the difference between the first ratio and the second ratio is less than 10%.
In some embodiments, as shown in fig. 9, the imaging device further comprises:
a control unit 830, configured to control the image sensor to switch between the first operating mode and the second operating mode based on the intensity of the ambient light.
In some embodiments, the processing unit 820 uses the first image and the second image for processing to obtain a third image, including:
according to the configuration weight of each pixel position in the weight map, weighting processing is carried out on each pixel value of the first image and the second image to obtain a third image; the weight map is determined according to the first image sequence or the second image sequence; the first image sequence comprises at least one frame of a first image, the second image sequence comprises at least one frame of a second image, the first image sequence and the second image sequence are determined according to output images of the single image sensor in a first working mode; the configuration weight of any pixel position is used for determining the weighted weight of the first image and the second image at the pixel position.
In some embodiments, the processing unit 820 determines the weight map from the first image sequence, including:
the first image sequence comprises a current frame first image and a historical frame first image, and the weight map is determined according to the difference of pixel values of the current frame first image and the historical frame first image.
In some embodiments, the processing unit 820 determines the weight map according to the second image sequence, including:
the second image sequence comprises a current frame second image and a historical frame second image, and the weight map is determined according to the difference of pixel values of the current frame second image and the historical frame second image.
In some embodiments, the processing unit 820 determines the weight map according to the pixel value difference, including:
filtering by using the pixel value difference to obtain a weight map;
and/or sending the pixel value difference into a pre-trained convolutional neural network to obtain a weight map, wherein the pre-trained convolutional neural network is used for carrying out motion detection on the input pixel value difference and obtaining the weight map.
In some embodiments, the processing unit 820 determines the weight map according to the first image sequence or the second image sequence, including:
the first image sequence is a first image, and the weight map is determined according to the position information of the key target in the first image;
and/or the second image sequence is a second image, and the weight map is determined according to the position information of the key target in the second image.
In some embodiments, the processing unit 820 determines the weight map according to the position information of the key target, including:
and obtaining a weight map by utilizing a pre-trained convolutional neural network, wherein the pre-trained convolutional neural network is used for carrying out target detection on an input image and obtaining the weight map.
In some embodiments, the processing unit 820 uses the first image and the second image for processing to obtain a third image, including:
when the average luminance of the first image is different from the average luminance of the second image, the average luminance of the first image is adjusted to be the same as the average luminance of the second image before the combining processing is performed.
In some embodiments, the processing unit 820 is specifically configured to multiply the first image by a ratio of the average luminance of the second image to the average luminance of the first image, so that the average luminance of the first image is the same as the average luminance of the second image.
In some embodiments, the first gain and the second gain are determined by:
determining the first gain according to the brightness information of the first image;
and determining the second gain according to the first gain and the ratio of the second exposure time to the first exposure time.
Please refer to fig. 10, which is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present disclosure. The electronic device may include a processor 1001, a memory 1002 having stored thereon machine-executable instructions. The processor 1001 and the memory 1002 may communicate via a system bus 1003. Also, by reading and executing the machine-executable instructions in the memory 1002, the processor 1001 may perform the imaging methods described above.
The memory 1002 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk, a DVD, etc.), or a similar storage medium, or a combination thereof.
In some embodiments, there is also provided a machine-readable storage medium, such as the memory 1002 in fig. 10, having stored therein machine-executable instructions that, when executed by a processor, implement the imaging method described above. For example, the machine-readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and so forth.
Referring to fig. 11, a schematic structural diagram of a camera provided in an embodiment of the present application is shown in fig. 11, where the camera may include: lens 1110, image sensor 1120, and processor 1130; wherein:
the lens 1110 is configured to process incident light into a light signal incident on the image sensor;
the image sensor 1120 is configured to output an image according to the optical signal in a first operating mode, where the output image includes a first image and a second image, and when the image sensor generates the first image and the second image, the exposure time is different and the gain is different;
the processor 1130 is configured to perform synthesis processing on the first image and the second image to obtain a third image;
the image sensor 1120 is further configured to output a fourth image when the first operating mode is switched to the second operating mode.
In some embodiments, the exposure time when the image sensor generates the first image is a first exposure time, the exposure time when the image sensor generates the second image is a second exposure time, and the first image exposure time is less than the second image exposure time;
the gain when the image sensor generates the first image is a first gain, the gain when the image sensor generates the second image is a second gain, and the first gain is larger than the second gain.
In some embodiments, the ratio of the first exposure time to the second exposure time is a first ratio, the ratio of the second gain to the first gain is a second ratio, and the first ratio is equal to the second ratio.
In some embodiments, the ratio of the first exposure time to the second exposure time is a first ratio, the ratio of the second gain to the first gain is a second ratio, and the difference between the first ratio and the second ratio is less than 10%.
In some embodiments, the processor 1130 is further configured to control the image sensor to switch between the first operation mode and the second operation mode based on the intensity of the ambient light.
In some embodiments, the processor 1130 uses the first image and the second image for processing to obtain a third image, including:
according to the configuration weight of each pixel position in the weight map, weighting processing is carried out on each pixel value of the first image and the second image to obtain a third image; the weight map is determined according to the first image sequence or the second image sequence; the first image sequence comprises at least one frame of a first image, the second image sequence comprises at least one frame of a second image, and the first image sequence and the second image sequence are determined according to an output image of the image sensor in a first working mode; the configured weight of any pixel location is used to determine the weighted weight of the first image and the second image at that pixel location.
In some embodiments, the processor 1130 is configured to determine a weight map from the first sequence of images, including:
the first image sequence comprises a current frame first image and a historical frame first image, and the weight map is determined according to the difference of pixel values of the current frame first image and the historical frame first image.
In some embodiments, the processor 1130, configured to determine the weight map according to the second image sequence, includes:
the second image sequence comprises a current frame second image and a historical frame second image, and the weight map is determined according to the difference of pixel values of the current frame second image and the historical frame second image.
In some embodiments, the processor 1130 determines the weight map according to the pixel value difference, including:
filtering by using the pixel value difference to obtain a weight map;
and/or sending the pixel value difference into a pre-trained convolutional neural network to obtain a weight map, wherein the pre-trained convolutional neural network is used for carrying out motion detection on the input pixel value difference and obtaining the weight map.
In some embodiments, the processor 1130 determines the weight map from the first image sequence or the second image sequence, including:
the first image sequence is a first image, and the weight map is determined according to the position information of the key target in the first image;
and/or the second image sequence is a second image, and the weight map is determined according to the position information of the key target in the second image.
In some embodiments, the processor 1130 determines the weight map according to the location information of the key objects, including:
and obtaining a weight map by utilizing a pre-trained convolutional neural network, wherein the pre-trained convolutional neural network is used for carrying out target detection on an input image and obtaining the weight map.
In some embodiments, the processor 1130 uses the first image and the second image for processing to obtain a third image, including:
when the average luminance of the first image is different from the average luminance of the second image, the average luminance of the first image is adjusted to be the same as the average luminance of the second image before the combining process is performed.
In some embodiments, the processor 1130 is specifically configured to multiply the first image by the ratio of the average luminance of the second image to the average luminance of the first image to achieve that the average luminance of the first image is the same as the average luminance of the second image.
In some embodiments, the processor 1130 determines the first and second gains by:
determining the first gain according to the brightness information of the first image;
and determining the second gain according to the first gain and the ratio of the second exposure time to the first exposure time.
It should be noted that the embodiments of the camera, the imaging system, the imaging apparatus, and the imaging method may be referred to each other, and the same steps are not described in detail.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in a process, method, article, or apparatus that comprises the element.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (20)

1. An imaging method, comprising:
acquiring an output image of a single image sensor in a first working mode, wherein the output image comprises a first image and a second image, and when the image sensor generates the first image and the second image, the exposure time is different and the gain is different; wherein,
the first image and the second image are used for processing to obtain a third image, and the processing at least comprises synthesis processing;
when the image sensor is switched from the first working mode to the second working mode, acquiring an output image of the image sensor in the second working mode, wherein the output image is a fourth image.
2. The method of claim 1, wherein the exposure time for the image sensor to generate the first image is a first exposure time, the exposure time for the image sensor to generate the second image is a second exposure time, and the first image exposure time is less than the second image exposure time;
the gain when the image sensor generates the first image is a first gain, the gain when the image sensor generates the second image is a second gain, and the first gain is larger than the second gain.
3. The method of claim 2, wherein the ratio of the first exposure time to the second exposure time is a first ratio, the ratio of the second gain to the first gain is a second ratio, and the first ratio is equal to the second ratio.
4. The method of claim 2, wherein the ratio of the first exposure time to the second exposure time is a first ratio, the ratio of the second gain to the first gain is a second ratio, and the difference between the first ratio and the second ratio is less than 10%.
5. The method of claim 1, wherein switching between the first mode of operation and the second mode of operation is based on ambient light levels.
6. The method of claim 1, wherein the first image and the second image are used for processing to obtain a third image, comprising:
according to the configuration weight of each pixel position in the weight map, weighting processing is carried out on each pixel value of the first image and the second image to obtain a third image; the weight map is determined according to the first image sequence or the second image sequence; the first image sequence comprises at least one frame of a first image, the second image sequence comprises at least one frame of a second image, and the first image sequence and the second image sequence are determined according to an output image of the image sensor in a first working mode; the configured weight of any pixel location is used to determine the weighted weight of the first image and the second image at that pixel location.
7. The method of claim 6, wherein determining the weight map from the first sequence of images comprises:
the first image sequence comprises a current frame first image and a historical frame first image, and the weight map is determined according to the difference of pixel values of the current frame first image and the historical frame first image.
8. The method of claim 6, wherein determining the weight map from the second sequence of images comprises:
the second image sequence comprises a current frame second image and a historical frame second image, and the weight map is determined according to the difference of pixel values of the current frame second image and the historical frame second image.
9. The method according to claim 7 or 8, wherein determining the weight map from pixel value differences comprises:
filtering by using the pixel value difference to obtain a weight map;
and/or sending the pixel value difference into a pre-trained convolutional neural network to obtain a weight map, wherein the pre-trained convolutional neural network is used for carrying out motion detection on the input pixel value difference and obtaining the weight map.
10. The method according to claim 6, wherein determining the weight map from the first image sequence and/or the second image sequence comprises:
the first image sequence is a first image, and the weight map is determined according to the position information of the key target in the first image;
and/or the second image sequence is a second image, and the weight map is determined according to the position information of the key target in the second image.
11. The method of claim 10, wherein determining the weight map based on location information of key objects comprises:
and obtaining a weight map by utilizing a pre-trained convolutional neural network, wherein the pre-trained convolutional neural network is used for carrying out target detection on an input image and obtaining the weight map.
12. The method of claim 1, wherein the first image and the second image are used for processing to obtain a third image, comprising:
when the average luminance of the first image is different from the average luminance of the second image, the average luminance of the first image is adjusted to be the same as the average luminance of the second image before the combining processing is performed.
13. The method of claim 12, wherein the average luminance of the first image is made the same as the average luminance of the second image by multiplying the first image by the ratio of the average luminance of the second image to the average luminance of the first image.
14. The method of claim 2, wherein the first gain and the second gain are determined by:
determining the first gain according to the brightness information of the first image;
and determining the second gain according to the first gain and the ratio of the second exposure time to the first exposure time.
15. An imaging system, comprising: a control unit, an image sensor and a processing unit; wherein:
a control unit for determining exposure parameters, the exposure parameters including an exposure time and a gain;
the image sensor is used for outputting an image in a first working mode, the output image comprises a first image and a second image, and when the image sensor generates the first image and the second image, the exposure time is different and the gain is different;
the processing unit is used for processing the first image and the second image to obtain a third image, and the processing at least comprises synthesis processing;
the image sensor is further used for outputting a fourth image when the first working mode is switched to the second working mode.
16. The imaging system of claim 15, wherein the exposure time for the image sensor to generate the first image is a first exposure time and the exposure time for the image sensor to generate the second image is a second exposure time, the first image exposure time being less than the second image exposure time;
the gain when the image sensor generates the first image is a first gain, the gain when the image sensor generates the second image is a second gain, and the first gain is larger than the second gain.
17. The imaging system of claim 16, wherein the ratio of the first exposure time to the second exposure time is a first ratio, the ratio of the second gain to the first gain is a second ratio, and the first ratio is equal to the second ratio.
18. The imaging system of claim 16, wherein the ratio of the first exposure time to the second exposure time is a first ratio, the ratio of the second gain to the first gain is a second ratio, and the difference between the first ratio and the second ratio is less than 10%.
19. The imaging system of claim 18,
the control unit is further configured to control the image sensor to switch between the first operating mode and the second operating mode based on the intensity of the ambient light.
20. A camera, comprising: a lens, an image sensor and a processor; wherein:
the lens is used for processing incident light into a light signal incident to the image sensor;
the image sensor is used for outputting an image according to the optical signal in a first working mode, the output image comprises a first image and a second image, and when the image sensor generates the first image and the second image, the exposure time is different and the gain is different;
the processor is configured to use the first image and the second image for processing to obtain a third image, where the processing at least includes synthesis processing;
the image sensor is further used for outputting a fourth image when the first working mode is switched to the second working mode.
CN202110502434.1A 2021-05-08 2021-05-08 Imaging method, imaging system and camera Active CN115314629B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110502434.1A CN115314629B (en) 2021-05-08 2021-05-08 Imaging method, imaging system and camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110502434.1A CN115314629B (en) 2021-05-08 2021-05-08 Imaging method, imaging system and camera

Publications (2)

Publication Number Publication Date
CN115314629A true CN115314629A (en) 2022-11-08
CN115314629B CN115314629B (en) 2024-03-01

Family

ID=83854484

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110502434.1A Active CN115314629B (en) 2021-05-08 2021-05-08 Imaging method, imaging system and camera

Country Status (1)

Country Link
CN (1) CN115314629B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4127411B1 (en) * 2007-04-13 2008-07-30 キヤノン株式会社 Image processing apparatus and method
CN101729789A (en) * 2008-10-21 2010-06-09 索尼株式会社 Imaging apparatus, imaging method and program
JP2013026722A (en) * 2011-07-19 2013-02-04 Toshiba Corp Image processing apparatus
US20140307117A1 (en) * 2013-04-15 2014-10-16 Htc Corporation Automatic exposure control for sequential images
CN107197167A (en) * 2016-03-14 2017-09-22 杭州海康威视数字技术股份有限公司 A kind of method and device for obtaining image
CN108200354A (en) * 2018-03-06 2018-06-22 广东欧珀移动通信有限公司 Control method and device, imaging device, computer equipment and readable storage medium storing program for executing
CN108322669A (en) * 2018-03-06 2018-07-24 广东欧珀移动通信有限公司 The acquisition methods and device of image, imaging device, computer readable storage medium and computer equipment
CN109951646A (en) * 2017-12-20 2019-06-28 杭州海康威视数字技术股份有限公司 Image interfusion method, device, electronic equipment and computer readable storage medium
CN110149484A (en) * 2019-04-15 2019-08-20 浙江大华技术股份有限公司 Image composition method, device and storage device
CN110493494A (en) * 2019-05-31 2019-11-22 杭州海康威视数字技术股份有限公司 Image fusion device and image interfusion method
CN110557572A (en) * 2018-05-31 2019-12-10 杭州海康威视数字技术股份有限公司 image processing method and device and convolutional neural network system
WO2020038069A1 (en) * 2018-08-22 2020-02-27 Oppo广东移动通信有限公司 Exposure control method and device, and electronic apparatus
US20200213501A1 (en) * 2018-12-26 2020-07-02 Himax Imaging Limited Automatic exposure imaging system and method
CN112422784A (en) * 2020-10-12 2021-02-26 浙江大华技术股份有限公司 Imaging method, imaging apparatus, electronic apparatus, and storage medium

Also Published As

Publication number Publication date
CN115314629B (en) 2024-03-01

Similar Documents

Publication Publication Date Title
KR101926490B1 (en) Apparatus and method for processing image
JP5832855B2 (en) Image processing apparatus, imaging apparatus, and image processing program
CN110602467B (en) Image noise reduction method and device, storage medium and electronic equipment
JP4986747B2 (en) Imaging apparatus and imaging method
CN110213502B (en) Image processing method, image processing device, storage medium and electronic equipment
KR102170695B1 (en) Image processing apparatus and image processing method
JP2010016743A (en) Distance measuring apparatus, distance measuring method, distance measuring program, or imaging device
JP2011178301A (en) Obstacle detection device, obstacle detection system including the same, and obstacle detection method
JP6083974B2 (en) Image processing apparatus, image processing method, and program
JP6740866B2 (en) Image output device
JP2019517215A (en) Image data processing for multiple exposure wide dynamic range image data
CN111246093B (en) Image processing method, image processing device, storage medium and electronic equipment
JP2006237851A (en) Image input apparatus
CN110581957B (en) Image processing method, image processing device, storage medium and electronic equipment
CN113994654A (en) Sensor device and signal processing method
JP6600216B2 (en) Image processing apparatus, image processing method, program, and storage medium
US20100021008A1 (en) System and Method for Face Tracking
CN113784014A (en) Image processing method, device and equipment
JP4775230B2 (en) Image processing apparatus, imaging apparatus, and image processing program
CN112911160B (en) Image shooting method, device, equipment and storage medium
CN115314629A (en) Imaging method, system and camera
CN115314627A (en) Image processing method, system and camera
CN115314628A (en) Imaging method, system and camera
JP2014098859A (en) Imaging apparatus and imaging method
CN112750087A (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant