WO2020133193A1 - HDR image imaging method, device and system - Google Patents


Info

Publication number
WO2020133193A1
Authority
WO
WIPO (PCT)
Prior art keywords
photosensitive
thickness
film array
hdr image
block
Prior art date
Application number
PCT/CN2018/124796
Other languages
English (en)
French (fr)
Inventor
王星泽
赖嘉炜
舒远
Original Assignee
合刃科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 合刃科技(深圳)有限公司
Priority to CN201880071467.1A
Priority to PCT/CN2018/124796
Publication of WO2020133193A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/50 Control of the SSIS exposure
    • H04N 25/57 Control of the dynamic range
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/50 Control of the SSIS exposure
    • H04N 25/57 Control of the dynamic range
    • H04N 25/58 Control of the dynamic range involving two or more exposures

Definitions

  • the invention relates to the field of image processing, in particular to an HDR image imaging method, device and system.
  • HDR (English: High Dynamic Range, Chinese: high dynamic range)
  • HDR imaging, in computer graphics and film photography, is used to achieve a larger exposure dynamic range (that is, a larger difference between light and dark) than ordinary digital imaging technology.
  • This technology can provide more dynamic range and image detail than ordinary images.
  • HDR imaging technology has been widely used in photography, security monitoring, image display and other fields due to its ability to highly restore the wide radiance range in the real world.
  • HDR imaging technology captures, synchronously or asynchronously, images over different brightness dynamic ranges under different exposure conditions, enhancing the ability to capture detailed information, and then uses image fusion technology to synthesize an HDR image.
  • Current HDR imaging technology usually uses a single image recording device that, by changing the exposure time, sequentially records LDR images under different exposure conditions and then combines the multiple LDR images into an HDR image.
  • This asynchronous approach requires time-sharing to capture multiple images, so additional exposure time is needed, which makes the HDR imaging process very slow and lacking in real-time performance.
  • an HDR imaging system for multi-camera shooting has been developed in the prior art, which uses a stereo system of multiple cameras to simultaneously shoot the same scene.
  • The disadvantage of this method is that calibration between the multiple camera positions is extremely complicated: the parallax between the different camera systems must be computed before HDR synthesis to ensure that the scene images taken from the different camera positions coincide. This greatly increases the amount of computation, so execution efficiency is low.
  • An HDR image imaging system including:
  • an image sensor, where the image sensor includes a photosensitive surface, the photosensitive surface is divided into at least two photosensitive blocks arranged in an array, each photosensitive block includes at least two photosensitive points, and each photosensitive point corresponds to at least one pixel;
  • an optical thin film array disposed on the photosensitive surface, where the thickness of the optical thin film array determines its light transmittance, the thicknesses of the optical thin film array at different photosensitive points within the same photosensitive block are different, and
  • the thickness value spaces of the optical thin film array at different photosensitive blocks are identical.
  • the HDR image imaging system further includes:
  • a processor connected to the image sensor, used to generate at least two material images from the light signals collected by the image sensor at the photosensitive points under the same optical thin film array thickness in each photosensitive block, and to obtain an HDR image by fusing the at least two material images.
  • In one embodiment, the thickness value spaces of the optical film array at different photosensitive blocks being consistent means that:
  • the thickness of the optical film array at the same relative position in different photosensitive blocks is the same.
  • In that case, the processor is used to generate at least two material images from the light signals collected by the image sensor at the same relative position in each photosensitive block, and the HDR image is obtained by fusing the at least two material images.
  • the thickness distribution of the optical film array at different positions of the photosensitive points in the same photosensitive block is a non-monotonic arrangement or a staggered arrangement.
  • the manner in which the optical thin film array is disposed on the surface of the image sensor includes at least one of deposition, patterning, or etching processes.
  • the distribution of the light transmittance corresponding to the thickness of the optical film array at different positions of the photosensitive points in the same photosensitive block exhibits a nonlinear attenuation change.
  • the distribution of the light transmittance corresponding to the thickness of the optical film array at different positions of the photosensitive points in the same photosensitive block is attenuated in an equal ratio.
  • An HDR image imaging method based on the processor in the aforementioned HDR image imaging system is also provided.
  • An HDR image imaging device including:
  • a signal collection module configured to obtain the optical signal collected by the image sensor at the photosensitive point in the photosensitive block, and generate a pixel value corresponding to the photosensitive point according to the optical signal;
  • the material image extraction module is used to extract pixel values corresponding to the photosensitive points at the same thickness of the optical film array in each photosensitive block to generate at least two material images, and one optical film array thickness corresponds to one material image;
  • An image fusion module is used to fuse the at least two material images to obtain an HDR image.
  • Figure 1 is a schematic diagram of HDR image imaging
  • FIG. 2 is a schematic diagram of the principle of generating HDR images by multiple exposures of a single camera in the conventional technology
  • FIG. 3 is a schematic diagram of the principle that multiple cameras simultaneously expose at different angles to generate HDR images in the conventional technology
  • FIG. 4 is a schematic diagram of the structure of an image sensor and an optical thin film array of an HDR image imaging system in an embodiment
  • FIG. 5 is a schematic diagram of dividing a photosensitive block and a photosensitive point on a photosensitive surface of an image sensor of an HDR image imaging system in an embodiment
  • FIG. 6 is a schematic diagram of a CMOS image sensor in conventional technology
  • FIG. 7 is a schematic diagram of a CMOS image sensor provided with an optical film array on the photosensitive surface of an embodiment
  • FIG. 8 is a schematic diagram of the thickness distribution of the optical film array corresponding to the photosensitive points in different photosensitive blocks in an embodiment
  • FIG. 9 is a schematic diagram of the thickness distribution of the optical film array corresponding to the photosensitive points in different photosensitive blocks in another embodiment
  • FIG. 10 is a schematic diagram of a process of generating an HDR image by an HDR image imaging system in an embodiment
  • FIG. 11 is a schematic diagram of a non-monotonic or staggered thickness distribution of the optical film array corresponding to the photosensitive points in the same photosensitive block in one embodiment;
  • FIG. 12 is a flowchart of an HDR image imaging method based on an HDR image imaging system in an embodiment
  • FIG. 13 is a schematic diagram of an HDR image imaging device based on an HDR image imaging system in an embodiment
  • FIG. 14 is a schematic diagram of the composition of a computer system running the aforementioned HDR image imaging method in one embodiment.
  • The pictures taken by ordinary cameras have a limited dynamic range, usually only 256 brightness levels, so they are usually LDR (English: Low Dynamic Range, Chinese: low dynamic range) images.
  • For example, a low-exposure LDR image 1 is obtained by an ordinary camera shooting a scene in a dark environment, a normal-exposure LDR image 2 is obtained shooting the same scene under a normal lighting environment, and an over-exposed LDR image 3 is obtained shooting the same scene under a strong light environment.
  • the LDR image 1 preserves the details of the scene under a high-brightness background (the sky and other bright places in the figure).
  • LDR image 3 better preserves scene details on low-brightness backgrounds (dark and shaded areas in the figure), while LDR image 2 is relatively balanced: its details on high-brightness and low-brightness backgrounds are not as good as those of LDR image 1 and LDR image 3, respectively.
  • The HDR imaging principle in the conventional technology is to combine the above-mentioned LDR1, LDR2, and LDR3, which have different exposure levels, through image merging.
  • From Figure 1 it can be seen that the merged HDR1 image uses more of LDR1's details in the high-brightness background and LDR3's details in the low-brightness background, so that the resulting HDR image has an expanded dynamic range and displays the details of each part of the image at appropriate brightness, clearly showing the details of the shooting scene.
  • the HDR image imaging method of a single camera in the conventional technology can be referred to FIG. 2.
  • a single camera is used to generate LDR images of different brightness ranges by controlling the exposure time, and then combined.
  • the single camera is used to aim at the same shooting scene, and the direction of the camera is not moved during the shooting to ensure that the scenes shot by multiple exposures are the same scene.
  • With a short exposure time t1, the low-exposure LDR1 with low overall brightness is collected; then, with a medium exposure time t2, the normal-exposure LDR2 with medium overall brightness is collected; then, with a long exposure time t3, the over-exposed LDR3 with higher overall brightness is collected; finally, an HDR image is obtained by synthesizing LDR1, LDR2, and LDR3.
  • the conventional multi-camera HDR image imaging method can refer to FIG. 3.
  • Lens 1 acquires a low-exposure LDR1 with low overall brightness through a short exposure time t1, while lens 2, through a longer exposure time t2, collects a high-exposure LDR2 with higher overall brightness.
  • The overall exposure time is the larger of the two values, i.e. exposure time t2; compared with the single-camera HDR imaging method, this takes less time and images faster.
  • The present invention specifically proposes an HDR image imaging system, and an HDR image imaging method and device based on that system.
  • the HDR image imaging system includes:
  • The image sensor 10 includes a photosensitive surface, the photosensitive surface is divided into at least two photosensitive blocks arranged in an array, each photosensitive block includes at least two photosensitive points, and each photosensitive point corresponds to at least one pixel.
  • An image sensor, or photosensitive element, is a device that uses the photoelectric conversion function of a photoelectric device to convert the optical signal on the photosensitive surface into an electrical signal proportional to the optical signal.
  • Image sensors mainly include CCD (English: Charge-coupled Device, Chinese: charge-coupled device, a detection element that expresses the size of the signal with the amount of charge, and transmits the signal by coupling) and CMOS (English: Complementary Metal Oxide Semiconductor, Chinese: complementary Metal oxide semiconductor, a photosensitive element).
  • The image sensor 10 generally has a flat-plate structure and includes a photosensitive surface on which a plurality of pixels are arranged in order; each pixel converts the collected optical signal into an electrical signal, and the electrical signal is encoded into a sampling point of an image pixel.
  • a plurality of pixels are arranged in order on the photosensitive surface of the image sensor 10, and these pixels are divided into several photosensitive blocks according to the position, and each photosensitive block is divided into several photosensitive points.
  • For example, the photosensitive surface of the image sensor 10 is divided into a rows and b columns, a total of a×b photosensitive blocks, and each photosensitive block, as shown in FIG. 5, contains 3×3 photosensitive points.
  • If each photosensitive point corresponds to n pixels, the resolution of the image sensor 10 is a×b×3×3×n.
  • For example, if the resolution of the image sensor 10 is about 20 million pixels, it can be divided into 1920×1080 photosensitive blocks, each photosensitive block containing 9 photosensitive points, with each photosensitive point corresponding to 1 pixel.
  • the division of the photosensitive block and the photosensitive point is a logical division of the pixel on the photosensitive surface of the image sensor 10, and the specific position is based on the position of the pixel on the photosensitive surface.
  • As shown in FIG. 5, the set of pixels at the upper-left corner of the photosensitive surface of the image sensor is described as photosensitive point 11-11, and the position area of the 9 photosensitive points in the upper-left corner of the photosensitive surface (where photosensitive points 11-11 to 11-33 are located) is described as photosensitive block 11. Correspondingly, the blocks in the same row are described in turn as photosensitive block 12, photosensitive block 13, ..., photosensitive block 1b, and the blocks in the same column in turn as photosensitive block 21, photosensitive block 31, ..., photosensitive block a1.
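The block-and-point naming scheme above can be sketched in code. This is an illustrative helper only (the function name and the 3×3-points-per-block, one-pixel-per-point assumptions are ours, not the patent's): it maps an absolute pixel coordinate on the photosensitive surface to the patent's 1-indexed block/point labels, e.g. "11-11".

```python
# Illustrative sketch of the logical division of the photosensitive surface:
# map an absolute pixel coordinate to (photosensitive block, photosensitive
# point) indices, assuming 3x3 photosensitive points per block and one pixel
# per photosensitive point.
POINTS_PER_BLOCK_SIDE = 3  # 3x3 photosensitive points per block

def locate(row, col):
    """Return ((block_row, block_col), (point_row, point_col)), 1-indexed
    like the patent's naming scheme (e.g. photosensitive point 11-11)."""
    block = (row // POINTS_PER_BLOCK_SIDE + 1, col // POINTS_PER_BLOCK_SIDE + 1)
    point = (row % POINTS_PER_BLOCK_SIDE + 1, col % POINTS_PER_BLOCK_SIDE + 1)
    return block, point

# The pixel at the very top-left corner belongs to photosensitive block 11,
# photosensitive point 11 -- written "11-11" in the text above.
assert locate(0, 0) == ((1, 1), (1, 1))
assert locate(5, 3) == ((2, 2), (3, 1))
```

The division is purely logical, as the text notes: no physical boundary exists on the sensor, only this arithmetic over pixel positions.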
  • the HDR image imaging system further includes an optical film array 20 disposed on the surface of the photosensitive surface.
  • the thickness of the optical film array 20 corresponds to the light transmittance of the optical film array.
  • The thicknesses of the optical film array 20 at different photosensitive points in the same photosensitive block are different, and the spatial distributions of the thickness of the optical film array at different photosensitive blocks are consistent.
  • FIG. 4 shows the structural characteristics of the image sensor 10 and the optical film array 20 in a photosensitive block.
  • The photosensitive block contains 4 photosensitive points, and corresponding optical thin film arrays are provided over the 4 photosensitive points.
  • The thicknesses of the optical film array at the four photosensitive points are different, namely d1, d2, d3, and d4, and d1, d2, d3, and d4 are also used below to denote the photosensitive points at those four positions.
  • the optical film array has a certain light transmittance, and the light transmittance usually shows a non-linear decreasing state as the film thickness increases.
  • The thicknesses of the optical film array at the four photosensitive points in the photosensitive block are, in order from small to large, d1, d2, d3, and d4; therefore, the light transmittances at the photosensitive points d1, d2, d3, and d4 attenuate in sequence.
  • When the photosensitive block performs imaging by light signal collection, the exposure time of all photosensitive points on the same image sensor is the same while the light transmittances at photosensitive points d1, d2, d3, and d4 differ, so the exposures of the light signals collected at d1, d2, d3, and d4 differ, and photosensitive points d1, d2, d3, and d4 collect image signals of correspondingly different brightness ranges.
  • Thus each photosensitive block can be regarded as a super pixel; since the super pixel contains photosensitive points with different light transmittances, each super pixel collects image signals over multiple brightness ranges, and the number of brightness ranges corresponds to the number of distinct thickness values.
  • "The spatial distribution of the thickness values is the same" means that if two photosensitive blocks each contain photosensitive points whose optical-film thicknesses belong to the thickness value space {d1, d2, d3, ..., dn}, then, without limiting which specific thickness value sits at which specific position, the spatial distributions of the thickness values of the optical film array at the two photosensitive blocks are consistent, that is, both are {d1, d2, d3, ..., dn}.
  • For example, if the thickness values of the optical film array at a photosensitive block include the four values d1, d2, d3, and d4, the optical films with these four thicknesses cover the four photosensitive points in that block; in the other photosensitive blocks of the image sensor, the thicknesses of the optical film array at the four photosensitive points also include the four values d1, d2, d3, and d4. It should be noted that in this embodiment the relative position of the optical film with a specific thickness value at a specific photosensitive point is not limited; each photosensitive block only needs to contain an optical film with each of these thickness values.
  • Photosensitive block A and photosensitive block B each include 4 photosensitive points; the thickness values at the four photosensitive points A-11, A-12, A-21, and A-22 of photosensitive block A are d3, d4, d2, and d1 respectively, so the spatial distribution of the thickness of the optical film array at photosensitive block A is the set of four thickness values {d1, d2, d3, d4}.
  • the thickness values of the four photosensitive points B-11, B-12, B-21, and B-22 of the photosensitive block B are d4, d3, d1, and d2, respectively.
  • The spatial distribution of the thickness values of the optical film array at photosensitive block B is also the set of four thickness values {d1, d2, d3, d4}; the relative position of thickness value d1 in photosensitive block A is A-22, while in photosensitive block B it is B-21. That is to say, in this thickness distribution, the relative positions of the photosensitive points corresponding to the same thickness value may differ between photosensitive blocks.
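The consistency requirement just described can be expressed as a simple multiset comparison. This is an illustrative check of ours, not code from the patent; the thickness numbers are arbitrary placeholders, and the 2×2 layouts mirror the blocks A and B of the example above.

```python
# Illustrative check: two photosensitive blocks share a consistent thickness
# value space when the multiset of optical-film thicknesses over their
# photosensitive points is identical, even if the thicknesses sit at
# different relative positions within each block.
def same_value_space(block_a, block_b):
    flatten = lambda blk: sorted(t for row in blk for t in row)
    return flatten(block_a) == flatten(block_b)

d1, d2, d3, d4 = 100, 200, 300, 400  # nominal thicknesses, units arbitrary
block_A = [[d3, d4],
           [d2, d1]]  # A-11=d3, A-12=d4, A-21=d2, A-22=d1 (example above)
block_B = [[d4, d3],
           [d1, d2]]  # B-11=d4, B-12=d3, B-21=d1, B-22=d2
assert same_value_space(block_A, block_B)  # both spaces are {d1, d2, d3, d4}
```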
  • the optical film array may be set to have the same thickness at the photosensitive points in the same relative position in different photosensitive blocks.
  • Photosensitive block A includes four photosensitive points A-11, A-12, A-21, and A-22, and photosensitive block B includes four photosensitive points B-11, B-12, B-21, and B-22; A-11 and B-11 are at the same relative position and share thickness d3, and A-12 and B-12 are at the same relative position and share thickness d4.
  • With the spatial distribution and thickness distribution of the optical film array set this way in each photosensitive block, when the image signals collected by the photosensitive points with the same light transmittance are later extracted from each photosensitive block, the extraction can be completed directly by relative position, without using additional storage space to record, for each photosensitive block, the relative position of the photosensitive point under each thickness value, which reduces the space complexity.
  • The HDR image imaging system further includes a processor 30 connected to the image sensor 10, which is used to generate at least two material images from the optical signals collected by the image sensor 10 at the photosensitive points under the same optical film array thickness in each photosensitive block, and to obtain an HDR image by fusing the at least two material images.
  • FIG. 10 shows the whole process and principle, from signal acquisition, to extraction of LDR images as material images, to fusion of the material images into an HDR image.
  • Specifically, each photosensitive point on the image sensor 10 collects a corresponding optical signal and transforms it into an electrical signal that can be encoded as a pixel value.
  • The processor 30 may acquire the optical signals collected by the image sensor at the photosensitive points in each photosensitive block, generate the pixel values corresponding to the photosensitive points from the optical signals, and then extract the pixel values corresponding to the photosensitive points under the same optical film array thickness in each photosensitive block to generate at least two material images, one optical film array thickness corresponding to one material image.
  • the at least two material images are fused to obtain an HDR image.
  • For example, the photosensitive surface of the image sensor 10 includes 1920×1080 photosensitive blocks, numbered 11, 12, 21, ..., up to ab, where a is 1920 and b is 1080; each photosensitive block contains 3×3 photosensitive points, numbered 11, 12, 21, ..., 33 within the same photosensitive block,
  • so that photosensitive point 11-11 is the photosensitive point numbered 11 in photosensitive block 11.
  • the processor 30 may be used to extract pixel values corresponding to the photosensitive points at the same thickness of the optical film array in each photosensitive block to generate at least two material images, and one optical film array thickness corresponds to one material image.
  • The processor can be used to acquire, for each of the photosensitive blocks 11, 12, 21, ..., up to ab, the pixel value corresponding to the photosensitive point at the position where the thickness of the optical film array is d1.
  • For example, if the photosensitive point at the position where the thickness of the optical film array in photosensitive block 12 is d1 is photosensitive point 12-12,
  • the pixel value corresponding to photosensitive point 12-12 is obtained, and so on, until the pixel corresponding to the d1-thickness photosensitive point in the photosensitive block ab at the lower-right corner is obtained.
  • In total, 1920×1080 pixels corresponding to thickness d1 can finally be taken out (one per photosensitive block), from which a material image with a resolution of 1920×1080 can be generated.
  • Here the photosensitive block is regarded as a super pixel, and the resolution of the image sensor 10 in terms of super pixels is likewise 1920×1080; that is to say, the image sensor 10 can use the photosensitive block as a sub-unit of image acquisition and generate material images whose resolution corresponds to the number and arrangement of the photosensitive blocks.
  • A 3×3 photosensitive block contains 9 optical film array thickness values d1 to d9; correspondingly, 3×3 = 9 material images can be extracted. Since each material image corresponds to a single thickness value of the optical film array, all pixels in the same material image correspond to the same light transmittance, so the pixels in one material image express the image details of one brightness range (degree of exposure). Each material image corresponds to a different thickness value, that is, to a different light transmittance, so the material images express image details over different brightness ranges (exposure levels).
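Under the simpler layout variant, where the same thickness sits at the same relative position in every photosensitive block, the extraction above reduces to a strided slice over the sensor frame. The sketch below is illustrative (function name, toy frame size, and one-pixel-per-point assumption are ours, not the patent's):

```python
import numpy as np

# Illustrative material-image extraction, assuming one pixel per photosensitive
# point and the same film thickness at the same relative position in every
# block: with 3x3 photosensitive points per block, a frame splits into 9
# material images, one per thickness value.
def extract_material_images(frame, block_side=3):
    h, w = frame.shape
    assert h % block_side == 0 and w % block_side == 0
    # Material image (i, j) collects the photosensitive point at relative
    # position (i, j) of every block -- a simple strided slice.
    return [frame[i::block_side, j::block_side]
            for i in range(block_side)
            for j in range(block_side)]

frame = np.arange(36).reshape(6, 6)      # toy 6x6 sensor -> 2x2 blocks
materials = extract_material_images(frame)
assert len(materials) == 9               # 3x3 = 9 material images
assert materials[0].shape == (2, 2)      # one pixel per photosensitive block
# Relative position (1,1) of each block, i.e. frame[0::3, 0::3]:
assert materials[0].tolist() == [[0, 3], [18, 21]]
```

On a real 1920×1080-block sensor the same slicing yields nine 1920×1080 material images, matching the super-pixel resolution described above.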
  • In other embodiments, a photosensitive point may be set to correspond to 4 pixels, with the photosensitive blocks still set to 1920×1080 and each photosensitive block still set to contain 3×3 photosensitive points.
  • In this case, when the processor generates a 1920×1080 material image and obtains the pixel value corresponding to a photosensitive point, the pixel values converted from the optical signals of the 4 pixels corresponding to that photosensitive point can be normalized and averaged, or weighted-averaged.
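The averaging step just mentioned can be sketched as follows. The helper name and weight values are hypothetical; the patent only states that a normalized average or a weighted average may be used when one photosensitive point covers several pixels.

```python
import numpy as np

# Illustrative collapse of a photosensitive point that covers a 2x2 patch of
# pixels into a single value, via a plain average or a weighted average
# (helper name and weights are hypothetical, not from the patent).
def point_value(patch, weights=None):
    patch = np.asarray(patch, dtype=float)
    if weights is None:                    # plain (normalized) average
        return patch.mean()
    weights = np.asarray(weights, dtype=float)
    return (patch * weights).sum() / weights.sum()  # weighted average

assert point_value([[10, 20], [30, 40]]) == 25.0
# Weighting one pixel more heavily pulls the point value toward it:
assert point_value([[10, 20], [30, 40]], [[1, 1], [1, 5]]) == 32.5
```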
  • the processor 30 may fuse the at least two material images to obtain an HDR image.
  • When the processor fuses the material images to generate an HDR image, it is not necessary to fuse all of the material images into the HDR image: the acquired material images can be filtered, or only part of the material images can be obtained through a certain strategy and then fused into an HDR image.
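The patent does not fix a particular fusion algorithm, so the sketch below is only one plausible choice: a per-pixel "well-exposedness" weighting in the spirit of standard exposure fusion, where each material image contributes most where its values sit near mid-gray. Function name, sigma, and the toy images are ours.

```python
import numpy as np

# Minimal exposure-fusion-style sketch (NOT the patent's algorithm, which is
# unspecified): weight each material image per pixel by a Gaussian around
# mid-gray, normalize the weights, and blend.
def fuse(materials, sigma=0.2):
    stack = np.stack([m.astype(float) / 255.0 for m in materials])
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0)        # normalize weights per pixel
    return (weights * stack).sum(axis=0)  # weighted per-pixel blend

dark   = np.full((2, 2),  40, dtype=np.uint8)  # under-exposed material image
mid    = np.full((2, 2), 128, dtype=np.uint8)  # normally exposed
bright = np.full((2, 2), 230, dtype=np.uint8)  # over-exposed
hdr = fuse([dark, mid, bright])
assert hdr.shape == (2, 2)
assert 0.0 <= hdr.min() and hdr.max() <= 1.0   # blended into [0, 1]
```

Because the weights are normalized per pixel, dropping some material images (as the text allows) only changes which exposures contribute, not the structure of the computation.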
  • the optical film array 20 may be set to have the same thickness at the photosensitive points in the same relative position in different photosensitive blocks.
  • the processor may be used to generate at least two material images according to the light signals collected by the image sensor at the same relative position in each photosensitive block, and obtain an HDR image by fusing the at least two material images.
  • For example, the processor can obtain, from photosensitive blocks 11, 12, ..., up to ab, the pixel values at positions 11-11, 12-11, 21-11, ..., ab-11 to generate the material image corresponding to d1; if the relative position within a photosensitive block of the photosensitive point corresponding to thickness value d2 is 23, the processor can obtain, from photosensitive blocks 11, 12, 21, ..., up to ab, the pixel values at positions 11-23, 12-23, 21-23, ..., ab-23 to generate the material image corresponding to d2.
  • In this way, when generating material images in which all pixels correspond to the same thickness value, i.e. the same light transmittance, the processor 30 can directly obtain the pixels of the photosensitive points at the corresponding positions according to the rule, without having to confirm, during acquisition, the relative position of the photosensitive point corresponding to each thickness value within a photosensitive block, thereby improving execution efficiency.
  • The manner in which the optical thin film array is disposed on the surface of the image sensor includes at least one of deposition, patterning, or etching processes. That is to say, the optical thin film array can be formed by secondary processing of the photosensitive surface of the image sensor 10; the secondary processing may include depositing optical media of different thicknesses at different locations, or etching the photosensitive surface to change its thickness at different positions.
  • In addition, an anti-reflection coating may be deposited on the surface of the optical thin film array at pixels requiring high transmittance, to further increase the transmittance.
  • The optical film array may be arranged such that the distribution of light transmittance corresponding to its thickness at the different photosensitive points in the same photosensitive block exhibits a nonlinear attenuation.
  • The transmittance of the optical medium of the optical thin film array has a nonlinear attenuation relationship with thickness; therefore, when the optical thin film array is set up, the thickness can be set to attenuate linearly in equal steps, which realizes a nonlinear attenuation of the optical thin film array's light transmittance.
  • t_k = n^-(k-1) · t_1, where t_1 is the highest light transmittance, t_k is the light transmittance at the k-th thickness value, and n is the basic attenuation rate (so t_1 is recovered at k = 1).
  • In practice, an optical thin film array made of an optical medium whose transmittance attenuates in proportion to the thickness value can be selected, so that when setting the thickness values of the different photosensitive points in the same photosensitive block, an equal-difference (arithmetic) setting can be used.
  • The light transmittance corresponding to each photosensitive point in the 3×3 photosensitive block is then attenuated in equal ratio. With this light-transmittance or thickness setting, more dynamic bits can be used to record data in the high-brightness range, highlighting details in bright areas.
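The geometric transmittance ladder can be computed directly from the formula above. Note that the index convention (t_1 recovered at k = 1, i.e. t_k = t_1 / n^(k-1)) is our reconstruction of a garbled equation in the source, and the values t_1 = 0.9 and n = 2 are arbitrary examples:

```python
# Illustrative geometric transmittance ladder: with basic attenuation rate n,
# the transmittance at the k-th thickness value decays in equal ratio from
# the highest transmittance t_1 (index convention assumed: t_k = t_1 / n**(k-1)).
def transmittance(k, t1=0.9, n=2.0):
    return t1 / n ** (k - 1)

ladder = [transmittance(k) for k in range(1, 10)]  # 3x3 block -> 9 values
assert ladder[0] == 0.9                            # t_1 is the highest value
# Consecutive values differ by the constant ratio n, i.e. equal-ratio decay:
assert all(a / b == 2.0 for a, b in zip(ladder, ladder[1:]))
```

With equal-ratio transmittances, each step of the ladder effectively halves the exposure, which is what lets the brightest scene regions be recorded without saturation.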
  • The optical film array may also be arranged such that the thickness distribution at the different photosensitive points in the same photosensitive block is non-monotonic or staggered.
  • FIG. 11 shows the thickness distribution of the photosensitive points in an m×n photosensitive block, in which the shades of gray represent the thickness values. It can be seen from FIG. 11 that the thickness values of the photosensitive points across the m×n photosensitive block vary unevenly, showing a non-monotonic change, and the thicknesses of the photosensitive points adjacent to a given photosensitive point differ greatly from it.
  • The advantage is that when a sub-region of the m×n photosensitive block (comprising several photosensitive points) is affected by noise, other regions of the block contain photosensitive points whose light transmittance is close to that of the photosensitive points in the affected sub-region, so the sampling of any given brightness range is not disturbed by localized noise.
  • With the thickness distribution of the optical film array in the same photosensitive block set to a non-monotonic or staggered arrangement, the noise resistance is further improved.
  • If the photosensitive point M_(x,y) is lost due to saturation or falling below the noise level, its neighboring photosensitive points {M_(x-1,y), M_(x+1,y), M_(x,y-1), M_(x,y+1)} must still store valid information in the imaging area.
  • The value of the lost area can then be obtained by interpolation. Compared with directly fitting the known pixel values to calculate the missing area, fitting a smooth surface to the known pixel values is more conducive to noise suppression; the approximate pixel values on the smooth surface are then used to interpolate the missing signal.
  • The neighborhood of known photosensitive points can be further expanded, for example by replacing the known pixel value of a photosensitive point with the average of the pixels in its neighborhood along a certain direction.
  • an HDR image imaging method is also provided correspondingly. The method is executed as a computer program and can run on the processor 30 of the aforementioned HDR image imaging system;
  • the processor may be a chip embedded in the image sensor 10, or another external von Neumann architecture computer device connected to the image sensor 10.
  • the method includes:
  • Step S102: obtain the optical signals collected by the image sensor at the photosensitive points within each photosensitive block, and generate pixel values corresponding to the photosensitive points from the optical signals.
  • Step S104: extract the pixel values of the photosensitive points located where the optical film array has the same thickness in each photosensitive block to generate at least two material images, one optical film array thickness corresponding to one material image.
  • Step S106: fuse the at least two material images to obtain an HDR image.
  • an HDR image imaging device is also provided correspondingly. Specifically, as shown in FIG. 13, it includes a signal acquisition module 102, a material image extraction module 104, and an image fusion module 106, where:
  • the signal acquisition module 102 is configured to acquire the optical signals collected by the image sensor at the photosensitive points within each photosensitive block, and to generate pixel values corresponding to the photosensitive points from the optical signals.
  • the material image extraction module 104 is configured to extract the pixel values of the photosensitive points located where the optical film array has the same thickness in each photosensitive block to generate at least two material images, one optical film array thickness corresponding to one material image.
  • the image fusion module 106 is configured to fuse the at least two material images to obtain an HDR image.
  • the "consistent" mentioned above means identical, or identical within a certain error range. The semiconductor manufacturing process inevitably causes the actual thickness of some of the optical thin-film media to deviate from the theoretically designed thickness, but as long as the deviation is within a certain error range it only affects the range of the film medium's transmittance. When the overlap of the transmittance ranges of two photosensitive points exceeds a certain preset ratio, their transmittances can be considered consistent, or equivalently the thickness value sets or thickness distributions of the film media at those points can be considered consistent.
  • FIG. 14 shows a von Neumann architecture computer system running the above HDR image imaging method. Specifically, it may include an external input interface 1001, a processor 1002, a memory 1003, and an output interface 1004 connected by a system bus.
  • the external input interface 1001 may optionally include at least a network interface 10012 and a USB interface 10014.
  • the memory 1003 may include an external memory 10032 (such as a hard disk, an optical disc, or a floppy disk) and an internal memory 10034.
  • the output interface 1004 may include at least devices such as a display screen 10042.
  • the method runs as a computer program whose program files are stored in the external memory 10032 of the aforementioned von Neumann architecture computer system 10, loaded into the internal memory 10034 at run time, then compiled into machine code and passed to the processor 1002 for execution, so that a logical signal acquisition module 102, material image extraction module 104, and image fusion module 106 are formed in the von Neumann architecture computer system 10.
  • the input parameters are received through the external input interface 1001, passed to the memory 1003 for buffering, and then input to the processor 1002 for processing.
  • the processed result data may be cached in the memory 1003 for subsequent processing, or passed to the output interface 1004 for output.
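The "preset overlap ratio" criterion for treating two transmittances as consistent can be sketched as follows. This is a minimal reading of the criterion, not the patent's specification: the function name, the (lo, hi) range representation, and the choice of measuring overlap against the smaller range are all illustrative assumptions.

```python
def ranges_consistent(r1, r2, min_overlap_ratio=0.8):
    """Treat two transmittance ranges (lo, hi) as 'consistent' when their
    overlap covers at least min_overlap_ratio of the smaller range."""
    lo = max(r1[0], r2[0])
    hi = min(r1[1], r2[1])
    overlap = max(0.0, hi - lo)          # zero when the ranges are disjoint
    smaller = min(r1[1] - r1[0], r2[1] - r2[0])
    return overlap >= min_overlap_ratio * smaller

# Two points whose manufactured thicknesses deviate slightly still count
# as consistent; clearly different thicknesses do not.
ranges_consistent((0.48, 0.52), (0.485, 0.525))   # largely overlapping
ranges_consistent((0.10, 0.20), (0.40, 0.50))     # disjoint
```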

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present invention disclose an HDR image imaging system, comprising an image sensor (10). The image sensor (10) comprises a photosensitive surface, the photosensitive surface being divided into at least two groups of arrayed photosensitive blocks, one photosensitive block comprising at least two photosensitive points, each photosensitive point corresponding to at least one pixel; and an optical thin-film array (20) disposed on the photosensitive surface, the thickness of the optical thin-film array (20) corresponding to the transmittance of the optical thin-film array, the thickness of the optical thin-film array (20) differing at the positions of different photosensitive points within the same photosensitive block, and the thickness distribution of the optical thin-film array (20) being consistent across the positions of different photosensitive blocks. The system increases the speed of HDR image imaging while requiring less computation and executing more efficiently.

Description

HDR image imaging method, apparatus and system
Technical Field
The present invention relates to the field of image processing, and in particular to an HDR image imaging method, apparatus and system.
Background Art
HDR (High Dynamic Range) imaging is, in computer graphics and cinematography, a set of techniques for achieving a larger exposure dynamic range (i.e., a larger difference between light and dark) than ordinary digital imaging; compared with ordinary images, it provides greater dynamic range and more image detail. Thanks to its ability to faithfully reproduce the wide radiance range of the real world, HDR imaging has been widely used in photography, security surveillance, image display, and other fields.
In conventional technology, HDR imaging is implemented by capturing images over different brightness ranges under different exposure conditions, either synchronously or asynchronously, to enhance the capture of detail, and then synthesizing an HDR image by image fusion.
Current HDR imaging typically uses a single image-recording device that varies the exposure time to record LDR images under different exposure conditions in sequence, and then merges the multiple LDR images into an HDR image. This asynchronous approach must capture multiple images at different times and therefore requires extra exposure time, which makes the HDR imaging process slow and insufficiently real-time.
To solve the real-time problem, the prior art has developed multi-camera HDR imaging systems, in which a stereo rig of several cameras shoots the same scene simultaneously. The drawback of this approach is that calibration between the camera positions is extremely complex: disparity between the different camera systems must be computed before HDR synthesis to ensure that the scene images captured from different positions coincide, which greatly increases the amount of computation and lowers execution efficiency.
Summary of the Invention
On this basis, to solve the technical problems in the prior art that single-camera HDR image imaging is slow and that multi-camera HDR image imaging requires disparity computation and is therefore computationally expensive, an HDR image imaging system is proposed.
An HDR image imaging system, comprising:
an image sensor, the image sensor comprising a photosensitive surface, the photosensitive surface being divided into at least two groups of arrayed photosensitive blocks, one photosensitive block comprising at least two photosensitive points, each photosensitive point corresponding to at least one pixel;
an optical thin-film array disposed on the photosensitive surface, the thickness of the optical thin-film array corresponding to the transmittance of the optical thin-film array, the thickness of the optical thin-film array differing at the positions of different photosensitive points within the same photosensitive block, and the set of thickness values of the optical thin-film array being consistent across the positions of different photosensitive blocks.
In one embodiment, the HDR image imaging system further comprises:
a processor connected to the image sensor, configured to generate at least two material images from the optical signals collected by the photosensitive points of the image sensor located where the optical thin-film array has the same thickness in each photosensitive block, and to obtain an HDR image by fusing the at least two material images.
In one embodiment, the set of thickness values of the optical thin-film array being consistent across the positions of different photosensitive blocks means that:
the thickness of the optical thin-film array is the same at photosensitive points at the same relative position within different photosensitive blocks.
In one embodiment, the processor is configured to generate at least two material images from the optical signals collected by the photosensitive points of the image sensor at the same relative position within each photosensitive block, and to obtain an HDR image by fusing the at least two material images.
In one embodiment, the thickness distribution of the optical thin-film array at the positions of the different photosensitive points within the same photosensitive block is non-monotonic or staggered.
In one embodiment, the optical thin-film array is disposed on the surface of the image sensor by at least one of deposition, patterning, or etching.
In one embodiment, the transmittances corresponding to the thicknesses of the optical thin-film array at the positions of the different photosensitive points within the same photosensitive block decay non-linearly.
In one embodiment, the transmittances corresponding to the thicknesses of the optical thin-film array at the positions of the different photosensitive points within the same photosensitive block decay geometrically.
In addition, to solve the technical problems in the prior art that single-camera HDR image imaging is slow and that multi-camera HDR image imaging requires disparity computation and is therefore computationally expensive, an HDR image imaging method run on the processor of the aforementioned HDR image imaging system is proposed.
An HDR image imaging method, based on the processor of the aforementioned HDR image imaging system, the method comprising:
acquiring the optical signals collected by the photosensitive points of the image sensor within each photosensitive block, and generating pixel values corresponding to the photosensitive points from the optical signals;
extracting the pixel values of the photosensitive points located where the optical thin-film array has the same thickness in each photosensitive block to generate at least two material images, one optical thin-film array thickness corresponding to one material image;
fusing the at least two material images to obtain an HDR image.
In addition, to solve the same technical problems, an HDR image imaging apparatus corresponding to the aforementioned HDR image imaging method is proposed.
An HDR image imaging apparatus, comprising:
a signal acquisition module, configured to acquire the optical signals collected by the photosensitive points of the image sensor within each photosensitive block, and to generate pixel values corresponding to the photosensitive points from the optical signals;
a material image extraction module, configured to extract the pixel values of the photosensitive points located where the optical thin-film array has the same thickness in each photosensitive block to generate at least two material images, one optical thin-film array thickness corresponding to one material image;
an image fusion module, configured to fuse the at least two material images to obtain an HDR image.
Implementing embodiments of the present invention has the following beneficial effects:
With the above HDR image imaging system and the corresponding method and apparatus, compared with the conventional single-camera approach, all photosensitive points are exposed simultaneously when the LDR material images are acquired. The differing transmittances produced by the differing thicknesses of the optical thin-film array at the individual photosensitive points control the exposure level of each LDR image used for fusion, so that the material images are generated in a single simultaneous exposure and sampling of all photosensitive points, without having to wait for one exposure to finish before shooting the next LDR material image as in the conventional single-camera approach; it therefore takes less time.
Compared with the conventional multi-camera approach, the photosensitive points within the same photosensitive block of the image sensor are so close together that disparity is negligible; imaging therefore produces no disparity that could cause image fusion errors, the computational redundancy is low, and execution efficiency is improved.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of the principle of HDR image imaging;
FIG. 2 is a schematic diagram of the principle of generating an HDR image by multiple exposures of a single camera in the conventional technology;
FIG. 3 is a schematic diagram of the principle of generating an HDR image by simultaneous exposure of multiple cameras from different viewpoints in the conventional technology;
FIG. 4 is a schematic diagram of the structure of the image sensor and the optical thin-film array of an HDR image imaging system in one embodiment;
FIG. 5 is a schematic diagram of the division of the photosensitive surface of the image sensor into photosensitive blocks and photosensitive points in one embodiment;
FIG. 6 is a schematic diagram of a conventional CMOS image sensor;
FIG. 7 is a schematic diagram of a CMOS image sensor whose photosensitive surface is provided with an optical thin-film array in one embodiment;
FIG. 8 is a schematic diagram of the thickness distribution of the optical thin-film array corresponding to the photosensitive points in different photosensitive blocks in one embodiment;
FIG. 9 is a schematic diagram of the thickness distribution of the optical thin-film array corresponding to the photosensitive points in different photosensitive blocks in another embodiment;
FIG. 10 is a schematic diagram of the process by which an HDR image imaging system generates an HDR image in one embodiment;
FIG. 11 is a schematic diagram of a non-monotonic or staggered thickness distribution of the optical thin-film array corresponding to the photosensitive points within the same photosensitive block in one embodiment;
FIG. 12 is a flowchart of an HDR image imaging method based on an HDR image imaging system in one embodiment;
FIG. 13 is a schematic diagram of an HDR image imaging apparatus based on an HDR image imaging system in one embodiment;
FIG. 14 is a schematic diagram of the composition of the computer system running the aforementioned HDR image imaging method in one embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art on the basis of these embodiments without creative effort fall within the scope of protection of the present invention.
In conventional technology, pictures taken by an ordinary camera have a limited dynamic range, usually only 256 brightness levels, so the resulting pictures are LDR (Low Dynamic Range) images. When a picture is taken, different ambient lighting produces different exposure effects in the generated LDR image, losing detail in part of the brightness range.
For example, as shown in FIG. 1, an ordinary camera photographing the same scene produces the under-exposed LDR image 1 in dim light, the normally exposed LDR image 2 in normal light, and the over-exposed LDR image 3 in strong light. As can be seen from FIG. 1, LDR image 1 preserves scene detail well against a high-brightness background (bright areas such as the sky in the figure), LDR image 3 preserves detail well against a low-brightness background (dark and shadowed areas in the figure), while LDR image 2 is relatively balanced, with detail under both high-brightness and low-brightness backgrounds inferior to LDR images 1 and 3.
The HDR imaging principle of the conventional technology merges the LDR1, LDR2, and LDR3 images of different exposure levels described above. Observing FIG. 1, the merged HDR1 image draws more on the detail of LDR1 for the high-brightness background and more on the detail of LDR3 for the low-brightness background, so that the dynamic range of the generated HDR image is extended and every part of the image is displayed at an appropriate brightness, showing the detail of the captured scene more clearly.
The conventional single-camera HDR image imaging method is shown in FIG. 2. In FIG. 2, a single camera generates LDR images of different brightness ranges by controlling the exposure time and then merges them. The single camera is aimed at the same scene and is not moved during shooting, ensuring that all exposures capture the same scene. First a short exposure time t1 captures the under-exposed, overall darker LDR1; then a medium exposure time t2 captures the normally exposed, medium-brightness LDR2; then a longer exposure time t3 captures the over-exposed, overall brighter LDR3; merging LDR1, LDR2, and LDR3 yields the HDR image.
However, as FIG. 2 also shows, because only a single camera does the capturing, the exposure times t1, t2, and t3 must be waited for in sequence, so the total exposure time is t1+t2+t3, which makes conventional HDR image imaging slow.
The conventional multi-camera HDR image imaging method is shown in FIG. 3. In FIG. 3, lens 1 uses a short exposure time t1 to capture the under-exposed, overall darker LDR1, while lens 2 uses a longer exposure time t2 to capture the highly exposed, overall brighter LDR2. Since lens 1 and lens 2 capture simultaneously, the total exposure time is the larger of the two, t2; compared with the single-camera HDR method, it takes less time and images faster.
However, as FIG. 3 also shows, the distance between lens 1 and lens 2 creates a disparity in viewpoint on the scene, so the LDR picture taken by lens 1 and the LDR picture taken by lens 2 do not coincide exactly. A large amount of computation is then needed to remove the inconsistency between the LDR pictures used for synthesis caused by the disparity, so efficiency is low.
To solve the above technical problems of conventional HDR imaging technology, namely that single-lens capture takes a long time and multi-lens capture requires heavy disparity computation, the present invention proposes an HDR image imaging system and an HDR image imaging method and apparatus based on the system.
Specifically, an implementation of the HDR image imaging system proposed by the present invention is shown in FIGS. 4 and 5. In this embodiment, the HDR image imaging system comprises:
an image sensor 10, comprising a photosensitive surface divided into at least two groups of arrayed photosensitive blocks, one photosensitive block comprising at least two photosensitive points, each photosensitive point corresponding to at least one pixel.
An image sensor, or photosensitive element, is a device that uses the photoelectric conversion of an optoelectronic device to convert the optical signal on its photosensitive surface into an electrical signal proportional to that optical signal. Image sensors mainly include the CCD (Charge-Coupled Device, a detecting element that represents signal magnitude by an amount of charge and transfers the signal by coupling) and the CMOS (Complementary Metal Oxide Semiconductor, a type of photosensitive element).
The image sensor 10 is usually a flat-panel structure comprising a photosensitive surface on which a number of pixels are arranged in order, i.e., sampling points that collect optical signals, convert them into electrical signals, and encode the electrical signals into the pixels of an image. In this embodiment, the pixels arranged on the photosensitive surface of the image sensor 10 are divided by position into several photosensitive blocks, and each photosensitive block is further divided into several photosensitive points. Referring to FIG. 5, in this embodiment the photosensitive surface of the image sensor 10 is divided into a rows and b columns, a×b photosensitive blocks in total, each containing 3×3 photosensitive points as shown in FIG. 5. If one photosensitive point corresponds to n pixels, the resolution of the image sensor 10 is a×b×3×3×n. For example, if the resolution of the image sensor 10 is 20 megapixels, it can be divided into 1980×1080 photosensitive blocks, each containing 9 photosensitive points, each photosensitive point corresponding to 1 pixel.
It should be noted that in this embodiment the division into photosensitive blocks and photosensitive points is a logical division of the pixels on the photosensitive surface of the image sensor 10, a description of the sets of pixels in particular position regions based on where the pixels lie on the surface. As in FIG. 5, the set of pixels at position 11-11 in the upper-left corner of the photosensitive surface of the image sensor is described as photosensitive point 11-11, and the position region of the 9 photosensitive points in the upper-left corner (the square region containing points 11-11 through 11-33) can be described as photosensitive block 11; correspondingly, the blocks in the same row as photosensitive block 11 are described in turn as photosensitive blocks 12, 13, ..., 1b, and those in the same column as photosensitive blocks 21, 31, ..., a1.
That is, the manufacturing process of the image sensor 10 does not need to give the photosensitive blocks and photosensitive points any explicit physical form; it suffices to apply their logical meaning during imaging.
In this embodiment, as shown in FIG. 4, the HDR image imaging system further comprises an optical thin-film array 20 disposed on the photosensitive surface. The thickness of the optical thin-film array 20 corresponds to its transmittance; the thickness of the optical thin-film array 20 differs at the positions of different photosensitive points within the same photosensitive block, and the set of thickness values of the optical thin-film array is consistent across the positions of different photosensitive blocks.
FIG. 4 shows the structure of the image sensor 10 and the optical thin-film array 20 within one photosensitive block. The block contains 4 photosensitive points covered by the optical thin-film array, but the thickness of the array differs at the 4 positions, being d1, d2, d3, and d4 respectively; below, d1 through d4 also denote the photosensitive points at those four positions. The optical thin-film array has a certain transmittance, which generally decreases non-linearly as the film thickness increases. Since the thickness of the array at the 4 points of the block increases in the order d1, d2, d3, d4, the transmittance at points d1, d2, d3, and d4 decreases in that order. When the block collects optical signals for imaging, the exposure time of all photosensitive points on the same image sensor is identical, but the differing transmittances at d1, d2, d3, and d4 give the optical signals collected at those points different exposure levels, so each of d1 through d4 samples an image signal of a corresponding brightness range. Extending this to the other photosensitive blocks of the surface, each block can be regarded as a super-pixel, and because each super-pixel contains photosensitive points of different transmittances, every super-pixel samples image signals over multiple brightness ranges, the number of ranges corresponding to the number of distinct thickness values.
Specifically, the structural features of a CMOS element provided with an optical thin-film array can be seen in FIGS. 6 and 7.
It should be noted that consistent sets of thickness values means: if two photosensitive blocks both contain photosensitive points whose corresponding film thicknesses belong to the thickness value set {d1, d2, d3, ..., dn}, without constraining which specific thickness value corresponds to which specific position, then the sets of thickness values of the optical thin-film array at the positions of those two blocks are consistent, namely both are {d1, d2, d3, ..., dn}.
For example, as shown in FIG. 8, if the thickness values of the optical thin-film array at the position of a photosensitive block are the four values d1, d2, d3, and d4, with films of those four thicknesses covering the four photosensitive points of that region, then in every other photosensitive block of the image sensor the thickness values of the film at the positions of its 4 photosensitive points are likewise the four values d1, d2, d3, and d4. It should be noted that this embodiment does not constrain the relative position of the film of a particular thickness value at a particular photosensitive point; it is only required that every block is provided with films of those thickness values.
As in FIG. 8, photosensitive blocks A and B each contain 4 photosensitive points. The thicknesses at the four points A-11, A-12, A-21, and A-22 of block A are d3, d4, d2, and d1 respectively, so the set of thickness values of the film at the position of block A is the set of four values {d1, d2, d3, d4}. The thicknesses at the four points B-11, B-12, B-21, and B-22 of block B are d4, d3, d1, and d2 respectively, so the set of thickness values at the position of block B is also {d1, d2, d3, d4}, while the relative position of thickness value d1 is A-22 in block A and B-21 in block B. That is, in the thickness distribution, the relative position within each block of the photosensitive point corresponding to a given thickness value may differ.
Further, to make extracting the pixels of the image convenient, the optical thin-film array may be arranged with the same thickness at photosensitive points at the same relative position within different photosensitive blocks.
For example, referring to FIG. 9, photosensitive block A contains the four points A-11, A-12, A-21, and A-22, and photosensitive block B contains the four points B-11, B-12, B-21, and B-22. A-11 and B-11 occupy the same relative position and both have thickness d3; A-12 and B-12 occupy the same relative position and both have thickness d4. Arranging the thickness value sets and thickness distributions of the optical thin-film array within the blocks in this way allows the later extraction of the image signals collected by the photosensitive points of equal transmittance in each block to be completed directly by relative position, without extra storage space to record, for each block, the relative position of the photosensitive points under the film of a given thickness value, which lowers the space complexity.
Further, as shown in FIG. 4, the HDR image imaging system further comprises a processor 30 connected to the image sensor 10, configured to generate at least two material images from the optical signals collected by the photosensitive points of the image sensor 10 located where the optical thin-film array has the same thickness in each photosensitive block, and to obtain an HDR image by fusing the at least two material images.
Referring to FIG. 10, FIG. 10 shows the whole process and principle from signal collection, through the extraction of the LDR images serving as material images, to the fusion of the material images into an HDR image. When the HDR image imaging system takes an image, every photosensitive point on the image sensor 10 collects a corresponding optical signal, which is converted into an electrical signal that can be encoded as a pixel value. The processor 30 can acquire the optical signals collected by the photosensitive points of the image sensor within each photosensitive block, generate the corresponding pixel values from the optical signals, then extract the pixel values of the photosensitive points located where the optical thin-film array has the same thickness in each block to generate at least two material images, one optical thin-film array thickness corresponding to one material image, and fuse the at least two material images into an HDR image.
Specifically, referring to FIG. 5, suppose the image sensor 10 has a resolution of 20 megapixels, one photosensitive point corresponds to one pixel, and the photosensitive surface of the image sensor 10 contains 1920×1080 photosensitive blocks numbered 11, 12, 21, ..., up to ab, where a is 1920 and b is 1080. Each block contains 3×3 photosensitive points, numbered 11, 12, 21, ..., 33 within the block, so photosensitive point 11-11 is point number 11 of photosensitive block 11.
The processor 30 can be used to extract the pixel values of the photosensitive points located where the optical thin-film array has the same thickness in each photosensitive block to generate at least two material images, one optical thin-film array thickness corresponding to one material image.
Referring again to FIG. 5, if the thickness of the optical thin-film array at photosensitive point 11-11 is d1, the processor can be used to obtain, from photosensitive blocks 11, 12, 21, ..., up to ab, the pixel value of the photosensitive point located where the film thickness is d1 in each block.
For example, if the point located at film thickness d1 within block 12 is point 12-12, the pixel value of point 12-12 is obtained, and so on, obtaining from blocks 11, 12, 21, ..., up to the block ab at the lower-right corner, the pixel at the point whose film thickness is d1. In the end, a total of 1920×1080 pixels corresponding to thickness d1 (one per photosensitive block) can be taken out, generating one material image of resolution 1920×1080. As noted above, regarding the photosensitive block as a super-pixel, the resolution of the image sensor 10 in super-pixels is also 1920×1080; that is, the image sensor 10 can use the photosensitive block as a sub-unit of image capture and generate material images whose resolution follows the number and arrangement of the blocks.
Likewise, since a 3×3 photosensitive block contains the 9 film thickness values d1 through d9, 3×3 material images can correspondingly be extracted. Because every material image corresponds to one film thickness value, i.e., all the pixels of one material image correspond to the same transmittance, the pixels of one material image all express the image detail of the same brightness range (exposure level); and since the material images correspond to different thickness values, i.e., different transmittances, the material images express image detail over different brightness ranges (exposure levels).
It should be noted that one photosensitive point may correspond to multiple pixels. For example, for an image sensor 10 with a resolution of 80 megapixels, one photosensitive point can be set to correspond to 4 pixels, with the photosensitive blocks still set to 1920×1080 and each block still containing 3×3 photosensitive points. In that case, when the processor generates the 1920×1080 material image and obtains the pixel value of a photosensitive point, it can normalize the pixel values converted from the optical signals of the point's 4 pixels by taking their mean or a weighted mean.
After obtaining the material images corresponding to d1 through d9, the processor 30 can fuse the at least two material images to obtain an HDR image.
It should be noted that when the processor fuses the material images to generate an HDR image, it does not need to fuse all material images into the HDR image; it may screen the acquired material images, or acquire only some of them according to a certain policy, and then fuse those into the HDR image. As in the preceding example, the 9 material images corresponding to d1 through d9 need not all be fused into the HDR image: if the shooting environment is brightly lit, the material images of larger thickness value and lower transmittance can preferentially be selected for fusion into the HDR image; if the shooting environment is dim, the material images of smaller thickness value and higher transmittance can preferentially be selected for fusion into the HDR image. This reduces the computation of image fusion and improves execution efficiency.
Further, as noted above, the optical thin-film array 20 may be arranged with the same thickness at photosensitive points at the same relative position within different photosensitive blocks. In that case, the processor can be used to generate at least two material images from the optical signals collected by the photosensitive points of the image sensor at the same relative position within each photosensitive block, and to obtain an HDR image by fusing the at least two material images.
As in the preceding example, if the relative position within a block of the photosensitive point corresponding to thickness value d1 is 11 (of the 9 relative positions 11, 12, 21, ..., 33), the processor can obtain, from blocks 11, 12, 21, ..., up to ab, the pixel values at positions 11-11, 12-11, 21-11, ..., ab-11 to generate the material image corresponding to d1; if the relative position of the point corresponding to thickness value d2 is 23, the processor can obtain, from blocks 11, 12, 21, ..., up to ab, the pixel values at positions 11-23, 12-23, 21-23, ..., ab-23 to generate the material image corresponding to d2.
It can further be seen that with the optical thin-film array arranged this way, when the processor 30 generates a material image whose pixels all correspond to the same thickness value, i.e., the same transmittance, it can directly fetch the pixel values of the photosensitive points at the corresponding positions by rule, without first confirming the relative position within a block of the points corresponding to that thickness value during the acquisition, which improves execution efficiency.
With the above HDR image imaging system, compared with the conventional single-camera approach, all photosensitive points are exposed simultaneously when the LDR material images are acquired. The differing transmittances produced by the differing thicknesses of the optical thin-film array at the individual photosensitive points control the exposure level of each LDR image used for fusion, so that the material images are generated in a single simultaneous exposure and sampling of all photosensitive points, without having to wait for one exposure to finish before shooting the next LDR material image as in the conventional single-camera approach; it therefore takes less time.
Compared with the conventional multi-camera approach, the photosensitive points within the same photosensitive block of the image sensor 10 are so close together that disparity is negligible; imaging therefore produces no disparity that could cause image fusion errors, the computational redundancy is low, and execution efficiency is improved.
Further, the optical thin-film array is disposed on the surface of the image sensor by at least one of deposition, patterning, or etching. That is, the optical thin-film array can be formed by secondary processing of the photosensitive surface of the image sensor 10; the secondary processing may include depositing optical media of different thicknesses at different positions, or etching the photosensitive surface to change its thickness at different positions.
In other embodiments, to further increase the dynamic range, an anti-reflection coating can additionally be deposited on the surface of the optical thin-film array over the pixels requiring high transmittance, increasing the transmittance.
Further, the optical thin-film array may be arranged so that the transmittances corresponding to the thicknesses at the different photosensitive points within the same photosensitive block decay non-linearly.
Since the transmittance of the optical medium of the film generally decays non-linearly with thickness, the thicknesses of the optical thin-film array can be set as a linear arithmetic progression to realize a non-linear decay of its transmittance.
Further, the transmittances corresponding to the thicknesses at the different photosensitive points within the same photosensitive block may decay geometrically, i.e., t_k = n^{-k} × t_1, where k indexes the thickness values, t_1 is the highest transmittance, t_k is the transmittance of the k-th thickness value, and n is the base decay rate. As in the preceding example, for a 3×3 photosensitive block the number of thickness values in the thickness value set is 9; with the transmittance of the point of least thickness being t_1, each t_k can be set in turn according to the formula.
Further, an optical thin-film array made of an optical medium whose transmittance decays geometrically with thickness can be selected, so that the thickness values of the different photosensitive points within the same block can simply be set as an arithmetic progression. As in the preceding example, for a 3×3 photosensitive block the film thicknesses at the points can be set to d1=d, d2=2d, d3=3d, ..., d9=9d, making the transmittances corresponding to the points of the 3×3 block decay geometrically. With this transmittance or thickness arrangement, more of the dynamic bits can be used to record data in the high-brightness range, bringing out detail in bright areas.
Further, the optical thin-film array may be arranged so that the thickness distribution at the positions of the different photosensitive points within the same photosensitive block is non-monotonic or staggered.
Referring to FIG. 11, FIG. 11 shows the thickness distribution of the photosensitive points within an m×n photosensitive block, where the gray level represents the magnitude of the thickness value. As can be seen from FIG. 11, the thickness values of the photosensitive points across the whole m×n block vary unevenly and non-monotonically, and the thicknesses of the points adjacent to any given photosensitive point differ considerably from it. The benefit of this arrangement is that when a sub-region of the m×n block (containing several photosensitive points) is noisy, the other regions of the block contain photosensitive points whose transmittances are close to those of the points in that sub-region, so the sampling of no large brightness range is disturbed by the noise. If instead the thickness increased monotonically along points 11, 12, 21, ..., mn, noise in one sub-region would disturb the image sampling of a contiguous large brightness range. In other words, setting the thickness distribution of the optical thin-film array within the same photosensitive block to a non-monotonic or staggered arrangement gives better noise resistance.
Further, the pixels {M_{x-1,y-1}, M_{x,y+1}, M_{x-1,y}, M_{x+1,y}} neighboring a photosensitive point M_{x,y} that is lost through saturation or through falling below the noise level are certain to have stored valid information of that imaging region. After normalizing the neighboring pixels by the transmittances T_{x,y} within the block, the value of the lost region can be found by interpolation. Compared with fitting the known pixel values directly to compute the lost region, first fitting a smooth surface to the known pixel values better suppresses noise; the approximate values of the pixels on the smooth surface are then interpolated to recover the lost signal. To make full use of the photosensitive points within a block, the neighborhood of known points can be further extended, e.g., by replacing the known pixel value of a point with the mean of the pixels in its neighborhood along some direction.
In one embodiment, corresponding to the aforementioned HDR image imaging system, an HDR image imaging method is also provided. The method is executed as a computer program and can run on the processor 30 of the aforementioned HDR image imaging system; the processor may be a chip embedded in the image sensor 10, or another external von Neumann architecture computer device connected to the image sensor 10.
Specifically, as shown in FIG. 12, the method comprises:
Step S102: acquiring the optical signals collected by the photosensitive points of the image sensor within each photosensitive block, and generating pixel values corresponding to the photosensitive points from the optical signals.
Step S104: extracting the pixel values of the photosensitive points located where the optical thin-film array has the same thickness in each photosensitive block to generate at least two material images, one optical thin-film array thickness corresponding to one material image.
Step S106: fusing the at least two material images to obtain an HDR image.
In one embodiment, corresponding to the above HDR image imaging method, an HDR image imaging apparatus is also provided. Specifically, as shown in FIG. 13, it comprises a signal acquisition module 102, a material image extraction module 104, and an image fusion module 106, wherein:
the signal acquisition module 102 is configured to acquire the optical signals collected by the photosensitive points of the image sensor within each photosensitive block, and to generate pixel values corresponding to the photosensitive points from the optical signals.
the material image extraction module 104 is configured to extract the pixel values of the photosensitive points located where the optical thin-film array has the same thickness in each photosensitive block to generate at least two material images, one optical thin-film array thickness corresponding to one material image.
the image fusion module 106 is configured to fuse the at least two material images to obtain an HDR image.
Implementing embodiments of the present invention has the following beneficial effects:
With the above HDR image imaging system and the corresponding method and apparatus, compared with the conventional single-camera approach, all photosensitive points are exposed simultaneously when the LDR material images are acquired. The differing transmittances produced by the differing thicknesses of the optical thin-film array at the individual photosensitive points control the exposure level of each LDR image used for fusion, so that the material images are generated in a single simultaneous exposure and sampling of all photosensitive points, without having to wait for one exposure to finish before shooting the next LDR material image as in the conventional single-camera approach; it therefore takes less time.
Compared with the conventional multi-camera approach, the photosensitive points within the same photosensitive block of the image sensor 10 are so close together that disparity is negligible; imaging therefore produces no disparity that could cause image fusion errors, the computational redundancy is low, and execution efficiency is improved.
It should be noted that the "consistent" above means identical, or identical within a certain error range. The semiconductor manufacturing process inevitably causes the actual thickness of some of the optical film media to deviate from the theoretically designed thickness value, but as long as the deviation is within a certain error range it only affects the range of the film medium's transmittance; when the overlap of the transmittance ranges of two photosensitive points exceeds a certain preset ratio, their transmittances can be considered consistent, or equivalently the thickness value sets or thickness distributions of the film media at those points can be considered consistent.
In one embodiment, as shown in FIG. 14, FIG. 14 shows a von Neumann architecture computer system running the above HDR image imaging method. Specifically, it may include an external input interface 1001, a processor 1002, a memory 1003, and an output interface 1004 connected by a system bus. The external input interface 1001 may optionally include at least a network interface 10012 and a USB interface 10014; the memory 1003 may include an external memory 10032 (such as a hard disk, optical disc, or floppy disk) and an internal memory 10034; the output interface 1004 may include at least devices such as a display screen 10042.
In this embodiment, the method runs as a computer program whose program files are stored in the external memory 10032 of the aforementioned von Neumann architecture computer system 10, loaded into the internal memory 10034 at run time, then compiled into machine code and passed to the processor 1002 for execution, so that a logical signal acquisition module 102, material image extraction module 104, and image fusion module 106 are formed in the von Neumann architecture computer system 10. During execution of the above HDR image imaging method, the input parameters are received through the external input interface 1001, passed to the memory 1003 for buffering, and then input to the processor 1002 for processing; the result data of the processing are either cached in the memory 1003 for subsequent processing or passed to the output interface 1004 for output.
What is disclosed above is only preferred embodiments of the present invention and certainly cannot be taken to limit the scope of the claims of the present invention; equivalent variations made according to the claims of the present invention therefore remain within the scope covered by the present invention.

Claims (10)

  1. An HDR image imaging system, characterized by comprising:
    an image sensor, the image sensor comprising a photosensitive surface, the photosensitive surface being divided into at least two groups of arrayed photosensitive blocks, one photosensitive block comprising at least two photosensitive points, each photosensitive point corresponding to at least one pixel;
    an optical thin-film array disposed on the photosensitive surface, the thickness of the optical thin-film array corresponding to the transmittance of the optical thin-film array, the thickness of the optical thin-film array differing at the positions of different photosensitive points within the same photosensitive block, and the set of thickness values of the optical thin-film array being consistent across the positions of different photosensitive blocks.
  2. The HDR image imaging system according to claim 1, characterized in that the HDR image imaging system further comprises:
    a processor connected to the image sensor, configured to generate at least two material images from the optical signals collected by the photosensitive points of the image sensor located where the optical thin-film array has the same thickness in each photosensitive block, and to obtain an HDR image by fusing the at least two material images.
  3. The HDR image imaging system according to claim 2, characterized in that the set of thickness values of the optical thin-film array being consistent across the positions of different photosensitive blocks is:
    the thickness of the optical thin-film array being the same at photosensitive points at the same relative position within different photosensitive blocks.
  4. The HDR image imaging system according to claim 3, characterized in that the processor is configured to generate at least two material images from the optical signals collected by the photosensitive points of the image sensor at the same relative position within each photosensitive block, and to obtain an HDR image by fusing the at least two material images.
  5. The HDR image imaging system according to any one of claims 1 to 4, characterized in that the thickness distribution of the optical thin-film array at the positions of the different photosensitive points within the same photosensitive block is non-monotonic or staggered.
  6. The HDR image imaging system according to any one of claims 1 to 4, characterized in that the optical thin-film array is disposed on the surface of the image sensor by at least one of deposition, patterning, or etching.
  7. The HDR image imaging system according to any one of claims 1 to 4, characterized in that the transmittances corresponding to the thicknesses of the optical thin-film array at the positions of the different photosensitive points within the same photosensitive block decay non-linearly.
  8. The HDR image imaging system according to claim 7, characterized in that the transmittances corresponding to the thicknesses of the optical thin-film array at the positions of the different photosensitive points within the same photosensitive block decay geometrically.
  9. An HDR image imaging method, characterized by being based on the processor in the HDR image imaging system according to any one of claims 2 to 8, the method comprising:
    acquiring the optical signals collected by the photosensitive points of the image sensor within each photosensitive block, and generating pixel values corresponding to the photosensitive points from the optical signals;
    extracting the pixel values of the photosensitive points located where the optical thin-film array has the same thickness in each photosensitive block to generate at least two material images, one optical thin-film array thickness corresponding to one material image;
    fusing the at least two material images to obtain an HDR image.
  10. An HDR image imaging apparatus, characterized by being based on the processor in the HDR image imaging system according to any one of claims 1 to 8, the apparatus comprising:
    a signal acquisition module, configured to acquire the optical signals collected by the photosensitive points of the image sensor within each photosensitive block, and to generate pixel values corresponding to the photosensitive points from the optical signals;
    a material image extraction module, configured to extract the pixel values of the photosensitive points located where the optical thin-film array has the same thickness in each photosensitive block to generate at least two material images, one optical thin-film array thickness corresponding to one material image;
    an image fusion module, configured to fuse the at least two material images to obtain an HDR image.
PCT/CN2018/124796 2018-12-28 2018-12-28 HDR image imaging method, apparatus and system WO2020133193A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880071467.1A CN111316634A (zh) 2018-12-28 2018-12-28 HDR image imaging method, apparatus and system
PCT/CN2018/124796 WO2020133193A1 (zh) 2018-12-28 2018-12-28 HDR image imaging method, apparatus and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/124796 WO2020133193A1 (zh) 2018-12-28 2018-12-28 HDR image imaging method, apparatus and system

Publications (1)

Publication Number Publication Date
WO2020133193A1 true WO2020133193A1 (zh) 2020-07-02

Family

ID=71129483

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/124796 WO2020133193A1 (zh) 2018-12-28 2018-12-28 HDR image imaging method, apparatus and system

Country Status (2)

Country Link
CN (1) CN111316634A (zh)
WO (1) WO2020133193A1 (zh)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101738840A (zh) * 2008-11-21 2010-06-16 索尼株式会社 图像摄取设备
CN105516698A (zh) * 2015-12-18 2016-04-20 广东欧珀移动通信有限公司 图像传感器的成像方法、成像装置和电子装置
US20170347042A1 (en) * 2016-05-24 2017-11-30 Semiconductor Components Industries, Llc Imaging systems with high dynamic range and phase detection pixels

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
JP2007329721A (ja) * 2006-06-08 2007-12-20 Matsushita Electric Ind Co Ltd 固体撮像装置
JP2014175553A (ja) * 2013-03-11 2014-09-22 Canon Inc 固体撮像装置およびカメラ
US9147704B2 (en) * 2013-11-11 2015-09-29 Omnivision Technologies, Inc. Dual pixel-sized color image sensors and methods for manufacturing the same
US9666631B2 (en) * 2014-05-19 2017-05-30 Omnivision Technologies, Inc. Photodiode and filter configuration for high dynamic range image sensor
US9888198B2 (en) * 2014-06-03 2018-02-06 Semiconductor Components Industries, Llc Imaging systems having image sensor pixel arrays with sub-pixel resolution capabilities
US9979907B2 (en) * 2015-09-18 2018-05-22 Sony Corporation Multi-layered high-dynamic range sensor
US10578739B2 (en) * 2015-10-01 2020-03-03 Ams Sensors Singapore Pte. Ltd. Optoelectronic modules for the acquisition of spectral and distance data
US9954020B1 (en) * 2016-12-30 2018-04-24 Omnivision Technologies, Inc. High-dynamic-range color image sensors and associated methods


Also Published As

Publication number Publication date
CN111316634A (zh) 2020-06-19


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18945148

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 15.11.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18945148

Country of ref document: EP

Kind code of ref document: A1