WO2019119295A1 - Image fusion method, apparatus, and drone - Google Patents

Image fusion method, apparatus, and drone Download PDF

Info

Publication number
WO2019119295A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
bit
frame
fusion
bit planes
Prior art date
Application number
PCT/CN2017/117457
Other languages
English (en)
French (fr)
Inventor
常坚
任伟
赵超
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2017/117457
Priority to CN201780017877.3A
Publication of WO2019119295A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20208High dynamic range [HDR] image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • the present application relates to the field of image processing, and more particularly to a method, apparatus and drone for image fusion.
  • a high-dynamic range image (HDRI) is an image with a very wide luminance range; compared with an ordinary image, it provides more dynamic range and more image detail.
  • Multi-frame exposure fusion technology refers to the synthesis of high dynamic range images using multiple frames of different exposure images of the same scene.
  • the motion of the operator or carrier may cause the camera to move, which can make the scenes of the multiple captured frames inconsistent and thereby blur the image; alternatively, the motion of an object in the scene may change its position across the frames, so that the final synthesized image exhibits ghosting.
  • the prior art calibrates image sequences by image registration to solve the image blurring phenomenon occurring in multi-frame exposure fusion technology, but this solution requires a large algorithm resource overhead.
  • the application provides a method, a device and a drone for image fusion, which can avoid the problem of image blurring during image fusion, and can also reduce the algorithm resource overhead.
  • a method for image fusion is provided, comprising: performing bit-plane layering on a frame of original image to obtain a plurality of bit planes of the original image, where the number of bits of the original image is M and M is an integer greater than 1; obtaining a multi-frame reconstructed image according to the plurality of bit planes, wherein the bit planes corresponding to different reconstructed images are not completely identical and the number of bits of each reconstructed image is N, N being a positive integer less than M; and performing image fusion on the multi-frame reconstructed image to obtain a fused image of the original image.
  • an image pickup apparatus is provided, comprising: an image sensor for collecting a frame of original image, the number of bits of the original image being M, M being an integer greater than 1; and a processor for performing bit-plane layering on the original image to obtain a plurality of bit planes of the original image. The processor is further configured to obtain a multi-frame reconstructed image according to the plurality of bit planes, wherein the bit planes corresponding to different reconstructed images are not completely identical and the number of bits of each frame of reconstructed image is N, N being a positive integer smaller than M. The processor is further configured to perform image fusion on the multi-frame reconstructed image to obtain a fused image of the original image.
  • a mobile device comprising a power system and an imaging device provided by the second aspect.
  • a drone is provided, including a body and a power system, the body being provided with a binocular camera, and the binocular camera is configured to perform the image fusion method provided by the first aspect.
  • an image pickup apparatus is provided, including a memory and a processor, the memory being configured to store instructions and the processor to execute the instructions stored in the memory; execution of the instructions stored in the memory causes the processor to perform the image fusion method provided by the first aspect.
  • in a sixth aspect, a chip is provided, including a processing module and a communication interface; the processing module is configured to control the communication interface to communicate with the outside, and the processing module is further configured to implement the image fusion method provided by the first aspect.
  • a computer readable storage medium having stored thereon a computer program that, when executed by a computer, causes the computer to implement the method of image fusion provided by the first aspect.
  • the computer may be the above-described imaging device.
  • a computer program product comprising instructions that, when executed by a computer, cause the computer to implement the method of image fusion provided by the first aspect.
  • according to the solution provided by the present application, bit-plane layering is performed on a single frame of image data, a multi-frame reconstructed image is reconstructed from the plurality of bit planes obtained by the layering, and image fusion is then performed on the multi-frame reconstructed image, whereby a fused image free of image blurring can be obtained.
  • since the multi-frame reconstructed image is derived directly from the same frame of image, there is no moving-object difference between the reconstructed frames; the consistency of the image scene is thereby guaranteed, the frames can be applied directly to exposure fusion without image registration, and the resulting fused image is free of ghosting.
  • the solution provided by the present application can thus avoid image blurring on the one hand and, on the other hand, can reduce the algorithm resource overhead of image fusion, because no deblurring or image registration needs to be performed on multiple frames before fusion.
  • Figure 1 is a schematic diagram of bit plane layering.
  • FIG. 2 is a schematic flowchart of a method for image fusion according to an embodiment of the present invention.
  • Figure 3 is another schematic diagram of bit plane layering.
  • FIG. 4 is another schematic flowchart of a method for image fusion according to an embodiment of the present invention.
  • FIG. 5 is a simulation result diagram of a method for image fusion according to an embodiment of the present invention.
  • FIG. 6 is a schematic block diagram of an image capturing apparatus according to an embodiment of the present invention.
  • FIG. 7 is a schematic block diagram of a drone provided by an embodiment of the present invention.
  • FIG. 8 is another schematic block diagram of an image capturing apparatus according to an embodiment of the present invention.
  • for ease of understanding and description, the concept of bit-plane layering is first described below in conjunction with FIG. 1.
  • a pixel is a number composed of bits. For example, in a 256-level grayscale image, each pixel is composed of 8 bits (i.e., 1 byte), as illustrated by "an 8-bit byte" in FIG. 1.
  • an image composed of 8-bit pixels is called an 8-bit image.
  • An 8-bit image can be considered to be composed of 8 1-bit planes.
  • as shown in FIG. 1, an 8-bit image is composed of bit plane 1 to bit plane 8, where the gray-level information on bit plane 1 is 2^0 to 2^1, that on bit plane 2 is 2^1 to 2^2, and so on.
  • the bit orders of the eight bit planes increase in sequence (or, viewed in the other direction, decrease): bit plane 1 has the lowest bit order, bit plane 2 a higher bit order than bit plane 1, bit plane 3 higher than bit plane 2, ..., and bit plane 8 higher than bit plane 7; that is, bit plane 8 has the highest bit order.
  • the process of going from an 8-bit image to 8 bit planes is called bit-plane layering.
  • in other words, by bit-plane layering a multi-bit image, multiple bit planes can be obtained.
  • it should be noted that FIG. 1 is only an example and not a limitation; the bit-plane layering involved in the embodiments of the present invention is not limited to the case shown in FIG. 1.
  • for example, an 8-bit image can be considered to be composed of eight 1-bit planes, or of fewer multi-bit planes; for instance, an 8-bit image can also be considered to be composed of four 2-bit planes, and so on, which is not limited by the embodiments of the present invention.
  • the 1-bit plane mentioned in the embodiments of the present invention refers to a bit plane whose gray-level information is 2^x1 to 2^x2 with x2 - x1 = 1; the multi-bit plane refers to a bit plane whose gray-level information is 2^x1 to 2^x2 with x2 - x1 > 1.
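  • as an illustrative aside (not part of the patent disclosure), bit-plane layering is a simple shift-and-mask operation. The following Python/NumPy sketch — all names here are chosen for illustration — splits an 8-bit image into 1-bit planes, or into 2-bit planes as in the alternative reading above:

```python
import numpy as np

def bit_planes(img: np.ndarray, bits_per_plane: int = 1) -> list:
    """Split an integer image into bit planes, lowest bit order first.

    For an 8-bit image, bits_per_plane=1 yields eight 1-bit planes
    (plane k holds bit k of every pixel); bits_per_plane=2 yields
    four 2-bit planes, matching the two readings of FIG. 1.
    """
    total_bits = img.dtype.itemsize * 8
    mask = (1 << bits_per_plane) - 1
    return [(img >> shift) & mask
            for shift in range(0, total_bits, bits_per_plane)]

img = np.array([[181, 42]], dtype=np.uint8)   # 181 = 0b10110101
planes = bit_planes(img, bits_per_plane=1)
assert planes[0][0, 0] == 1 and planes[1][0, 0] == 0  # bits 0 and 1 of 181
```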
  • FIG. 2 is a schematic flowchart of a method for image fusion according to an embodiment of the present invention.
  • the method can be performed by an imaging device, such as a binocular camera.
  • the method includes the following steps.
  • 210: Perform bit-plane layering on a frame of original image to obtain a plurality of bit planes of the original image, where the number of bits of the original image is M and M is a positive integer.
  • the original image can be acquired by an image sensor.
  • M is an integer greater than 8, that is, the number of bits of the original image is greater than 8, in other words, the original image is a multi-bit image.
  • M is equal to 10, ie the original image is a 10-bit image; or M is equal to 12, ie the original image is a 12-bit image.
  • the original image is a black and white image.
  • optionally, in some embodiments, a plurality of 1-bit planes are obtained by bit-plane layering the original image.
  • specifically, in this embodiment, bit-plane layering of the original image (an M-bit image) yields L bit planes, each a 1-bit plane, where L equals M.
  • optionally, in some embodiments, each bit plane obtained by bit-plane layering the original image is a multi-bit plane.
  • specifically, in this embodiment, bit-plane layering of the original image (an M-bit image) yields L bit planes, each a multi-bit plane, where L is a positive integer smaller than M.
  • FIG. 3 is another schematic diagram of bit-plane layering of an original image. As shown in FIG. 3, bit-plane layering is performed on an original image with M bits to obtain L bit planes. When each bit plane is a 1-bit plane, L equals M; when each bit plane is a multi-bit plane, L is a positive integer smaller than M — for example, if M is even and each bit plane is a 2-bit plane, M/2 bit planes are obtained.
  • as shown in FIG. 3, the bit order increases from bit plane 1 to bit plane L.
  • FIG. 3 is only an example and is not a limitation.
  • the numerical numbers of the respective bit planes in FIG. 3 are only for convenience of distinction and description, and do not limit the scope of protection of the embodiments of the present invention.
  • the bit planes obtained by bit-plane layering are generally arranged in order of bit order from high to low or from low to high, such as bit plane L to bit plane 1 (bit order high to low) or bit plane 1 to bit plane L (bit order low to high) as shown in FIG. 3.
  • 220: According to the plurality of bit planes obtained by bit-plane layering, obtain a multi-frame reconstructed image. Specifically, each frame of reconstructed image is reconstructed from a subset of consecutive bit planes among the plurality of bit planes (e.g., the L bit planes in FIG. 3) obtained in step 210.
  • the bit planes corresponding to different reconstructed images are not completely identical; for example, the bit planes used to reconstruct reconstructed image 1 are not exactly the same as those used to reconstruct reconstructed image 2.
  • as an example, taking FIG. 3, suppose that starting from bit plane L (the highest bit order), eight consecutive bit planes — bit plane L, bit plane L-1, ..., bit plane L-6, and bit plane L-7 — are used to reconstruct one frame; then, starting from bit plane L-1 (the second-highest bit order), bit planes L-1, L-2, ..., L-7, and L-8 are used to reconstruct another frame; and so on, until, starting from bit plane 8, bit planes 8, 7, ..., 2, and 1 are used to reconstruct the last frame.
  • the multi-frame reconstructed image obtained in step 220 is used for subsequent image fusion, so the numbers of bits of the reconstructed frames are identical.
  • for example, the number of bits of each reconstructed image is 8.
  • the number of bits of the reconstructed image is not limited. In an actual application, the number of bits of the reconstructed image may be determined according to a specific situation.
  • an 8-bit image indicates that the image has 256 (2^8) gray levels; a 2-bit image indicates that the image has only 4 gray levels; a 1-bit image indicates that the image has only two gray levels, i.e., each pixel is either black or white. In other words, the more bits an image has, the more gray levels it can represent, the more detail it carries, and the sharper it looks.
  • the number of bits of the fused image obtained by the reconstructed image fusion process depends on the number of bits of the reconstructed image. From this perspective, the higher the number of bits of the reconstructed image, the better. For example, the number of bits N of the reconstructed image is 7, 8, 9, 10 or 12.
  • at present, the image output by an image processing device is usually 8-bit, and image display devices are correspondingly designed to present 8-bit images.
  • in this scenario, if the number of bits of the fused image is not 8 (e.g., it is 10 or 12), the fused image needs to be mapped to an 8-bit image before output.
  • optionally, in some embodiments, the number of bits N of each frame of reconstructed image is 8; a fused image with 8 bits is then obtained directly, so the above mapping before output can be avoided and algorithm resource overhead reduced.
  • it should also be noted that, in each of the above embodiments, the number of bits N of each frame of reconstructed image is smaller than the number of bits M of the original image.
  • it should be understood that image fusion requires at least two frames of images, i.e., the number of reconstructed frames obtained in step 220 is at least 2; correspondingly, the number of bits N of the reconstructed image must be smaller than the number of bits M of the original image.
  • optionally, in some embodiments, the difference M minus N is greater than or equal to 2.
  • some image fusion processing techniques require at least three frames of images for fusion processing, which represent underexposure, normal exposure, and overexposure, respectively. That is, the number of multi-frame reconstructed images obtained in step 220 is at least 3, and correspondingly, the bit number N of the reconstructed image is at least 2 bits smaller than the bit number M of the original image.
  • the number of bits of the original image is 10, and the number of bits of the reconstructed image is 8; or, the number of bits of the original image is 12, and the number of bits of the reconstructed image is 8 or 10.
  • specifically, the number of consecutive bit planes needed to reconstruct one frame is determined by (1) the number of bits required of a frame of reconstructed image and (2) the number of bits of each bit plane obtained in step 210.
  • as an example, if a reconstructed image is required to have 8 bits and each bit plane obtained in step 210 is a 1-bit plane, eight consecutive bit planes are needed to reconstruct one frame; if each bit plane is a 2-bit plane, four consecutive bit planes are needed.
  • further, the segmentation strategy over the plurality of bit planes obtained in step 210, used to reconstruct the multiple reconstructed images, is determined by the following factors: (3) the number of bit planes required to reconstruct one frame, (4) the number of bit planes obtained in step 210, and (5) the number of images required for image fusion.
  • as an example, suppose the number of bit planes obtained in step 210 is 10, the number of bit planes required per reconstructed frame is 8, and the number of images required for subsequent fusion is 3. The segmentation strategy is then: starting from bit plane 10 (the highest bit order), reconstruct the first frame from bit planes 10, 9, ..., 3; starting from bit plane 9 (the second-highest bit order), reconstruct the second frame from bit planes 9, 8, ..., 2; starting from bit plane 8 (the third-highest bit order), reconstruct the third frame from bit planes 8, 7, ..., 1. In this way, the 3 frames of reconstructed images required for image fusion are obtained.
  • as another example, suppose the number of bit planes obtained in step 210 is 10, the number of bit planes required per reconstructed frame is 4, and the number of images required for subsequent fusion is 3. The segmentation strategy is then: starting from bit plane 10, reconstruct the first frame from bit planes 10, 9, 8, and 7; starting from bit plane 7, reconstruct the second frame from bit planes 7, 6, 5, and 4; starting from bit plane 4, reconstruct the third frame from bit planes 4, 3, 2, and 1. In this way, the 3 frames of reconstructed images required for image fusion are obtained.
  • as can be seen from the above, the segmentation strategy over the plurality of bit planes used to reconstruct the reconstructed images is not fixed and may be adjusted dynamically according to one or more of the following factors: the number of bit planes obtained by bit-plane layering of the original image, the number of bits of each bit plane, the number of bits required of each reconstructed frame, and the number of images required for image fusion.
  • 230: Perform image fusion on the multi-frame reconstructed image to obtain a fused image of the original image. Specifically, an existing image fusion technique, or one developed in the future, may be used to fuse the multi-frame reconstructed image into the corresponding fused image, which is not limited by the embodiments of the present invention.
  • in the embodiments of the present invention, bit-plane layering is performed on single-frame image data, a multi-frame reconstructed image is reconstructed from the bit planes obtained by the layering, and image fusion is then performed on the multi-frame reconstructed image to obtain a fused image.
  • as can be seen, the multi-frame reconstructed image in the embodiments of the present invention is derived directly from the same frame of image; there is therefore no moving-object difference between the reconstructed frames, the consistency of the image scene is guaranteed, the frames can be applied directly to exposure fusion without image registration, and the resulting fused image is free of ghosting. Hence, compared with the prior art, the solution provided by the present application needs no deblurring or image registration of multiple frames before fusion, which reduces the algorithm resource overhead of image fusion.
  • in addition, since the fused image obtained according to the solution provided by the present application is free of ghosting, the solution can also overcome the problem in the prior art that performing image fusion with differently exposed images of the same scene easily causes image blurring.
  • the original image in the embodiment of the present invention is a multi-bit image, and thus image details can be guaranteed.
  • the image obtained by image fusion of the multi-frame reconstructed image obtained in step 220 may be a high dynamic range image.
  • embodiments of the present invention can synthesize images with rich detail in both bright and dark regions; the present application can therefore solve the problem in the prior art that details in the dark and bright regions of an image are not distinct when a camera images a scene.
  • the solution provided by the embodiment of the present invention is not affected by the motion of the photographing device or the object to be photographed. Therefore, the solution provided by the embodiment of the present invention has a wider application range than the prior art.
  • the solution provided by the embodiment of the present invention can extend the application scenario of the exposure fusion technology to a high-speed motion system such as a drone or an unmanned driving system, and can also be extended to machine vision, robot, security monitoring, remote sensing imaging, and photogrammetry. In the field.
  • in addition, compared with enhancing the mechanical mounting stability of the photographing device or increasing the image acquisition speed, the solution provided by the embodiments of the present invention has incomparable advantages.
  • optionally, in some embodiments, step 220 specifically includes: in order of bit order from high to low, or from low to high, selecting S consecutive bit planes from the plurality of bit planes each time to form one frame of reconstructed image, S being a positive integer equal to or smaller than N.
  • specifically, taking FIG. 3 as an example and reconstructing in order of bit order from high to low (i.e., from bit plane L to bit plane 1), and assuming each reconstructed image is required to have N bits, the procedure for obtaining the multi-frame reconstructed image is as follows: the first time, starting from bit plane L, select S consecutive bit planes (including bit plane L) to reconstruct the first frame; the second time, starting from a bit plane X1 (not shown in FIG. 3) that is K bit planes below bit plane L, where the value of K can be preset, select S consecutive bit planes (including bit plane X1) to reconstruct the second frame; the third time, starting from a bit plane X2 (not shown in FIG. 3) that is K bit planes below bit plane X1, select S consecutive bit planes (including bit plane X2) to reconstruct the third frame; and so on, until, starting from bit plane S (not shown in FIG. 3), the S consecutive bit planes from bit plane S down to bit plane 1 are selected to reconstruct the last frame, where bit plane S is K bit planes below the highest-order bit plane used to reconstruct the previous frame.
  • specifically, the value of K in the present example may be determined by at least one of the following factors: the number of bit planes obtained by bit-plane layering of the original image, the number of bits of each bit plane, the number of bits required of each reconstructed frame, and the number of images required for image fusion.
  • specifically, in this example, when each bit plane is a 1-bit plane, S equals N; when each bit plane is a multi-bit plane, S is smaller than N.
  • optionally, in some embodiments, each bit plane is a 1-bit plane, i.e., the plurality of bit planes obtained by bit-plane layering the original image are M bit planes, and S equals N; that is, step 220 specifically includes: in order of bit order from high to low or from low to high, selecting N consecutive bit planes from the plurality of bit planes each time to form one frame of reconstructed image with N bits.
  • specifically, as an example, step 220 includes: in order of bit order from high to low or from low to high, selecting the i-th to the (i+(N-1))-th bit planes among the M bit planes to form the i-th frame of reconstructed image, i being 1, ..., M-(N-1).
  • as can be seen, in this embodiment the multi-frame reconstructed image used for image fusion is derived from the same frame of original image, so there is no moving-object difference between the reconstructed frames; the consistency of the image scene is thereby guaranteed, the frames can be applied directly to exposure fusion without image registration, and the resulting fused image is free of ghosting.
  • in addition, since the original image is a multi-bit image and each reconstructed image is at least an 8-bit image, the image obtained by fusing the multi-frame reconstructed image preserves image detail. The solution provided by the present application can therefore solve the problem that details in the dark and bright regions of a low-dynamic-range image are not distinct, while overcoming the image blurring easily caused when a high-dynamic-range image is synthesized from differently exposed images of the same scene.
  • image fusion can be implemented in a variety of ways.
  • step 230 specifically includes performing mean fusion on the multi-frame reconstructed image to obtain the fused image.
  • specifically, assuming step 220 yields P frames of reconstructed images, the pixels at corresponding positions across the P frames are averaged to obtain the pixel value at the corresponding position in the fused image.
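  • as a sketch of this step only (function and variable names are assumed here, not taken from the patent), mean fusion is a per-pixel average over the reconstructed frames:

```python
import numpy as np

def mean_fusion(frames: list) -> np.ndarray:
    """Average the P reconstructed frames pixel-wise (mean fusion)."""
    stack = np.stack(frames).astype(np.float32)   # shape (P, H, W)
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)

# `frames` as produced by the reconstruction sketch above:
# fused = mean_fusion(frames)
```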
  • optionally, in some embodiments, step 230 specifically includes: determining weight information of the corresponding image according to the contrast information or grayscale information of each frame of the multi-frame reconstructed image; and performing weighted fusion on the multi-frame reconstructed image according to the weight information to obtain the fused image.
  • specifically, assuming step 220 yields P frames of reconstructed images, the weight information of each frame is first determined according to its contrast information or grayscale information; the pixels at corresponding positions across the P frames are then weighted and summed based on the determined weight information to obtain the pixel value at the corresponding position in the fused image.
  • optionally, as one implementation, determining the weight information of the corresponding image according to the contrast information or grayscale information of each frame includes: determining the weight information of the corresponding image according to the contrast information of each frame of the multi-frame reconstructed image.
  • optionally, as another implementation, determining the weight information of the corresponding image according to the contrast information or grayscale information of each frame includes: determining a bright region and/or a dark region of one frame of reconstructed image; assigning first grayscale information to the bright region and/or second grayscale information to the dark region; and determining the weight information on the bright region according to the first grayscale information and/or the weight information on the dark region according to the second grayscale information.
  • specifically, the bright and dark regions of the multi-frame reconstructed image are first estimated, and grayscale information is then allocated to each: first grayscale information to the bright region and second grayscale information to the dark region. The weight value of the bright region is determined from the grayscale information allocated to it, and likewise for the dark region. Specifically, in bright regions the pixel values from the low-bit-plane images are assigned high weight, while in dark regions the pixel values from the high-bit-plane images are assigned high weight.
  • step 230 specifically includes performing nonlinear mapping fusion on the multi-frame reconstructed image to obtain the fused image.
  • specifically, assuming step 220 yields P frames of reconstructed images, the weight information of each frame is first constructed according to a nonlinear function; the pixels at corresponding positions across the P frames are then weighted and summed based on the determined weight information to obtain the pixel value at the corresponding position in the fused image.
  • the non-linear function for constructing the weight information may be a non-linear function in the existing non-linear mapping fusion technology, which is not limited by the embodiment of the present invention.
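  • as one hedged example of such a nonlinear weighting function — a Gaussian "well-exposedness" curve of the kind used in Mertens-style exposure fusion, which is only one possible choice and is not prescribed by the patent:

```python
import numpy as np

def nonlinear_fusion(frames: list, mid: float = 127.5, sigma: float = 50.0) -> np.ndarray:
    """Weight pixels by a Gaussian centered on mid-gray, then blend."""
    stack = np.stack(frames).astype(np.float32)
    weights = np.exp(-((stack - mid) ** 2) / (2.0 * sigma ** 2)) + 1e-6
    weights /= weights.sum(axis=0, keepdims=True)            # normalize over frames
    return np.clip((weights * stack).sum(axis=0), 0, 255).astype(np.uint8)
```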
  • FIG. 4 is another schematic flowchart of a method for image fusion according to an embodiment of the present invention.
  • the method can be performed by an imaging device, in particular by a binocular camera.
  • the method includes the following steps.
  • 410: Input a frame of original image with M bits, M being an integer equal to or greater than 10. Specifically, the binocular camera reads the original image.
  • 420: Obtain multiple frames of 8-bit images by bit-plane layering the original image. Specifically, the original image is divided in binary into L layers of bit planes, labeled L, L-1, ..., 2, 1 in order of bit order from high to low, as shown in FIG. 3. The L layers of bit planes are then segmented to obtain L-7 8-bit images.
  • 430: Fuse the multiple frames of 8-bit images to obtain a fused image. Specifically, the L-7 images obtained in step 420 are fused using a mean fusion method or a weighted fusion method.
  • the specific method for image fusion is described in detail above, and will not be described here.
  • the simulation results of the image fusion method provided by the embodiment of the present invention are shown in FIG. 5.
  • Scene 1: tree shade, high gain (the three images in the first row of FIG. 5).
  • the middle image of the first row in FIG. 5 is the image reconstructed from the lower 8 bit planes (e.g., bit plane 8 to bit plane 1 in FIG. 3) of the bit planes obtained by bit-plane layering; the right-hand image of the first row is the image reconstructed from the upper 8 bit planes (e.g., bit plane L to bit plane L-7 in FIG. 3); the left-hand image of the first row is the fused image (denoted fused image 1) obtained by fusing the middle and right-hand images.
  • as can be seen from FIG. 5, fused image 1 is rich in detail in both bright and dark regions.
  • Scene 2: street without direct headlight illumination, low gain (the three images in the second row of FIG. 5).
  • the middle image of the second row in FIG. 5 is the image reconstructed from the lower 8 bit planes (e.g., bit plane 8 to bit plane 1 in FIG. 3) of the bit planes obtained by bit-plane layering; the right-hand image of the second row is the image reconstructed from the upper 8 bit planes (e.g., bit plane L to bit plane L-7 in FIG. 3); the left-hand image of the second row is the fused image (denoted fused image 2) obtained by fusing the middle and right-hand images.
  • as can be seen from FIG. 5, fused image 2 is rich in detail in both bright and dark regions.
  • Scene 3: street without direct headlight illumination, high gain (the three images in the third row of FIG. 5).
  • the middle image of the third row in FIG. 5 is the image reconstructed from the lower 8 bit planes (e.g., bit plane 8 to bit plane 1 in FIG. 3) of the bit planes obtained by bit-plane layering; the right-hand image of the third row is the image reconstructed from the upper 8 bit planes (e.g., bit plane L to bit plane L-7 in FIG. 3); the left-hand image of the third row is the fused image (denoted fused image 3) obtained by fusing the middle and right-hand images.
  • as can be seen from FIG. 5, fused image 3 is rich in detail in both bright and dark regions.
  • as the simulation results in FIG. 5 show, the embodiments of the present invention can synthesize images with rich detail in both bright and dark regions; the present application can therefore solve the problem in the prior art that details in the dark and bright regions of an image are not distinct when a camera images a scene.
  • in summary, the embodiments of the present invention perform bit-plane layering on single-frame image data, reconstruct a multi-frame reconstructed image from the bit planes obtained by the layering, and then perform image fusion on the multi-frame reconstructed image to obtain a fused image. Because the multi-frame reconstructed image is derived directly from the same frame of image, there is no moving-object difference between the reconstructed frames, the consistency of the image scene is guaranteed, and the frames can be applied directly to exposure fusion without image registration; compared with the prior art, the solution provided by the present application can therefore reduce the algorithm resource overhead of image fusion.
  • in addition, the fused image obtained according to the solution provided by the present application is free of ghosting, which overcomes the problem in the prior art that image fusion using differently exposed images of the same scene easily causes image blurring.
  • moreover, the fused image obtained by the embodiments of the present invention is rich in detail in both bright and dark regions, which solves the problem in the prior art that details in the dark and bright regions of an image are not distinct when a camera images a scene.
  • the solution provided by the embodiment of the invention can extend the application scenario of the exposure fusion technology to high-speed motion systems such as drones, automatic driving, and unmanned driving, and can also be extended to machine vision, robot, security monitoring, remote sensing imaging, and photography. Measurement and other fields.
  • FIG. 6 is a schematic block diagram of an image capturing apparatus 600 according to an embodiment of the present invention.
  • the image capturing apparatus 600 includes the following components.
  • the image sensor 610 is configured to collect a frame of the original image, where the number of bits of the original image is M, and M is an integer greater than 1.
  • the original image is a multi-bit image
  • the image sensor 610 is an image sensor that can output a multi-bit image.
  • the processor 620 is configured to: perform bit-plane layering on the original image to obtain a plurality of bit planes of the original image; obtain a multi-frame reconstructed image according to the plurality of bit planes, where the bit planes corresponding to different reconstructed images are not completely identical and the number of bits of each frame of reconstructed image is N, N being a positive integer smaller than M; and perform image fusion on the multi-frame reconstructed image to obtain a fused image of the original image.
  • optionally, in some embodiments, the processor 620 is specifically configured to select, in order of bit order from high to low or from low to high, S consecutive bit planes from the plurality of bit planes each time to form one frame of reconstructed image, S being a positive integer equal to or smaller than N.
  • optionally, in some embodiments, each of the bit planes is a 1-bit plane, the plurality of bit planes are M bit planes, and S equals N.
  • optionally, in some embodiments, the processor 620 is specifically configured to select, in order of bit order from high to low or from low to high, the i-th to the (i+(N-1))-th bit planes among the M bit planes to form the i-th frame of reconstructed image, i being 1, ..., M-(N-1).
  • each of the bit planes is a multi-bit plane, the total number of the plurality of bit planes being less than M, and S being less than N.
  • the processor 620 is specifically configured to perform mean fusion on the multi-frame reconstructed image to obtain the fused image.
  • the processor 620 is configured to determine, according to the contrast information or the grayscale information of each frame image in the multi-frame reconstructed image, weight information of the corresponding image; according to the weight information, The multi-frame reconstructed image is subjected to weighted fusion to obtain the fused image.
  • the processor 620 is specifically configured to: determine a bright region and/or a dark region of a frame of the reconstructed image; assign the first grayscale information to the bright region, and/or, for the dark The region assigns second grayscale information; determines a weight on the bright region based on the first grayscale information, and/or determines a weight on the dark region based on the second grayscale information.
  • the processor 620 is specifically configured to perform nonlinear mapping fusion on the multi-frame reconstructed image to obtain the fused image.
  • optionally, in some embodiments, the difference M minus N is greater than or equal to 2.
  • optionally, in some embodiments, M is an integer greater than or equal to 10 and N is an integer greater than or equal to 8.
  • the camera device 600 is a binocular camera device.
  • the embodiment of the present invention further provides a mobile device, which includes the camera device 600 provided by the foregoing embodiment and a power system for moving the mobile device.
  • the mobile device is any one of the following: a drone, an unmanned vehicle, a robot, a surveillance camera, a remote sensing imaging device.
  • FIG. 7 is a schematic block diagram of a drone 700 according to an embodiment of the present invention.
  • the UAV includes a body 710, a power system 720, and a binocular camera 730 disposed on the body 710.
  • the binocular camera 730 is configured to perform the method of image fusion provided by the above method embodiments.
  • the binocular camera 730 may also correspond to the camera device 600 provided by the device embodiment above.
  • an embodiment of the present invention further provides an image capturing apparatus 800.
  • the image capturing apparatus 800 includes a processor 810 and a memory 820.
  • the memory 820 is configured to store instructions, and the processor 810 is configured to execute the instructions stored in the memory 820; execution of those instructions causes the processor 810 to perform the image fusion method provided by the method embodiments above.
  • the camera device 800 further includes a communication interface 830, which is further configured to output a fused image obtained by the processor 810.
  • the embodiment of the present invention further provides a computer storage medium, on which a computer program is stored, and when the computer program is executed by a computer, the computer executes the image fusion method provided by the above method embodiment.
  • Embodiments of the present invention also provide a computer program product comprising instructions, which when executed by a computer, cause a computer to perform the method of image fusion provided by the above method embodiments.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • the computer instructions can be stored in a computer readable storage medium or transferred from one computer readable storage medium to another computer readable storage medium, for example, the computer instructions can be from a website site, computer, server or data center Transmission to another website site, computer, server or data center via wired (eg coaxial cable, fiber optic, digital subscriber line (DSL)) or wireless (eg infrared, wireless, microwave, etc.).
  • the computer readable storage medium can be any available media that can be accessed by a computer or a data storage device such as a server, data center, or the like that includes one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (such as a digital video disc (DVD)), or a semiconductor medium (such as a solid state disk (SSD)).
  • the disclosed systems, devices, and methods may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of units is only a logical function division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processor, or each unit may exist physically separately, or two or more units may be integrated into one unit.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

An image fusion method, an image fusion apparatus, and a drone are provided. The method includes: performing bit-plane layering on a frame of original image to obtain multiple bit planes of the original image, the number of bits of the original image being M, M being an integer greater than 1; obtaining multiple frames of reconstructed images according to the multiple bit planes, where the bit planes corresponding to different reconstructed images are not completely identical and the number of bits of each frame of reconstructed image is N, N being a positive integer smaller than M; and performing image fusion on the multiple frames of reconstructed images to obtain a fused image of the original image. Since the multiple frames of reconstructed images are derived directly from the same frame of image, the consistency of the image scene is guaranteed; they can be applied directly to exposure fusion without image registration, and the resulting fused image is free of ghosting. Compared with the prior art, image blurring can thus be avoided while detail in dark and bright areas is brought out, and, on the other hand, the algorithm resource overhead of image fusion can be reduced.

Description

Image fusion method, apparatus, and drone
Copyright Notice
The disclosure of this patent document contains material subject to copyright protection. The copyright belongs to the copyright owner. The copyright owner does not object to the reproduction by anyone of the patent document or the patent disclosure as it appears in the official records and files of the Patent and Trademark Office.
Technical Field
The present application relates to the field of image processing, and more particularly to an image fusion method, an image fusion apparatus, and a drone.
Background
A high-dynamic range image (HDRI) is an image with a very wide luminance range. Compared with an ordinary image, a high-dynamic range image provides more dynamic range and more image detail.
At present, multi-frame exposure fusion is used to synthesize high-dynamic range images. Multi-frame exposure fusion refers to synthesizing a high-dynamic range image from multiple frames of differently exposed images of the same scene, which requires the scene to remain the same during synthesis. In practice, motion of the operator or of the carrier causes the camera to move, making the scenes of the captured frames inconsistent and thereby blurring the image; alternatively, motion of objects in the scene changes their positions across the frames, producing ghosting in the final synthesized image. The prior art calibrates the image sequence by image registration to resolve the blurring that occurs in multi-frame exposure fusion, but this solution requires a large algorithm resource overhead.
In view of the above problems, an image fusion method that can overcome image blurring is needed.
Summary
The present application provides an image fusion method, an image fusion apparatus, and a drone, which can avoid image blurring during image fusion and can also reduce algorithm resource overhead.
In a first aspect, an image fusion method is provided, including: performing bit-plane layering on a frame of original image to obtain multiple bit planes of the original image, the number of bits of the original image being M, M being an integer greater than 1; obtaining multiple frames of reconstructed images according to the multiple bit planes, where the bit planes corresponding to different reconstructed images are not completely identical and the number of bits of each frame of reconstructed image is N, N being a positive integer smaller than M; and performing image fusion on the multiple frames of reconstructed images to obtain a fused image of the original image.
In a second aspect, an imaging apparatus is provided, including: an image sensor configured to collect a frame of original image, the number of bits of the original image being M, M being an integer greater than 1; and a processor configured to perform bit-plane layering on the original image to obtain multiple bit planes of the original image. The processor is further configured to obtain multiple frames of reconstructed images according to the multiple bit planes, where the bit planes corresponding to different reconstructed images are not completely identical and the number of bits of each frame of reconstructed image is N, N being a positive integer smaller than M. The processor is further configured to perform image fusion on the multiple frames of reconstructed images to obtain a fused image of the original image.
In a third aspect, a mobile device is provided, including a power system and the imaging apparatus provided by the second aspect.
In a fourth aspect, a drone is provided, including a body and a power system, the body being provided with a binocular camera, and the binocular camera is configured to perform the image fusion method provided by the first aspect.
In a fifth aspect, an imaging apparatus is provided, including a memory and a processor, the memory being configured to store instructions and the processor to execute the instructions stored in the memory; execution of the instructions stored in the memory causes the processor to perform the image fusion method provided by the first aspect.
In a sixth aspect, a chip is provided, including a processing module and a communication interface; the processing module is configured to control the communication interface to communicate with the outside, and the processing module is further configured to implement the image fusion method provided by the first aspect.
In a seventh aspect, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a computer, the computer program causes the computer to implement the image fusion method provided by the first aspect. Specifically, the computer may be the imaging apparatus described above.
In an eighth aspect, a computer program product containing instructions is provided; when executed by a computer, the instructions cause the computer to implement the image fusion method provided by the first aspect.
According to the solution provided by the present invention, bit-plane layering is performed on single-frame image data, multiple frames of reconstructed images are reconstructed from the bit planes obtained by the layering, and image fusion is then performed on those reconstructed images, so that a fused image free of image blurring can be obtained. Since the multiple frames of reconstructed images are derived directly from the same frame of image, there is no moving-object difference between them, which guarantees the consistency of the image scene; they can be applied directly to exposure fusion without image registration, and the resulting fused image is free of ghosting. Therefore, compared with the prior art, the solution provided by the present application can avoid image blurring on the one hand and, on the other hand, since no deblurring or image registration needs to be performed on multiple frames before fusion, can reduce the algorithm resource overhead of image fusion.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of bit-plane layering.
FIG. 2 is a schematic flowchart of an image fusion method according to an embodiment of the present invention.
FIG. 3 is another schematic diagram of bit-plane layering.
FIG. 4 is another schematic flowchart of an image fusion method according to an embodiment of the present invention.
FIG. 5 shows simulation results of an image fusion method according to an embodiment of the present invention.
FIG. 6 is a schematic block diagram of an imaging apparatus according to an embodiment of the present invention.
FIG. 7 is a schematic block diagram of a drone according to an embodiment of the present invention.
FIG. 8 is another schematic block diagram of an imaging apparatus according to an embodiment of the present invention.
Detailed Description
For ease of understanding and description, the concept of bit-plane layering is first introduced below in conjunction with FIG. 1.
A pixel is a number composed of bits. For example, in a 256-level grayscale image, each pixel is composed of 8 bits (i.e., 1 byte), as illustrated by "an 8-bit byte" in FIG. 1.
An image composed of 8-bit pixels is called an 8-bit image. An 8-bit image can be regarded as being composed of eight 1-bit planes. As shown in FIG. 1, an 8-bit image is composed of bit plane 1 to bit plane 8, where the gray-level information on bit plane 1 is 2^0 to 2^1, that on bit plane 2 is 2^1 to 2^2, and so on. The bit orders of these eight bit planes increase in sequence (or, viewed in the other direction, decrease): as shown in FIG. 1, bit plane 1 has the lowest bit order, bit plane 2 a higher bit order than bit plane 1, bit plane 3 higher than bit plane 2, ..., and bit plane 8 higher than bit plane 7, i.e., bit plane 8 has the highest bit order.
The process of going from an 8-bit image to 8 bit planes is called bit-plane layering. In other words, multiple bit planes can be obtained by bit-plane layering a multi-bit image.
It should be noted that FIG. 1 is only an example and not a limitation; the bit-plane layering involved in the embodiments of the present invention is not limited to the case shown in FIG. 1. For example, an 8-bit image can be regarded as composed of eight 1-bit planes, or of fewer multi-bit planes, e.g., four 2-bit planes, and so on, which is not limited by the embodiments of the present invention.
It should also be noted that a 1-bit plane mentioned in the embodiments of the present invention refers to a bit plane whose gray-level information is 2^x1 to 2^x2 with x2 - x1 = 1, and a multi-bit plane refers to a bit plane whose gray-level information is 2^x1 to 2^x2 with x2 - x1 > 1.
FIG. 2 is a schematic flowchart of an image fusion method according to an embodiment of the present invention. For example, the method may be performed by an imaging device, such as a binocular camera. As shown in FIG. 2, the method includes the following steps.
210: Perform bit-plane layering on a frame of original image to obtain multiple bit planes of the original image, the number of bits of the original image being M, M being a positive integer.
Specifically, the original image may be acquired by an image sensor.
Specifically, M is an integer greater than 8, i.e., the number of bits of the original image is greater than 8; in other words, the original image is a multi-bit image.
Optionally, in some embodiments, M equals 10, i.e., the original image is a 10-bit image; or M equals 12, i.e., the original image is a 12-bit image.
Optionally, in some embodiments, the original image is a black-and-white image.
Optionally, in some embodiments, multiple 1-bit planes are obtained by bit-plane layering the original image.
Specifically, in this embodiment, bit-plane layering of the original image (an M-bit image) yields L bit planes, each being a 1-bit plane, where L equals M.
Optionally, in some embodiments, each bit plane obtained by bit-plane layering the original image is a multi-bit plane.
Specifically, in this embodiment, bit-plane layering of the original image (an M-bit image) yields L bit planes, each being a multi-bit plane, where L is a positive integer smaller than M.
FIG. 3 is another schematic diagram of bit-plane layering of an original image. As shown in FIG. 3, bit-plane layering is performed on an original image with M bits to obtain L bit planes. When each bit plane is a 1-bit plane, L equals M; when each bit plane is a multi-bit plane, L is a positive integer smaller than M. For example, if M is even and each bit plane is a 2-bit plane, M/2 bit planes are obtained.
As shown in FIG. 3, the bit order increases from bit plane 1 to bit plane L.
It should be noted that FIG. 3 is only an example and not a limitation. The numeric labels of the bit planes in FIG. 3 are merely for ease of distinction and description and do not limit the protection scope of the embodiments of the present invention.
It should also be noted that the bit planes obtained by bit-plane layering are usually arranged in order of bit order from high to low or from low to high, e.g., bit plane L to bit plane 1 (bit order high to low) or bit plane 1 to bit plane L (bit order low to high), as shown in FIG. 3.
220: Obtain multiple frames of reconstructed images according to the multiple bit planes obtained by bit-plane layering.
Specifically, each frame of reconstructed image is an image reconstructed from a subset of consecutive bit planes among the bit planes obtained in step 210 (e.g., the L bit planes in FIG. 3). The bit planes corresponding to different reconstructed images are not completely identical; for example, the bit planes used to reconstruct reconstructed image 1 are not exactly the same as those used to reconstruct reconstructed image 2.
As an example, taking FIG. 3, suppose that starting from bit plane L (the highest bit order), eight consecutive bit planes — bit plane L, bit plane L-1, ..., bit plane L-6, and bit plane L-7 — are used to reconstruct one frame of reconstructed image; then, starting from bit plane L-1 (the second-highest bit order), eight consecutive bit planes — bit plane L-1, bit plane L-2, ..., bit plane L-7, and bit plane L-8 — are used to reconstruct another frame; and so on; finally, starting from bit plane 8 (a lower bit order), eight consecutive bit planes — bit plane 8, bit plane 7, ..., bit plane 2, and bit plane 1 — are used to reconstruct the last frame.
The multiple frames of reconstructed images obtained in step 220 are used for subsequent image fusion, so their numbers of bits are identical; for example, each frame of reconstructed image has 8 bits.
It should be noted that the embodiments of the present invention do not limit the number of bits of the reconstructed image; in practical applications it may be determined according to the specific situation.
It should be understood that an 8-bit image has 256 (2^8) gray levels; a 2-bit image has only 4 gray levels; and a 1-bit image has only two gray levels, i.e., each pixel is either black or white. In other words, the more bits an image has, the more gray levels it can represent, the more detail it carries, and the sharper it looks.
The number of bits of the fused image obtained by fusing the reconstructed images depends on the number of bits of the reconstructed images. From this perspective, the more bits the reconstructed images have, the better. For example, the number of bits N of the reconstructed image is 7, 8, 9, 10, or 12.
At present, the image output by an image processing device is usually 8-bit, and image display devices are correspondingly designed to present 8-bit images. In this scenario, if the number of bits of the fused image obtained from the reconstructed images is not 8 (e.g., it is 10 or 12), the fused image needs to be mapped to an 8-bit image before the image is output.
Optionally, in some embodiments, the number of bits N of each frame of reconstructed image is 8.
In this embodiment, by designing the reconstructed images to have 8 bits, an 8-bit fused image is obtained directly, so the mapping described above can be avoided before image output and algorithm resource overhead can be reduced.
It should also be noted that, in each of the above embodiments, the number of bits N of each frame of reconstructed image is smaller than the number of bits M of the original image.
It should be understood that image fusion requires at least two frames of images, i.e., the number of reconstructed frames obtained in step 220 is at least 2; correspondingly, the number of bits N of the reconstructed image must be smaller than the number of bits M of the original image.
Optionally, in some embodiments, the difference M minus N is greater than or equal to 2.
It should also be understood that some image fusion techniques require at least three frames, representing underexposure, normal exposure, and overexposure respectively; i.e., the number of reconstructed frames obtained in step 220 is at least 3 and, correspondingly, N is at least 2 bits smaller than M.
For example, the original image has 10 bits and the reconstructed image has 8 bits; or the original image has 12 bits and the reconstructed image has 8 or 10 bits.
Specifically, how many consecutive bit planes are needed to reconstruct one frame of reconstructed image is determined by factor ① and factor ②: ① the number of bits required of a frame of reconstructed image, and ② the number of bits of each bit plane obtained in step 210.
As an example, if a reconstructed image is required to have 8 bits and each bit plane obtained in step 210 is a 1-bit plane, eight consecutive bit planes are needed to reconstruct one frame of reconstructed image.
As another example, if a reconstructed image is required to have 8 bits and each bit plane obtained in step 210 is a 2-bit plane, four consecutive bit planes are needed to reconstruct one frame of reconstructed image.
Specifically, the segmentation strategy over the bit planes obtained in step 210, used to reconstruct the multiple reconstructed images, is determined by factor ③, factor ④, and factor ⑤: ③ the number of bit planes required to reconstruct one frame of reconstructed image, ④ the number of bit planes obtained in step 210, and ⑤ the number of images required for image fusion.
As an example, suppose the number of bit planes obtained in step 210 is 10, the number of bit planes required per reconstructed frame is 8, and the number of images required for subsequent image fusion is 3; the segmentation strategy over the bit planes obtained in step 210 is then:
Starting from bit plane 10 (the highest bit order), reconstruct the first frame of reconstructed image from bit planes 10, 9, ..., 3; starting from bit plane 9 (the second-highest bit order), reconstruct the second frame from bit planes 9, 8, ..., 2; starting from bit plane 8 (the third-highest bit order), reconstruct the third frame from bit planes 8, 7, ..., 1. In this way, the 3 frames of reconstructed images required for image fusion are obtained.
As another example, suppose the number of bit planes obtained in step 210 is 10, the number of bit planes required per reconstructed frame is 4, and the number of images required for subsequent image fusion is 3; the segmentation strategy over the bit planes obtained in step 210 is then:
Starting from bit plane 10 (the highest bit order), reconstruct the first frame from bit planes 10, 9, 8, and 7; starting from bit plane 7, obtain the second frame from bit planes 7, 6, 5, and 4; starting from bit plane 4, reconstruct the third frame from bit planes 4, 3, 2, and 1. In this way, the 3 frames of reconstructed images required for image fusion are obtained.
As can be seen from the above, the segmentation strategy over the multiple bit planes used in the embodiments of the present invention to reconstruct the reconstructed images is not fixed and may be adjusted dynamically according to one or more of the following factors: the number of bit planes obtained by bit-plane layering of the original image, the number of bits of each bit plane, the number of bits required of each frame of reconstructed image, and the number of images required for image fusion.
Only two segmentation strategies for reconstructing the reconstructed images over the multiple bit planes have been described above by way of example; to avoid redundancy they are not further enumerated, but all variations that a person skilled in the art can conceive on the basis of this embodiment fall within the protection scope of the present application.
230: Perform image fusion on the multiple frames of reconstructed images to obtain a fused image of the original image.
Specifically, an existing image fusion technique, or an image fusion technique developed in the future, may be used to fuse the multiple frames of reconstructed images into the corresponding fused image, which is not limited by the embodiments of the present invention.
The embodiments of the present invention perform bit-plane layering on single-frame image data, reconstruct multiple frames of reconstructed images from the bit planes obtained by the layering, and then perform image fusion on the reconstructed images to obtain a fused image. As can be seen, the multiple frames of reconstructed images in the embodiments of the present invention are derived directly from the same frame of image; there is therefore no moving-object difference between them, the consistency of the image scene is guaranteed, the images can be applied directly to exposure fusion without image registration, and the resulting fused image is free of ghosting. Hence, compared with the prior art, the solution provided by the present application needs no deblurring or image registration of multiple frames before fusion, which reduces the algorithm resource overhead of image fusion.
In addition, since the fused image obtained according to the solution provided by the present application is free of ghosting, the solution can also overcome the problem in the prior art that performing image fusion with differently exposed images of the same scene easily causes image blurring.
It should be understood that the original image in the embodiments of the present invention is a multi-bit image, so image detail can be guaranteed. Correspondingly, the image obtained by fusing the multiple frames of reconstructed images obtained in step 220 may be a high-dynamic range image. In other words, the embodiments of the present invention can synthesize images with rich detail in both bright and dark regions; the present application can therefore solve the problem in the prior art that details in the dark and bright regions of an image are not distinct when a camera images a scene.
As can be seen from the above, the solution provided by the embodiments of the present invention is not affected by motion of the photographing device or of the photographed object; compared with the prior art, it therefore has a wider application range. For example, the solution provided by the embodiments of the present invention can extend the application scenarios of exposure fusion to high-speed motion systems such as drones and unmanned driving, as well as to fields such as machine vision, robotics, security monitoring, remote sensing imaging, and photogrammetry.
In addition, compared with enhancing the mechanical mounting stability of the photographing device or increasing the image acquisition speed, the solution provided by the embodiments of the present invention has incomparable advantages.
Optionally, in some embodiments, step 220 specifically includes: in order of bit order from high to low, or from low to high, selecting S consecutive bit planes from the multiple bit planes each time to form one frame of reconstructed image, S being a positive integer equal to or smaller than N.
Specifically, taking FIG. 3 as an example and reconstructing in order of bit order from high to low (i.e., from bit plane L to bit plane 1), and assuming each reconstructed image is required to have N bits, the procedure for obtaining the multiple frames of reconstructed images is as follows: the first time, starting from bit plane L, select S consecutive bit planes (including bit plane L) to reconstruct the first frame; the second time, starting from a bit plane X1 (not shown in FIG. 3) that is K bit planes below bit plane L, where the value of K can be preset, select S consecutive bit planes (including bit plane X1) to reconstruct the second frame; the third time, starting from a bit plane X2 (not shown in FIG. 3) that is K bit planes below bit plane X1, select S consecutive bit planes (including bit plane X2) to reconstruct the third frame; and so on, until, starting from bit plane S (not shown in FIG. 3), the S consecutive bit planes from bit plane S down to bit plane 1 are selected to reconstruct the last frame, where bit plane S is K bit planes below the highest-order bit plane used to reconstruct the previous frame.
Specifically, the value of K in this example may be determined by at least one of the following factors: the number of bit planes obtained by bit-plane layering of the original image, the number of bits of each bit plane, the number of bits required of each frame of reconstructed image, and the number of images required for image fusion.
Specifically, in this example, when each bit plane is a 1-bit plane, S equals N; when each bit plane is a multi-bit plane, S is smaller than N.
Optionally, in some embodiments, each bit plane is a 1-bit plane, i.e., the multiple bit planes obtained by bit-plane layering the original image are M bit planes, and S equals N; that is, step 220 specifically includes: in order of bit order from high to low or from low to high, selecting N consecutive bit planes from the multiple bit planes each time to form one frame of reconstructed image with N bits.
Specifically, as an example, step 220 specifically includes: in order of bit order from high to low or from low to high, selecting the i-th to the (i+(N-1))-th bit planes among the M bit planes to form the i-th frame of reconstructed image, i being 1, ..., M-(N-1).
As can be seen from the above, in this embodiment the multiple frames of reconstructed images used for image fusion are derived from the same frame of original image, so there is no moving-object difference between them; the consistency of the image scene is thereby guaranteed, the images can be applied directly to exposure fusion without image registration, and the resulting fused image is free of ghosting. In addition, since the original image is a multi-bit image and each reconstructed image is at least an 8-bit image, the image obtained by fusing the reconstructed images preserves image detail. The solution provided by the present application can therefore solve the problem that details in the dark and bright regions of a low-dynamic-range image are not distinct, while overcoming the image blurring easily caused when a high-dynamic-range image is synthesized from differently exposed images of the same scene.
Specifically, in step 230, image fusion may be implemented in a variety of ways.
Optionally, in some embodiments, step 230 specifically includes: performing mean fusion on the multiple frames of reconstructed images to obtain the fused image.
Specifically, assuming step 220 yields P frames of reconstructed images, the pixels at corresponding positions across the P frames are averaged to obtain the pixel value at the corresponding position in the fused image.
Optionally, in some embodiments, step 230 specifically includes: determining weight information of the corresponding image according to the contrast information or grayscale information of each frame of the multiple frames of reconstructed images; and performing weighted fusion on the reconstructed images according to the weight information to obtain the fused image.
Specifically, assuming step 220 yields P frames of reconstructed images, the weight information of each frame is first determined according to its contrast information or grayscale information; the pixels at corresponding positions across the P frames are then weighted and summed based on the determined weight information to obtain the pixel value at the corresponding position in the fused image.
Optionally, as one implementation, determining the weight information of the corresponding image according to the contrast information or grayscale information of each frame includes: determining the weight information of the corresponding image according to the contrast information of each frame of the multiple frames of reconstructed images.
Optionally, as another implementation, determining the weight information of the corresponding image according to the contrast information or grayscale information of each frame includes: determining a bright region and/or a dark region of a frame of reconstructed image; assigning first grayscale information to the bright region and/or second grayscale information to the dark region; and determining the weight information on the bright region according to the first grayscale information and/or the weight information on the dark region according to the second grayscale information.
Specifically, the bright and dark regions of the multiple frames of reconstructed images are first estimated, and grayscale information is then allocated to the bright and dark regions respectively, i.e., first grayscale information is assigned to the bright region and second grayscale information to the dark region; the weight value of the bright region is then determined from the grayscale information allocated to it, and the weight value of the dark region from the grayscale information allocated to it. Specifically, in bright regions the pixel values from the low-bit-plane images are assigned high weight, while in dark regions the pixel values from the high-bit-plane images are assigned high weight.
Optionally, in some embodiments, step 230 specifically includes: performing nonlinear mapping fusion on the multiple frames of reconstructed images to obtain the fused image.
Specifically, assuming step 220 yields P frames of reconstructed images, the weight information of each frame is first constructed according to a nonlinear function; the pixels at corresponding positions across the P frames are then weighted and summed based on the determined weight information to obtain the pixel value at the corresponding position in the fused image. The nonlinear function used to construct the weight information may be a nonlinear function from existing nonlinear mapping fusion techniques, which is not limited by the embodiments of the present invention.
FIG. 4 is another schematic flowchart of an image fusion method according to an embodiment of the present invention. For example, the method may be performed by an imaging device, specifically by a binocular camera. As shown in FIG. 4, the method includes the following steps.
410: Input a frame of original image with M bits, M being an integer equal to or greater than 10.
Specifically, the binocular camera reads the original image.
420: Obtain multiple frames of 8-bit images by bit-plane layering the original image.
Specifically, the bit-plane layering method is used to divide the original image in binary into L layers of bit planes, which are labeled L, L-1, ..., 2, 1 in order of bit order from high to low, as shown in FIG. 3. The L layers of bit planes are then segmented to obtain L-7 8-bit images.
The procedure for segmenting the L layers of bit planes into L-7 8-bit images is as follows: for the L layers of bit planes, take 8 consecutive bit planes at a time from high to low, denoting the starting layer label istart, so that the ending layer label is iend = istart - 7; shift the data of each of these bit planes right in binary by istart - 8 to form one 8-bit image; then jump down by one bit plane and form the next 8-bit image in the same way, stopping when istart = 8. In total, L - 7 8-bit images are produced.
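Read literally, this segmentation is a sliding 8-bit window over the pixel bits. A minimal Python/NumPy sketch of the loop just described (the names istart and iend follow the text; everything else is illustrative and not part of the patent disclosure):

```python
import numpy as np

def segment_bit_planes(original: np.ndarray, L: int) -> list:
    """Slide an 8-plane window from layer L down to layer 8, one plane at a time.

    Each window keeps planes istart..iend, where iend = istart - 7, and
    right-shifts the data by istart - 8, producing L - 7 8-bit images.
    """
    images = []
    istart = L
    while istart >= 8:                       # stop moving once istart reaches 8
        images.append(((original >> (istart - 8)) & 0xFF).astype(np.uint8))
        istart -= 1                          # jump down one bit plane
    return images

raw = np.random.randint(0, 1 << 10, size=(4, 4), dtype=np.uint16)  # 10-bit original
assert len(segment_bit_planes(raw, L=10)) == 3   # L - 7 = 3 images
```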
430: Fuse the multiple frames of 8-bit images to obtain a fused image.
Specifically, the L-7 images obtained in step 420 are fused using a mean fusion method or a weighted fusion method. The specific image fusion methods are described in detail above and are not repeated here.
440: Output the fused image.
Specifically, simulation results of the image fusion method provided by the embodiment of the present invention are shown in FIG. 5.
Scene 1: tree shade, high gain (the three images in the first row of FIG. 5).
The middle image of the first row in FIG. 5 is the image reconstructed from the lower 8 bit planes (e.g., bit plane 8 to bit plane 1 in FIG. 3) of the bit planes obtained by bit-plane layering; the right-hand image of the first row is the image reconstructed from the upper 8 bit planes (e.g., bit plane L to bit plane L-7 in FIG. 3); the left-hand image of the first row is the fused image (denoted fused image 1) obtained by fusing the middle and right-hand images of the first row.
As can be seen from FIG. 5, fused image 1 is rich in detail in both bright and dark regions.
Scene 2: street without direct headlight illumination, low gain (the three images in the second row of FIG. 5).
The middle image of the second row in FIG. 5 is the image reconstructed from the lower 8 bit planes (e.g., bit plane 8 to bit plane 1 in FIG. 3) of the bit planes obtained by bit-plane layering; the right-hand image of the second row is the image reconstructed from the upper 8 bit planes (e.g., bit plane L to bit plane L-7 in FIG. 3); the left-hand image of the second row is the fused image (denoted fused image 2) obtained by fusing the middle and right-hand images of the second row.
As can be seen from FIG. 5, fused image 2 is rich in detail in both bright and dark regions.
Scene 3: street without direct headlight illumination, high gain (the three images in the third row of FIG. 5).
The middle image of the third row in FIG. 5 is the image reconstructed from the lower 8 bit planes (e.g., bit plane 8 to bit plane 1 in FIG. 3) of the bit planes obtained by bit-plane layering; the right-hand image of the third row is the image reconstructed from the upper 8 bit planes (e.g., bit plane L to bit plane L-7 in FIG. 3); the left-hand image of the third row is the fused image (denoted fused image 3) obtained by fusing the middle and right-hand images of the third row.
As can be seen from FIG. 5, fused image 3 is rich in detail in both bright and dark regions.
As can be seen from the simulation results in FIG. 5, the embodiments of the present invention can synthesize images with rich detail in both bright and dark regions; the present application can therefore solve the problem in the prior art that details in the dark and bright regions of an image are not distinct when a camera images a scene.
The high and low gain mentioned in the description of FIG. 5 refer to the exposure gain adjusted during manual exposure, which can also be understood as the degree of exposure.
In summary, the embodiments of the present invention perform bit-plane layering on single-frame image data, reconstruct multiple frames of reconstructed images from the bit planes obtained by the layering, and then perform image fusion on the reconstructed images to obtain a fused image. Because the multiple frames of reconstructed images are derived directly from the same frame of image, there is no moving-object difference between them, the consistency of the image scene is guaranteed, and the images can be applied directly to exposure fusion without image registration; compared with the prior art, the solution provided by the present application can therefore reduce the algorithm resource overhead of image fusion. In addition, the fused image obtained according to the solution provided by the present application is free of ghosting, which overcomes the problem in the prior art that image fusion using differently exposed images of the same scene easily causes image blurring. Moreover, the fused image obtained by the embodiments of the present invention is rich in detail in both bright and dark regions, which solves the problem in the prior art that details in the dark and bright regions of an image are not distinct when a camera images a scene.
The solution provided by the embodiments of the present invention can extend the application scenarios of exposure fusion to high-speed motion systems such as drones, autonomous driving, and unmanned driving, as well as to fields such as machine vision, robotics, security monitoring, remote sensing imaging, and photogrammetry.
The method embodiments of the present application have been described above; the apparatus embodiments are described below. It should be understood that the descriptions of the apparatus embodiments correspond to those of the method embodiments; for content not described in detail, reference may be made to the preceding method embodiments, which are not repeated here for brevity.
FIG. 6 is a schematic block diagram of an imaging apparatus 600 according to an embodiment of the present invention. The imaging apparatus 600 includes the following components.
An image sensor 610, configured to collect a frame of original image, the number of bits of the original image being M, M being an integer greater than 1.
Specifically, the original image is a multi-bit image; correspondingly, the image sensor 610 is an image sensor capable of outputting multi-bit images.
A processor 620, configured to: perform bit-plane layering on the original image to obtain multiple bit planes of the original image; obtain multiple frames of reconstructed images according to the multiple bit planes, where the bit planes corresponding to different reconstructed images are not completely identical and the number of bits of each frame of reconstructed image is N, N being a positive integer smaller than M; and perform image fusion on the multiple frames of reconstructed images to obtain a fused image of the original image.
Optionally, in some embodiments, the processor 620 is specifically configured to select, in order of bit order from high to low or from low to high, S consecutive bit planes from the multiple bit planes each time to form one frame of reconstructed image, S being a positive integer equal to or smaller than N.
Optionally, in some embodiments, each bit plane is a 1-bit plane, the multiple bit planes are M bit planes, and S equals N.
Optionally, in some embodiments, the processor 620 is specifically configured to select, in order of bit order from high to low or from low to high, the i-th to the (i+(N-1))-th bit planes among the M bit planes to form the i-th frame of reconstructed image, i being 1, ..., M-(N-1).
Optionally, in some embodiments, each bit plane is a multi-bit plane, the total number of the multiple bit planes is smaller than M, and S is smaller than N.
Optionally, in some embodiments, the processor 620 is specifically configured to perform mean fusion on the multiple frames of reconstructed images to obtain the fused image.
Optionally, in some embodiments, the processor 620 is specifically configured to: determine weight information of the corresponding image according to the contrast information or grayscale information of each frame of the multiple frames of reconstructed images; and perform weighted fusion on the reconstructed images according to the weight information to obtain the fused image.
Optionally, in some embodiments, the processor 620 is specifically configured to: determine a bright region and/or a dark region of a frame of reconstructed image; assign first grayscale information to the bright region and/or second grayscale information to the dark region; and determine the weight on the bright region according to the first grayscale information and/or the weight on the dark region according to the second grayscale information.
Optionally, in some embodiments, the processor 620 is specifically configured to perform nonlinear mapping fusion on the multiple frames of reconstructed images to obtain the fused image.
Optionally, in some embodiments, the difference M minus N is greater than or equal to 2.
Optionally, in some embodiments, M is an integer greater than or equal to 10 and N is an integer greater than or equal to 8.
Optionally, in some embodiments, the imaging apparatus 600 is a binocular imaging apparatus.
An embodiment of the present invention further provides a mobile device, which includes the imaging apparatus 600 provided by the foregoing embodiment and a power system for moving the mobile device.
Optionally, in some embodiments, the mobile device is any one of the following: a drone, an unmanned vehicle, a robot, a surveillance camera, or a remote sensing imaging device.
FIG. 7 is a schematic block diagram of a drone 700 according to an embodiment of the present invention. The drone includes a body 710 and a power system 720, the body 710 being provided with a binocular imaging apparatus 730, and the binocular imaging apparatus 730 is configured to perform the image fusion method provided by the method embodiments above.
The binocular imaging apparatus 730 may also correspond to the imaging apparatus 600 provided by the apparatus embodiment above.
As shown in FIG. 8, an embodiment of the present invention further provides an imaging apparatus 800, which includes a processor 810 and a memory 820; the memory 820 is configured to store instructions, and the processor 810 is configured to execute the instructions stored in the memory 820; execution of the instructions stored in the memory 820 causes the processor 810 to perform the image fusion method provided by the method embodiments above.
Optionally, as shown in FIG. 8, the imaging apparatus 800 further includes a communication interface 830, which is configured to output the fused image obtained by the processor 810.
An embodiment of the present invention further provides a computer storage medium on which a computer program is stored; when executed by a computer, the computer program causes the computer to perform the image fusion method provided by the method embodiments above.
An embodiment of the present invention further provides a computer program product containing instructions; when executed by a computer, the instructions cause the computer to perform the image fusion method provided by the method embodiments above.
In the above embodiments, the implementation may be wholly or partly realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be wholly or partly realized in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present invention are wholly or partly produced. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a digital video disc (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)), etc.
A person of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; e.g., the division of the units is only a logical function division, and in actual implementation there may be other division manners; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processor, or each unit may exist physically separately, or two or more units may be integrated into one unit.
The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto; any person skilled in the art can readily conceive of variations or replacements within the technical scope disclosed in the present application, and these shall all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (25)

  1. An image fusion method, comprising:
    performing bit-plane layering on a frame of original image to obtain multiple bit planes of the original image, the number of bits of the original image being M, M being an integer greater than 1;
    obtaining multiple frames of reconstructed images according to the multiple bit planes, wherein the bit planes corresponding to different reconstructed images are not completely identical, and the number of bits of each frame of reconstructed image is N, N being a positive integer smaller than M;
    performing image fusion on the multiple frames of reconstructed images to obtain a fused image of the original image.
  2. The method according to claim 1, wherein obtaining multiple frames of reconstructed images according to the multiple bit planes comprises:
    in order of bit order from high to low or from low to high, selecting S consecutive bit planes from the multiple bit planes each time to form one frame of reconstructed image, S being a positive integer equal to or smaller than N.
  3. The method according to claim 2, wherein each of the bit planes is a 1-bit plane, the multiple bit planes are M bit planes, and S equals N.
  4. The method according to claim 3, wherein, in order of bit order from high to low or from low to high, selecting S consecutive bit planes from the multiple bit planes each time to form one frame of reconstructed image comprises:
    in order of bit order from high to low or from low to high, selecting the i-th to the (i+(N-1))-th bit planes among the M bit planes to form the i-th frame of reconstructed image, i being 1, ..., M-(N-1).
  5. The method according to claim 2, wherein each of the bit planes is a multi-bit plane, the total number of the multiple bit planes is smaller than M, and S is smaller than N.
  6. The method according to any one of claims 1 to 5, wherein performing image fusion on the multiple frames of reconstructed images to obtain a fused image of the original image comprises:
    performing mean fusion on the multiple frames of reconstructed images to obtain the fused image.
  7. The method according to any one of claims 1 to 5, wherein performing image fusion on the multiple frames of reconstructed images to obtain a fused image of the original image comprises:
    determining weight information of the corresponding image according to contrast information or grayscale information of each frame of the multiple frames of reconstructed images;
    performing weighted fusion on the multiple frames of reconstructed images according to the weight information to obtain the fused image.
  8. The method according to claim 7, wherein determining weight information of the corresponding image according to contrast information or grayscale information of each frame of the multiple frames of reconstructed images comprises:
    determining a bright region and/or a dark region of a frame of reconstructed image;
    assigning first grayscale information to the bright region, and/or assigning second grayscale information to the dark region;
    determining weight information on the bright region according to the first grayscale information, and/or determining weight information on the dark region according to the second grayscale information.
  9. The method according to any one of claims 1 to 5, wherein performing image fusion on the multiple frames of reconstructed images to obtain a fused image of the original image comprises:
    performing nonlinear mapping fusion on the multiple frames of reconstructed images to obtain the fused image.
  10. The method according to any one of claims 1 to 9, wherein M is 10 or 12 and N is 8.
  11. An imaging apparatus, comprising:
    an image sensor configured to collect a frame of original image, the number of bits of the original image being M, M being an integer greater than 1;
    a processor configured to perform bit-plane layering on the original image to obtain multiple bit planes of the original image;
    the processor being further configured to obtain multiple frames of reconstructed images according to the multiple bit planes, wherein the bit planes corresponding to different reconstructed images are not completely identical, and the number of bits of each frame of reconstructed image is N, N being a positive integer smaller than M;
    the processor being further configured to perform image fusion on the multiple frames of reconstructed images to obtain a fused image of the original image.
  12. The imaging apparatus according to claim 11, wherein the processor is specifically configured to select, in order of bit order from high to low or from low to high, S consecutive bit planes from the multiple bit planes each time to form one frame of reconstructed image, S being a positive integer equal to or smaller than N.
  13. The imaging apparatus according to claim 12, wherein each of the bit planes is a 1-bit plane, the multiple bit planes are M bit planes, and S equals N.
  14. The imaging apparatus according to claim 13, wherein the processor is specifically configured to select, in order of bit order from high to low or from low to high, the i-th to the (i+(N-1))-th bit planes among the M bit planes to form the i-th frame of reconstructed image, i being 1, ..., M-(N-1).
  15. The imaging apparatus according to claim 12, wherein each of the bit planes is a multi-bit plane, the total number of the multiple bit planes is smaller than M, and S is smaller than N.
  16. The imaging apparatus according to any one of claims 11 to 15, wherein the processor is specifically configured to perform mean fusion on the multiple frames of reconstructed images to obtain the fused image.
  17. The imaging apparatus according to any one of claims 11 to 15, wherein the processor is specifically configured to:
    determine weight information of the corresponding image according to contrast information or grayscale information of each frame of the multiple frames of reconstructed images;
    perform weighted fusion on the multiple frames of reconstructed images according to the weight information to obtain the fused image.
  18. The imaging apparatus according to claim 17, wherein the processor is specifically configured to:
    determine a bright region and/or a dark region of a frame of reconstructed image;
    assign first grayscale information to the bright region, and/or assign second grayscale information to the dark region;
    determine a weight on the bright region according to the first grayscale information, and/or determine a weight on the dark region according to the second grayscale information.
  19. The imaging apparatus according to any one of claims 11 to 15, wherein the processor is specifically configured to perform nonlinear mapping fusion on the multiple frames of reconstructed images to obtain the fused image.
  20. The imaging apparatus according to any one of claims 11 to 19, wherein M is an integer greater than or equal to 10 and N is an integer greater than or equal to 8.
  21. The imaging apparatus according to any one of claims 11 to 20, wherein the imaging apparatus is a binocular imaging apparatus.
  22. A drone, comprising a body and a power system, the body being provided with a binocular imaging apparatus, wherein the binocular imaging apparatus is configured to perform the image fusion method according to any one of claims 1 to 10.
  23. An imaging apparatus, comprising a memory and a processor, the memory being configured to store instructions and the processor being configured to execute the instructions stored in the memory, wherein execution of the instructions stored in the memory causes the processor to perform the image fusion method according to any one of claims 1 to 10.
  24. A computer storage medium having a computer program stored thereon, wherein, when executed by a computer, the computer program causes the computer to perform the image fusion method according to any one of claims 1 to 10.
  25. A computer program product containing instructions, wherein, when executed by a computer, the instructions cause the computer to perform the image fusion method according to any one of claims 1 to 10.
PCT/CN2017/117457 2017-12-20 2017-12-20 Image fusion method, apparatus, and drone WO2019119295A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/117457 WO2019119295A1 (zh) 2017-12-20 2017-12-20 Image fusion method, apparatus, and drone
CN201780017877.3A CN109155061B (zh) 2017-12-20 2017-12-20 Image fusion method, apparatus, and drone

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/117457 WO2019119295A1 (zh) 2017-12-20 2017-12-20 Image fusion method, apparatus, and drone

Publications (1)

Publication Number Publication Date
WO2019119295A1 true WO2019119295A1 (zh) 2019-06-27

Family

ID=64803430

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/117457 WO2019119295A1 (zh) 2017-12-20 2017-12-20 Image fusion method, apparatus, and drone

Country Status (2)

Country Link
CN (1) CN109155061B (zh)
WO (1) WO2019119295A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110708473B (zh) * 2019-11-14 2022-04-15 深圳市道通智能航空技术股份有限公司 High dynamic range image exposure control method, aerial camera, and unmanned aerial vehicle
CN111401477B (zh) * 2020-04-17 2023-11-14 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN112258417B (zh) * 2020-10-28 2023-02-28 杭州海康威视数字技术股份有限公司 Image generation method, apparatus, and device
CN115063620B (zh) * 2022-08-19 2023-11-28 启东市海信机械有限公司 Bit-layering-based Roots blower bearing wear detection method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040100586A1 (en) * 2002-11-22 2004-05-27 Jiang Li System and method for scalable portrait video
CN102396219A (zh) * 2009-06-09 2012-03-28 Embedded graphics coding for images with sparse histograms
CN103493490A (zh) * 2011-04-25 2014-01-01 Non-linear visual dynamic range residual quantizer
CN104980652A (zh) * 2014-04-11 2015-10-14 Image processing apparatus and image processing method
CN105844692A (zh) * 2016-04-27 2016-08-10 Three-dimensional reconstruction apparatus, method, and system based on binocular stereo vision, and drone
CN107274424A (zh) * 2017-06-15 2017-10-20 Color image segmentation method and apparatus

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101375662B1 (ko) * 2007-08-06 2014-03-18 Image data compression method and apparatus
CN106780647A (zh) * 2016-12-14 2017-05-31 Two-dimensional river cross-section display system


Also Published As

Publication number Publication date
CN109155061A (zh) 2019-01-04
CN109155061B (zh) 2021-08-27

Similar Documents

Publication Publication Date Title
WO2019119295A1 (zh) Image fusion method, apparatus, and drone
US10579908B2 Machine-learning based technique for fast image enhancement
JP7077395B2 (ja) Multiplexed high dynamic range images
CN112529775A (zh) Image processing method and apparatus
US9344638B2 Constant bracket high dynamic range (cHDR) operations
Bandoh et al. Recent advances in high dynamic range imaging technology
CN113168670A (zh) Bright spot removal using a neural network
JP2017520944A (ja) Generation and use of 3D Radon images
KR20050009694A (ko) System and process for generating high dynamic range images from a sequence of bracketed images
CN110602467A (zh) Image noise reduction method and apparatus, storage medium, and electronic device
JP2014179980A (ja) Method for selecting a subset from a set of images to generate a high dynamic range image
JP6103649B2 (ja) Method for detecting and removing ghost artifacts in HDR image processing using multi-level intermediate threshold bitmaps
US10600170B2 Method and device for producing a digital image
JP5765893B2 (ja) Image processing device, imaging device, and image processing program
KR20200011000A (ko) Device and method for augmented reality preview and positional tracking
CN112771843A (zh) Information processing method and device, and imaging system
CN113962859A (zh) Panorama generation method, apparatus, device, and medium
CN113538211A (zh) Image quality enhancement device and related method
CN113793272B (zh) Image noise reduction method and apparatus, storage medium, and terminal
CN109035181A (zh) Wide dynamic range image processing method based on average image brightness
JP2009224901A (ja) Image dynamic range compression method, image processing circuit, imaging device, and program
CN114092562A (zh) Noise model calibration method, image denoising method, apparatus, device, and medium
CN113781321A (zh) Information compensation method, apparatus, device, and storage medium for image highlight regions
Schedl et al. Coded exposure HDR light‐field video recording
US11521300B2 (en) Edge preserving noise reduction algorithm using inverse exponential function optimization

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17935154

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17935154

Country of ref document: EP

Kind code of ref document: A1