WO2023272457A1 - Apparatus, System and Associated Method for Image Stitching - Google Patents

Apparatus, System and Associated Method for Image Stitching

Info

Publication number
WO2023272457A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
source
lut
images
pixel
Prior art date
Application number
PCT/CN2021/102879
Other languages
English (en)
French (fr)
Inventor
肖龙
吴建泽
吴毓宇
王志福
范志干
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to CN202180099013.7A (published as CN117480522A)
Priority to PCT/CN2021/102879
Publication of WO2023272457A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting

Definitions

  • the present disclosure relates to the field of image processing, and more particularly to devices, systems and associated methods for image stitching.
  • by stitching multiple images together, a panoramic image with a wider field of view can be obtained.
  • image stitching technology can be applied to various fields. For example, for a driving scene, multiple images collected by imaging devices deployed at different positions of the vehicle can be stitched together to obtain a 360° panoramic view of the vehicle. For on-site monitoring scenarios, multiple images collected from different locations can be stitched together to obtain a monitoring view with a wider field of view.
  • the image stitching process involves a large amount of complex image processing, with high computational overhead and high delay, and may therefore struggle to meet scenarios with strict real-time requirements.
  • Embodiments of the present disclosure provide a solution for image stitching.
  • an apparatus for image stitching includes: a distortion correction module configured to: obtain a first look-up table (LUT), the first LUT indicating a distortion correction relationship from pixel coordinates in a composite image to at least one source pixel coordinate of a plurality of source images, and, using the first LUT, perform distortion correction on the plurality of source images to obtain a plurality of corrected images;
  • the equalization module is configured to: acquire a second LUT, the second LUT indicating a brightness and/or chrominance balance relationship for the plurality of corrected images, and use the second LUT to adjust the plurality of corrected images to obtain a plurality of equalized images;
  • the image fusion module is configured to: obtain a third LUT, the third LUT indicating a fusion relationship from the respective pixel coordinates of the plurality of equalized images to pixel coordinates of the composite image, and use the third LUT to combine the plurality of equalized images into the composite image.
  • multiple operations of image stitching are realized by pre-configuring multi-level LUTs in a dedicated processing device, which can improve processing performance, speed up processing, and reduce requirements on system bandwidth.
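To make the three-level LUT flow concrete, the following is a minimal sketch, in Python, of how pre-configured LUTs could drive the three stages. It assumes a deliberately simplified LUT layout (one nearest-neighbour source pixel per composite pixel, one 256-entry value map per image, and one per-pixel weight map per image); the function and variable names are illustrative and are not the patent's actual LUT format.

```python
# Minimal sketch of the three-level LUT pipeline (simplified, illustrative layout).
import numpy as np

def stitch(sources, lut1, lut2, lut3, out_hw):
    """sources: list of HxWx3 uint8 source images.
    lut1: per-source dicts with 'sy', 'sx' (source row/col per composite pixel,
          0 where invalid) and 'valid' mask  -> first LUT (distortion correction).
    lut2: per-source 256-entry value maps    -> second LUT (equalization).
    lut3: per-source HxW float weight maps   -> third LUT (fusion)."""
    H, W = out_hw
    composite = np.zeros((H, W, 3), dtype=np.float32)
    for src, geom, value_map, weight in zip(sources, lut1, lut2, lut3):
        # Stage 1: distortion correction - gather source pixels through the first LUT.
        corrected = src[geom['sy'], geom['sx']]        # (H, W, 3) gather
        corrected[~geom['valid']] = 0                  # pixels this source does not cover
        # Stage 2: brightness/chrominance equalization through the second LUT.
        equalized = value_map[corrected]
        # Stage 3: fusion through the third LUT (per-pixel blend weights).
        composite += weight[..., None] * equalized
    return np.clip(composite, 0, 255).astype(np.uint8)
```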
  • the apparatus further includes: a cache module configured to cache pixel values of a plurality of source images, wherein the distortion correction module is configured to read pixel values from the cache module for performing distortion correction.
  • the distortion correction module is configured to: for each target block among the plurality of blocks into which the composite image is divided, determine target pixel coordinates in the target block through progressive scanning; using the first LUT, determine at least one source pixel coordinate in the plurality of source images to which the target pixel coordinates in the target block are mapped; read the pixel value of the at least one source pixel coordinate; and transform the read pixel value of the at least one source pixel coordinate based on the distortion correction relationship from the target pixel coordinates to the at least one source pixel coordinate indicated by the first LUT. Distortion correction can be performed quickly and accurately by traversing the blocks and traversing the pixels within each block, for example by zigzag scanning within a block and zigzag scanning across blocks.
  • the distortion correction module is configured to read the pixel value of the at least one source pixel coordinate from the cache module, and the cache module caches the pixel values of the source pixel coordinates in the plurality of source images that are mapped to the target block.
  • the multiple blocks have different sizes, and the blocks located at the edge of the composite image are smaller in size than the blocks located in the center of the composite image.
  • This division takes into account that distortion is usually higher at the edges of the composite image, so each pixel coordinate there may be mapped to more source images and/or more source pixel coordinates. If all blocks had the same size, the blocks at the edge would therefore require more pixel values to be cached. By adjusting the block sizes, the cache space can be used more fully and the cache hit rate can be improved.
  • the distortion correction module is configured to perform the following operations in parallel: performing distortion correction on a first source image and a second source image among the multiple source images to obtain a first corrected image, where the pixel coordinates of the first source image and the second source image are mapped to different parts of the composite image; and performing distortion correction on a third source image and a fourth source image of the plurality of source images to obtain a second corrected image, where the pixel coordinates of the third source image and the fourth source image are mapped to different parts of the composite image.
  • the source images mapped to different parts of the composite image can be distortion corrected in the same processing flow, and the corrected images obtained by multiple parallel processing flows can be easily merged later, which not only simplifies the distortion correction processing flow but also improves processing efficiency through parallel processing.
  • the device further includes: a histogram statistics module configured to determine respective brightness and/or chrominance histograms of the plurality of corrected images, and the second LUT is determined based on the determined brightness and/or chrominance histograms.
  • the distortion correction module is further configured to: obtain a fourth LUT, the fourth LUT indicating a distortion correction relationship between pixel coordinates of a second composite image and at least one source pixel coordinate of a second plurality of source images, and use the fourth LUT to perform distortion correction on the second plurality of source images to obtain a second plurality of corrected images.
  • the equalization module is configured to: acquire a fifth LUT indicating a luminance and/or chrominance equalization relationship for the second plurality of corrected images, and adjust the second plurality of corrected images using the fifth LUT to obtain a second plurality of equalized images.
  • the image fusion module is further configured to: obtain a sixth LUT, the sixth LUT indicating the fusion relationship between pixel coordinates of the second composite image and the pixel coordinates of the second plurality of equalized images, and use the sixth LUT to combine the second plurality of equalized images into the second composite image.
  • in this way, the device for image stitching of the present disclosure can be reused for image stitching in different scenarios by controlling the source images and LUTs provided to the image stitching device.
  • the image fusion module is further configured to: acquire a seventh LUT, where the seventh LUT indicates the fusion relationship between the pixel coordinates of the first composite image and the pixel coordinates of the second composite image, and, using the seventh LUT, composite the first composite image and the second composite image into a third composite image.
  • the apparatus for image stitching of the present disclosure can also be used to stitch a larger number of source images by being called repeatedly.
  • the apparatus includes an Application Specific Integrated Circuit (ASIC) chip.
  • the plurality of source images, the first LUT, the second LUT and the third LUT are retrieved from a first external storage device of the ASIC chip.
  • the composite image is written to a second external storage device of the ASIC chip. Implementation on an ASIC chip can further improve the processing performance of the dedicated image stitching task.
  • an image processing system includes: the device according to any one of the implementation manners of the first aspect; and at least one storage device for storing a plurality of source images, a first LUT, a second LUT and a third LUT.
  • the system further includes: a general processing device configured to determine the second LUT based on respective luminance and/or chrominance histograms of the plurality of corrected images.
  • a method for image stitching includes: performing distortion correction on a plurality of source images using a first look-up table (LUT) to obtain a plurality of corrected images, the first LUT indicating a distortion correction relationship from pixel coordinates in a composite image to at least one source pixel coordinate in the plurality of source images; adjusting the plurality of corrected images using a second LUT to obtain a plurality of equalized images, the second LUT indicating a brightness and/or chrominance balance relationship for the plurality of corrected images; and merging the plurality of equalized images into the composite image using a third LUT, the third LUT indicating a fusion relationship from respective pixel coordinates of the plurality of equalized images to pixel coordinates of the composite image.
  • the method further includes: caching pixel values of the multiple source images in a cache area, wherein performing distortion correction on the multiple source images includes: reading pixel values from the cache area for performing distortion correction.
  • performing distortion correction on the multiple source images includes: for each target block among the multiple blocks into which the composite image is divided, determining target pixel coordinates in the target block by progressive scanning; using the first LUT, determining at least one source pixel coordinate in the plurality of source images to which the target pixel coordinates in the target block are mapped; reading the pixel value of the at least one source pixel coordinate; and transforming the read pixel value of the at least one source pixel coordinate based on the distortion correction relationship from the target pixel coordinates to the at least one source pixel coordinate indicated by the first LUT.
  • reading the pixel value of the at least one source pixel coordinate includes: reading the pixel value from a cache area, where the cache area caches at least the pixel values of the source pixel coordinates in the plurality of source images that are mapped to the target block.
  • the multiple blocks have different sizes, and the size of the block located at the edge of the composite image is smaller than the size of the block located at the center of the composite image.
  • performing distortion correction on the multiple source images includes performing the following operations in parallel: performing distortion correction on a first source image and a second source image among the multiple source images to obtain a first corrected image, where the pixel coordinates of the first source image and the second source image are mapped to different parts of the composite image; and performing distortion correction on a third source image and a fourth source image of the plurality of source images to obtain a second corrected image, where the pixel coordinates of the third source image and the fourth source image are mapped to different parts of the composite image.
  • the method further includes: determining respective luminance and/or chrominance histograms of the plurality of corrected images, and the second LUT is determined based on the determined luminance and/or chromaticity histograms.
  • the method further includes: performing distortion correction on a second plurality of source images using a fourth LUT to obtain a second plurality of corrected images, the fourth LUT indicating a distortion correction relationship between pixel coordinates of a second composite image and at least one source pixel coordinate in the second plurality of source images; adjusting the second plurality of corrected images using a fifth LUT to obtain a second plurality of equalized images, the fifth LUT indicating a luminance and/or chrominance equalization relationship for the second plurality of corrected images; and merging the second plurality of equalized images into the second composite image using a sixth LUT, the sixth LUT indicating a fusion relationship between the pixel coordinates of the second composite image and the pixel coordinates of the second plurality of equalized images.
  • the method further includes: acquiring a seventh LUT, where the seventh LUT indicates the fusion relationship between the pixel coordinates of the first composite image and the pixel coordinates of the second composite image; and, using the seventh LUT, combining the first composite image and the second composite image into a third composite image.
  • the method is implemented at an Application Specific Integrated Circuit (ASIC) chip.
  • the plurality of source images, the first LUT, the second LUT and the third LUT are retrieved from a first external storage device of the ASIC chip.
  • the synthesized image is written into a second external storage device of the ASIC chip.
  • FIG. 1A and 1B illustrate examples of image stitching according to some embodiments of the present disclosure
  • Figure 2 shows a block diagram of an image processing system according to some embodiments of the present disclosure
  • Fig. 3 shows a schematic diagram of a processing flow in the image stitching device of Fig. 2 according to some embodiments of the present disclosure
  • FIG. 4 shows an example of pixel coordinate space correspondence of source images to composite images according to some embodiments of the present disclosure
  • Figure 5 shows a block diagram of the cache module of Figure 2 according to some embodiments of the present disclosure
  • Figure 6 shows an example of image block division according to some embodiments of the present disclosure
  • FIG. 7 shows a schematic diagram of an example extended processing flow of the image stitching device of FIG. 2 according to some embodiments of the present disclosure.
  • Fig. 8 shows a schematic flowchart of a method for image stitching according to some embodiments of the present disclosure.
  • the term “comprising” and its similar expressions should be interpreted as an open inclusion, that is, “including but not limited to”.
  • the term “based on” should be understood as “based at least in part on”.
  • the term “one embodiment” or “the embodiment” should be read as “at least one embodiment”.
  • the terms “first”, “second”, etc. may refer to different or the same object.
  • the term “and/or” means at least one of the two items associated with it. For example "A and/or B" means A, B, or A and B. Other definitions, both express and implied, may also be included below.
  • 1A and 1B illustrate image stitching according to some embodiments of the present disclosure.
  • FIG. 1A shows that in a driving scene, four digital imaging devices can be deployed at the front, rear, left, and right of the vehicle 102 to capture still images or dynamic video streams. For example, four source images 111, 112, 113 and 114 surrounding the vehicle 102 at a certain moment may be collected. Through image stitching, a composite image 120 showing a look-around panoramic view of the vehicle 102 may be obtained. Note that in FIG. 1A, the vehicle 102 shown in the center of the source and composite images may be a symbolic representation of an actual vehicle.
  • FIG. 1B shows a panoramic composite image 150 obtained by stitching images collected by multiple imaging devices deployed indoors, which presents a wider indoor view and can be used for monitoring and security purposes.
  • FIGS. 1A and 1B only give examples of stitching from multiple source images to a composite image. Embodiments of the present disclosure are not limited to the number of source images, stitching methods, and other forms given in these examples.
  • image stitching involves distortion correction from the source images to the composite image, and various processing techniques to avoid or reduce distortion in the composite image.
  • various operations of image stitching are conventionally performed mainly by a general-purpose computing device, such as a central processing unit (CPU). For example, it is up to the CPU to determine and correct the distortion of the source images pixel by pixel, handle image distortion, perform image fusion, and so on.
  • this requires high CPU computing overhead and bandwidth overhead, which is likely to cause a large delay and low performance.
  • an image stitching acceleration scheme based on a look-up table is proposed.
  • the solution implements multiple operations of image stitching by pre-configuring multi-level LUTs, including distortion correction, brightness/chroma balance, and image fusion for multiple source images to be stitched.
  • Such an image stitching solution can be implemented by a dedicated processing device, such as an application-specific integrated circuit (ASIC) chip, thereby improving processing performance, speeding up processing, and reducing requirements on system bandwidth.
  • FIG. 2 shows a block diagram of an image processing system 200 according to some embodiments of the present disclosure.
  • the image processing system 200 includes an image stitching device 205 configured to fuse the source images into a composite image.
  • the image stitching device 205 may be a hardware processing device, such as a hardware accelerator.
  • the image stitching device 205 may include or be implemented in a chip.
  • the image stitching device 205 may include or be implemented in an ASIC chip.
  • the image stitching device 205 may also be included in other special-purpose processing devices such as programmable logic devices.
  • the image stitching device 205 is configured to perform image stitching on a plurality of source images 202-1, 202-2, ..., 202-N.
  • the plurality of source images 202-1, 202-2, ..., 202-N may be collectively or individually referred to as source images 202.
  • the multiple source images 202 for image stitching may come from various data sources, for example, may include still images captured by different digital imaging devices (e.g., cameras or video recorders), or video frames in dynamic video streams.
  • the image processing system 200 may further include a storage device 280 .
  • the storage device 280 is communicatively connected with the image stitching device 205 and may be configured to store input images of the image stitching device 205 , such as the plurality of source images 202 .
  • the storage device 280 may also store other input information required during the operation of the image stitching device 205, intermediate processing results, and the like.
  • storage device 280 may also store a plurality of look-up tables: LUT 204, LUT 232 and LUT 252. The specific use of these LUTs will be described in detail below.
  • although a single device is shown in FIG. 2, the image processing system 200 may include multiple storage devices for storing various data/information required by the image stitching device 205.
  • the source image 202 may be stored in a different storage device than the respective LUTs 204, 232, 252, etc. Embodiments of the present disclosure are not limited in this regard.
  • the image processing system 200 may further include a general processing device 290, which may be configured to control the storage of the source images 202 and the LUTs to the storage device 280.
  • a plurality of source images 202 to be stitched may be input by the user, and stored in the storage device 280 under the control of the general processing device 290 .
  • Each LUT used to implement image stitching can also be stored, maintained and updated via the general processing device 290.
  • the general processing device 290 can also control the execution of the image stitching task by controlling the source image 202 and the LUT input to the image stitching device 205 .
  • the general processing device 290 may be configured to present the composite image 254 generated by the image stitching device 205 to a user, to other devices, or as input for other tasks, and the like. In some embodiments, the general processing device 290 may also be configured to perform one or more operations required in the image stitching process. Some example functions of the general processing device 290 are described in more detail below.
  • image processing system 200 may be implemented as or included in a computing device or computing system.
  • the image processing system 200 may include various devices/systems with computing capabilities such as servers, mainframes, edge computing devices, and terminal devices.
  • the image stitching device 205 includes a plurality of functional modules for implementing various operations in image stitching.
  • the image stitching apparatus 205 includes a distortion correction module 210 configured to perform distortion correction on the plurality of source images 202 based on the LUT 204 (also sometimes referred to herein as a "first LUT").
  • Distortion correction mainly includes perspective transformation of each source image 202 from the pixel coordinate space of the source image to the pixel coordinate space of the composite image 254, correction of distortion of the image acquisition device, and the like.
  • the size of the source images 202 and the size of the composite image 254 can be predetermined. In some examples, the composite image 254 is larger in size than a single source image 202.
  • LUT 204 indicates a distortion correction relationship from pixel coordinates in composite image 254 to pixel coordinates in source image 202.
  • pixel coordinates refer to coordinates in the two-dimensional space of the image, and are used to represent a pixel in the image.
  • the distortion correction module 210 corrects the multiple source images 202 by traversing the pixel coordinates of the composite image 254 based on the distortion correction relationship indicated by the LUT 204, to obtain multiple corrected images 214-1 to 214-M (where M is an integer greater than 1).
  • M can be less than or equal to N.
  • the plurality of corrected images 214 - 1 through 214 -M may be collectively or individually referred to as corrected images 214 .
  • the image stitching device 205 may include a cache module 220 configured to perform cache management on the source image 202 .
  • the cache module 220 includes a cache area, which can be used to cache pixel values corresponding to at least some pixel coordinates of one or more source images 202 .
  • the distortion correction module 210 needs to read the source image 202 when performing distortion correction. At this point, the distortion correction module 210 may send the source pixel coordinates 212 of the source image 202 to be read to the cache module 220 . If the pixel value 222 corresponding to the source pixel coordinate 212 is cached in the cache module 220 , the module can quickly provide the pixel value 222 to the distortion correction module 210 . In some embodiments, if the pixel value 222 is not cached, the cache module 220 may read the pixel value 222 from the storage device 280 , cache the read pixel value 222 and provide it to the distortion correction module 210 .
  • the image stitching apparatus 205 also includes an equalization module 230 configured to perform luminance and/or chrominance equalization on the plurality of corrected images 214 based on a LUT 232 (also sometimes referred to herein as a "second LUT"), that is, to adjust the plurality of corrected images 214 to obtain a plurality of corresponding equalized images 234-1 to 234-M.
  • LUT 232 indicates a luminance and/or chrominance balance relationship for multiple corrected images 214.
  • the plurality of equalized images 234 - 1 through 234 -M may be collectively or individually referred to as equalized images 234 .
  • the image stitching device 205 may further include a histogram statistics module 240 configured to determine respective luminance and/or chrominance histograms of the plurality of corrected images 214 .
  • the determined luma and/or chroma histograms may be used to determine a LUT for luma/chroma equalization.
  • the luma and/or chrominance histograms determined by the histogram statistics module 240 may be provided to the general processing device 290 for the determination of the LUT to be performed by the general processing device 290 .
  • the LUT determined by the general processing device 290 may be provided to the equalization module 230 as the LUT 232 for performing brightness/chromaticity equalization on the current corrected image 214.
  • the LUT determined by the general processing means 290 may be provided for luma/chroma equalization of a corrected image determined from a subsequent source image.
  • the LUT 232 used by the equalization module 230 may be predetermined and not change from one source image to another.
  • the histogram statistics module 240 may be omitted.
  • the image stitching apparatus 205 also includes an image fusion module 250 configured to merge the plurality of equalized images 234 into a composite image 254 based on a LUT 252 (also sometimes referred to herein as a “third LUT”).
  • the LUT 252 indicates how to synthesize the plurality of equalized images 234 into the composite image 254, that is, it indicates the fusion relationship from the respective pixel coordinates of the plurality of equalized images 234 to the pixel coordinates of the composite image 254.
  • the fusion relationship may include a weighted fusion relationship, which means that the pixel values corresponding to the respective pixel coordinates of the plurality of equalized images 234 may be weighted and combined into the pixel values at the pixel coordinates of the composite image 254.
  • the image stitching device 205 can also include a data distribution network 260, which can be connected to the external storage device 280 through a bus 270, so as to read data from the storage device 280 and write data to the storage device 280.
  • the data distribution network 260 can also be configured to transmit the data read from the storage device 280 via the bus 270 to corresponding modules, such as the distortion correction module 210, the cache module 220, the equalization module 230, the histogram statistics module 240 and the image fusion module 250.
  • the data distribution network 260 can also be configured to transmit the data of each module to its destination device via the bus 270 , such as an external storage device 280 and/or a general processing device 290 .
  • the application of LUT enables each module to complete the required operation through fast look-up table operation.
  • the calculation speed can be greatly improved, the time delay can be reduced, and the computing burden of the general-purpose computing device can be reduced, which also reduces the occupation of system bandwidth caused by frequent reading and writing of intermediate results during computation.
  • each module in the image stitching device 205 is briefly introduced above. The specific operations of each module in the image stitching device 205 will be discussed in more detail below with reference to other drawings.
  • FIG. 3 shows a schematic diagram of a processing flow in the image stitching device 205 of FIG. 2 according to some embodiments of the present disclosure.
  • the number of source images to be stitched is four, including source images 202-1, 202-2, 202-3, and 202-4.
  • distortion correction module 210 is configured to read the LUT 204 and the pixel values of the respective source images 202. As mentioned above, the LUT 204 indicates a distortion correction relationship from pixel coordinates in the composite image 254 to one or more source pixel coordinates of at least one source image 202. The distortion correction module 210 is configured to perform distortion correction on the plurality of source images 202 by using the distortion correction relationship indicated by the LUT 204 to obtain a plurality of corrected images 214 in the first-level LUT application stage. In the example of FIG. 3, the plurality of corrected images 214 includes corrected images 214-1 and 214-2. The corrected image 214 may correspond to the pixel coordinate space of the final composite image 254, which means that a pixel coordinate in the corrected image 214 may be mapped to a pixel coordinate in the composite image 254.
  • each source image 202 may be perspective-transformed from the image's pixel coordinate space to the camera coordinate space, distortion correction may be performed in the camera coordinate space, and the distortion-corrected image may then be transformed from the camera coordinate space back to the image's pixel coordinate space.
  • LUT 204 may include a list of entries, each entry including an index of a pixel coordinate of the composite image 254, the image identification (ID) of one or more source images 202, one or more source pixel coordinates therein, and weights corresponding to the one or more source pixel coordinates in the source image 202.
  • the weights here may indicate the distortion correction relationship from pixel coordinates in the composite image 254 to the corresponding source pixel coordinates.
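As an illustration, one possible in-memory representation of a first-LUT entry consistent with this description is sketched below; the field names and the example values are assumptions for illustration, not the patent's actual encoding.

```python
# Illustrative layout of a single first-LUT (distortion correction) entry.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DistortionLutEntry:
    dst_xy: Tuple[int, int]                      # pixel coordinate in the composite image
    sources: List[Tuple[int, int, int, float]]   # (image ID, src_x, src_y, weight)

# Example: a composite pixel fed by two neighbouring pixels of source image 1.
entry = DistortionLutEntry(dst_xy=(120, 45),
                           sources=[(1, 310, 212, 0.7), (1, 311, 212, 0.3)])
```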
  • the LUT 204 can be configured for a specific scene.
  • the corresponding LUT 204 can be configured for a vehicle-mounted surround view stitching scene of a specific vehicle.
  • the pre-configured LUT 204 may each be used to perform distortion correction on multiple source images acquired from multiple digital imaging devices deployed on the vehicle.
  • the LUT 204 can be updated to adapt to such changes without changing the configuration of the distortion correction module 210.
  • the distortion correction module 210 may be configured to read the LUT 204 from an external storage device 280 via the data distribution network 260 .
  • for each pixel coordinate of the composite image 254, the distortion correction module 210 may be configured to determine from the LUT 204 one or more source pixel coordinates 212 of the source images 202 to which that pixel coordinate is mapped.
  • Distortion correction module 210 may read one or more pixel values 222 corresponding to one or more source pixel coordinates 212 of source image 202 from cache module 220 or directly from storage device 280 (in the example without cache module 220 ).
  • the distortion correction module 210 may transform the read pixel value 222 based on the distortion correction relationship indicated by the LUT 204 (eg, the weight corresponding to the source pixel coordinates).
  • the weights corresponding to the multiple source pixel coordinates can be used to perform bilinear interpolation on the read source pixel values 222.
  • the distortion correction module 210 determines the resulting pixel value as the pixel value of the corrected image 214 at the pixel coordinate corresponding to the composite image 254, thereby forming the corrected image 214, which is used for subsequent further processing to obtain the final composite image 254.
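The following is a minimal sketch of the weighted, bilinear-interpolation style combination of the source pixel values read for one composite pixel; the pixel values and weights are illustrative, not taken from the patent.

```python
# Weighted combination of the source pixel values read for one composite pixel.
import numpy as np

def correct_pixel(pixel_values, weights):
    """pixel_values: list of RGB triples read from the source image(s);
    weights: per-coordinate weights from the first LUT (assumed to sum to 1)."""
    vals = np.asarray(pixel_values, dtype=np.float32)
    w = np.asarray(weights, dtype=np.float32)[:, None]
    return np.clip((vals * w).sum(axis=0), 0, 255).astype(np.uint8)

# Four neighbouring source pixels combined with bilinear-style weights.
print(correct_pixel([[200, 10, 10], [190, 12, 9], [180, 15, 8], [170, 20, 5]],
                    [0.42, 0.28, 0.18, 0.12]))
```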
  • the distortion correction module 210 may be configured to perform generation of multiple corrected images 214 in parallel. In some embodiments, distortion correction module 210 may be configured to transform each source image 202 into the pixel coordinate space of composite image 254 through distortion correction. In this way, for each source image 202, a corrected image 214 can be obtained.
  • the distortion correction module 210 may be configured to perform distortion correction on at least two source images 202 mapped to different regions, resulting in one corrected image 214 .
  • corrected image 214-1 is obtained by performing distortion correction from source images 202-1 and 202-3, where source image 202-1 is mainly mapped to the upper portion of corrected image 214-1 (also corresponds to the upper portion of the composite image 254), while the source image 202-3 is mainly mapped to the lower portion of the corrected image 214-1 (also corresponding to the lower portion of the composite image 254).
  • Corrected image 214-2 is obtained by performing distortion correction from source images 202-2 and 202-4, where source image 202-2 is mainly mapped to the left portion of corrected image 214-2 (which also corresponds to the left portion of composite image 254), while the source image 202-4 is mainly mapped to the right portion of the corrected image 214-2 (which also corresponds to the right portion of the composite image 254).
  • the labeling of the source images 202 in the corrected images 214-1 and 214-2 is mainly to indicate the mapping relationship with the source images 202, and does not mean that the corresponding part of the corrected image 214 has the same pixel values as the labeled source image 202.
  • FIG. 4 shows the pixel coordinate space correspondence from the source image 202 to the composite image 254 in an example driving scene.
  • the four digital acquisition devices are respectively deployed in four directions of the vehicle 410, and the four source images (for example, source images 202-1 to 202-4) collected by these devices respectively capture the pictures seen from the vehicle 410 in the four directions of the fields of view 411, 412, 413 and 414.
  • the four source images 202-1 to 202-4 are merged in the surround-view stitching scene of the vehicle.
  • pixel coordinates in image portion 421 may be determined from source images 202-1 and 202-2; pixel coordinates in image portion 422 may be determined from source image 202-1 only ; pixel coordinates in image portion 423 may be determined from source images 202-1 and 202-4; pixel coordinates in image portion 424 may be determined from source image 202-2; pixel coordinates in image portion 426 may be determined from source image 202-4; pixel coordinates in image portion 427 may be determined from source images 202-2 and 202-3; pixel coordinates in image portion 428 may be determined from source image 202-3; and Pixel coordinates in image portion 429 may be determined from source images 202-3 and 202-4.
  • the central region of the composite image 254 may not be mapped to any of the input source images if no image capture device is used to capture the space where the vehicle itself is located.
  • the pixel values in the source image 202-1 are used to determine the upper region of the composite image 254, namely image portions 421, 422 and 423, while the pixel values in the source image 202-3 are used to determine the lower region of the composite image 254, namely image portions 427, 428 and 429.
  • the pixel coordinates of source images 202-1 and 202-3 are thus considered to map to different regions in the composite image 254.
  • the pixel coordinates of source images 202-2 and 202-4 are also considered to be mapped to different regions in the composite image 254, namely the left region composed of image portions 421, 424 and 427, and the right region composed of image portions 423, 426 and 429.
  • the LUT 204 may be configured to indicate the distortion correction relationship from the pixel coordinates of the composite image 254 to the pixel coordinates of the source images 202-1 and 202-3, and also to indicate the distortion correction relationship from the pixel coordinates of the composite image 254 to the pixel coordinates of the source images 202-2 and 202-4.
  • the LUT 204 may include two sub-LUTs, which are respectively used to indicate the aforementioned two distortion correction relationships.
  • the distortion correction module 210 may be configured to generate corresponding corrected images 214-1, 214-2 from different sets of source images 202, respectively.
  • each corrected image 214 may correspond to the composite image 254 .
  • the distortion correction module 210 may be configured to perform distortion correction for source images 202-1 and 202-3 and for source images 202-2 and 202-4 in parallel, thereby improving processing speed.
  • the corrected images 214 - 1 and 214 - 2 obtained after parallel processing are provided to the next processing module, namely the equalization module 230 .
  • the distortion correction module 210 may also perform the generation of multiple corrected images 214 in series.
  • the pixel space coordinate correspondence of the source image to the composite image is shown in FIGS. 3 and 4 , but is given as an example for illustrative purposes only.
  • in practice, the region division of the spatial correspondence between the source images and the composite image may not be a regular shape as shown in Fig. 3 and Fig. 4.
  • one or more pixel coordinates or regions of the composite image may be mapped to pixel coordinates of two or more source images, depending on the arrangement of the digital capture devices of the source images.
  • the LUT 204 can be configured to correctly indicate the distortion correction relationship between the pixel coordinates of the synthesized image and the pixel coordinates of the corresponding source image.
  • the distortion correction module 210 needs to continuously read the source pixel value 222 of the source image 202 in order to determine the pixel value at the pixel coordinate corresponding to the synthesized image 254 in the corrected image 214 .
  • some pixel values in the one or more source images 202 may be cached by the caching module 220 .
  • the distortion correction module 210 may read the required source pixel values from the cache module 220. Compared with directly reading the source pixel value from the storage device 280 every time, reading the source pixel value from the cache module 220 is faster, thereby further speeding up distortion correction.
  • FIG. 5 shows a block diagram of the cache module 220 of FIG. 2 according to some embodiments of the present disclosure.
  • the cache module 220 includes a cache area 510 , a cache supply submodule 520 and a pixel read submodule 530 .
  • the distortion correction module 210 provides the cache module 220 with source pixel coordinates 212 in the source image 202 to be read.
  • the source pixel coordinates 212 include an image ID identifying the source image 202 and a pixel coordinate in the pixel coordinate space of the source image 202 .
  • the cache module 220 determines (512) whether the pixel value corresponding to the source pixel coordinates 212 is cached in the cache area 510, that is, whether there is a cache hit.
  • the cache module 220 determines external storage address information for a corresponding pixel value based on the source pixel coordinates 212 . Whether the corresponding pixel value is stored in the cache area 510 may be determined based on the determined storage address information.
  • the cache module 220 can fetch the cached pixel value from the cache area 510 , and provide the buffered pixel values to the distortion correction module 210 .
  • the cached pixel values retrieved from the cache area 510 may be provided to the cache provisioning sub-module 520, and the cache provisioning sub-module 520 controls the pixel values 222 provided to the distortion correction module 210 based on the control signal.
  • the source pixel coordinates 212 sent to the cache module 220 may be multiple to request a range of pixel values.
  • the cache provisioning sub-module 520 may include a control queue 522 and a data queue 524 .
  • the control queue 522 can be used to arrange control signals from the cache area 510
  • the data queue 524 can be used to arrange pixel values obtained from the cache area 510 and/or from the pixel reading sub-module 530 .
  • the pixel values in the data queue 524 are sequentially provided to the distortion correction module 210 under the control of the control signal.
  • in the case of a cache miss, the cache module 220 can send a control signal to the pixel reading sub-module 530 to request the pixel reading sub-module 530 to obtain the corresponding pixel values.
  • the external storage address of the pixel value to be read can be indicated in the control signal.
  • the pixel reading sub-module 530 sends a read request to an external storage device, such as the storage device 280 , and reads a pixel value from the storage device 280 .
  • the pixel reading sub-module 530 may also receive multiple control signals to request to read corresponding pixel values.
  • the pixel reading sub-module 530 may be configured to perform request deduplication processing 532, so as to deduplicate requests for the same storage address.
  • the pixel values read by the pixel reading sub-module 530 may be provided to the buffer supply sub-module 520 to be provided to the distortion correction module 210 .
  • the pixel values read by the pixel reading sub-module 530 can also be buffered in the cache area 510 , so that they can be quickly provided to the distortion correction module 210 from the cache area 510 when being read later.
  • in some embodiments, the distortion correction corresponding to the pixel coordinates is determined line by line. The pixel coordinates in each row of the composite image may map to points in different rows of the source image, and the row span can be relatively large (for example, the maximum span may exceed 200 rows). As a result, it may be necessary to repeatedly read discrete pixel values in the source image when performing distortion correction by progressive scanning. If the data were read from the external storage device every time, a large amount of bandwidth would be consumed, the read delay would be large, and performance would be poor. By introducing the cache, the processing speed of image stitching can be increased.
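A minimal sketch of the hit/miss read path described for the cache module is shown below; the control and data queues and the request de-duplication of the hardware design are omitted, and the class and field names are illustrative assumptions.

```python
# Simplified cache read path: hit -> serve from the cache area,
# miss -> fetch from external storage, cache the value, then serve.
class PixelCache:
    def __init__(self, storage, capacity):
        self.storage = storage          # stand-in for the external storage device
        self.capacity = capacity
        self.lines = {}                 # (image_id, y, x) -> pixel value

    def read(self, image_id, y, x):
        key = (image_id, y, x)
        if key in self.lines:                       # cache hit
            return self.lines[key]
        value = self.storage[image_id][y][x]        # cache miss: read external storage
        if len(self.lines) >= self.capacity:        # naive eviction policy
            self.lines.pop(next(iter(self.lines)))
        self.lines[key] = value
        return value
```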
  • the synthesized image 254 can also be divided into multiple blocks, and the distortion correction module 210 can perform distortion correction on the source image block by block.
  • the distortion correction module 210 can determine the source pixel coordinates 212 in the source image 202 for each target pixel coordinate in the block in a progressive scanning manner, and obtain the pixel values 222 of the source pixel coordinates 212 to perform distortion correction.
  • the distortion correction module 210 may also traverse multiple blocks in a progressive scanning manner.
  • FIG. 6 illustrates an example of image block division according to some embodiments of the present disclosure.
  • the entire image area of composite image 254 may be divided into a plurality of blocks (identified by numbers 0, 1, 2, 3, 4, 5, 6, 7, etc. in FIG. 6).
  • each block is processed as the target block.
  • pixel values in one or more blocks of one or more source images 202 may be mapped to the pixel coordinates in a target block of the composite image 254. Note that although a division of blocks in the source image 202 is shown in FIG. 6, these blocks do not necessarily correspond one-to-one with blocks in the composite image 254.
  • the distortion correction module 210 may be configured to perform distortion correction for each block in the composite image 254.
  • the distortion correction module 210 may be configured to scan the target pixel coordinates of the block line by line in a zigzag order 610 .
  • the distortion correction module 210 may be configured to scan the plurality of blocks one by one in a zigzag order 620 until the distortion correction for the source image 202 is completed for the entire composite image 254 .
  • distortion correction module 210 may be configured to perform distortion correction as discussed above.
  • the pixel coordinates of the block are determined by progressive scanning, and the LUT 204 is used to determine one or more source pixel coordinates 212 in the source image 202 to which the target pixel coordinates scanned in the block are mapped.
  • the distortion correction module 210 may read the pixel values 222 of the source pixel coordinates 212 of the source image 202 based on the source pixel coordinates 212 .
  • the distortion correction module 210 can read the pixel value 222 through the cache module 220 . Then, the distortion correction module 210 can transform the read pixel value 222 based on the distortion correction relationship indicated by the LUT 204, so as to obtain the pixel value of the corrected image 214 at the target pixel coordinate.
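The block-by-block traversal described above can be sketched as follows. The helpers lut_lookup(), read_source_pixel() and transform() stand in for the first-LUT access, the cache read and the weighted pixel transform discussed earlier; they and the block tuple format are assumptions for illustration.

```python
# Block-by-block traversal of the composite image for distortion correction.
def correct_blockwise(blocks, lut_lookup, read_source_pixel, transform):
    corrected = {}
    for (y0, x0, h, w) in blocks:                     # traverse blocks in scan order
        for y in range(y0, y0 + h):                   # progressive scan inside a block
            for x in range(x0, x0 + w):
                sources = lut_lookup(y, x)            # [(img_id, sy, sx, weight), ...]
                values = [read_source_pixel(i, sy, sx) for i, sy, sx, _ in sources]
                weights = [wgt for *_, wgt in sources]
                corrected[(y, x)] = transform(values, weights)
    return corrected
```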
  • the cache space of the cache module 220 is limited and may not be able to store all the source images 202 .
  • the corresponding pixel values may be cached in the cache module 220 in the scan order of the blocks of the composite image 254 .
  • cache utilization can be improved through a zigzag scan order. For example, the pixel values in the source image 202 to which the target pixel coordinates in a block of the synthesized image 254 are mapped can be determined in scanning order, and then these pixel values are cached in the cache area 510 .
  • previously cached pixel values may be reused, which can avoid the cache module 220 repeatedly reading the same pixel values and improve cache read and write performance.
  • for example, the pixel values in the source image 202 to which the pixel coordinates of the block in the upper right corner of the composite image 254 (the block marked with "0") are mapped may first be cached in the cache module 220. In this way, when that block is scanned, the corresponding pixel values can be quickly read from the cache module 220. As blocks are scanned, pixel values for subsequent blocks may continue to be cached.
  • regions at different locations in the composite image 254 may be mapped to regions of different sizes in the source image 202 due to different degrees of distortion at different locations. Therefore, in some embodiments, composite image 254 may be divided into multiple blocks of different sizes. In some embodiments, more blocks may be divided at the edges of the composite image 254, each block having a smaller size. In some embodiments, fewer blocks may be divided at the center of the composite image 254, and each block may have a larger size. That is, the size of the block located at the edge of the composite image 254 may be smaller than the size of the block located at the center of the composite image 254. For example, in the example of FIG. 6, the blocks in the composite image 254 marked by the numbers "0" and "3" may be smaller than the blocks in the center marked by the numbers "5" and "6".
  • This division takes into account that distortion is usually higher at the edges of the composite image, so each pixel coordinate there may be mapped to more source images and/or more source pixel coordinates. If all blocks had the same size, the blocks at the edge would therefore require more pixel values to be cached. By adjusting the block sizes, the cache space can be used more fully and the cache hit rate can be improved.
  • when the composite image 254 is divided into blocks for scanning, different numbers of blocks can be used according to the actual application (for example, the size of the source/composite images, the size of the cache space, etc.), and the size of each block can also be configured as needed.
  • for example, the composite image 254 can be divided into 4 blocks per row, where block 0 has a size of 32x32 pixels, block 1 has a size of 64x32 pixels, block 4 has a size of 32x64 pixels, block 5 has a size of 64x64 pixels, and so on.
  • other block division methods and block size settings are also feasible.
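One illustrative way to build such a non-uniform block grid, using the 32-pixel edge blocks and 64-pixel interior blocks from the example above, is sketched below; this is only one possible construction under those assumed sizes, not the patent's division method.

```python
# Build a block grid with smaller blocks along the border and larger blocks inside.
def divide_into_blocks(height, width, edge=32, center=64):
    def spans(total):
        cuts, pos = [0, edge], edge
        while pos + center < total - edge:
            pos += center
            cuts.append(pos)
        cuts += [total - edge, total]
        return [(a, b) for a, b in zip(cuts[:-1], cuts[1:]) if b > a]
    return [(y0, x0, y1 - y0, x1 - x0)
            for (y0, y1) in spans(height)
            for (x0, x1) in spans(width)]

# For a 256x256 composite image this yields 32-px blocks on the border
# and 64-px blocks in the middle.
print(divide_into_blocks(256, 256)[:4])
```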
  • the distortion correction module 210 may provide the resulting plurality of corrected images 214 - 1 and 214 - 2 to the equalization module 230 .
  • the equalization module 230 is configured to obtain a LUT 232 and use the LUT 232 to adjust the brightness of the plurality of corrected images 214-1 and 214-2 in the secondary LUT processing stage to obtain corresponding plurality of equalized images 234-1 and 234-2 .
  • LUT 232 indicates a luma/chroma balance relationship for multiple corrected images 214. In some embodiments, LUT 232 may indicate a luma/chroma balance relationship for each corrected image 214.
  • LUT 232 may indicate a mapping of RGB values in that corrected image 214 to equalized RGB values. By adjusting the RGB values, brightness and/or chroma balance can be achieved. During brightness and/or chroma equalization, using the LUT 232, the equalization module 230 can adjust the corresponding brightness value in the corrected image 214 to an equalized brightness value. By performing luma/chroma equalization on the plurality of corrected images 214-1 and 214-2, the resulting equalized images 234-1 and 234-2 may have balanced and uniform luma and/or chroma.
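A sketch of applying such a second LUT is shown below: a 256-entry value map per colour channel is looked up for every pixel of a corrected image. The gamma-style curve used to build the example LUT is only an illustrative placeholder for whatever equalization relationship the LUT actually encodes.

```python
# Apply a per-channel 256-entry value map (second LUT) to a corrected image.
import numpy as np

def apply_equalization_lut(corrected, lut_rgb):
    """corrected: HxWx3 uint8 image; lut_rgb: 3x256 uint8 value maps."""
    out = np.empty_like(corrected)
    for c in range(3):                               # look up each colour channel
        out[..., c] = lut_rgb[c][corrected[..., c]]
    return out

# Illustrative LUT: brighten all three channels with the same curve.
lut = np.tile((255 * (np.arange(256) / 255.0) ** 0.8).astype(np.uint8), (3, 1))
```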
  • the equalization module 230 can obtain the LUT 232 to be used from the storage device 280 via the data distribution network 260.
  • the plurality of corrected images 214 may be provided to the histogram statistics module 240, which determines histogram statistics 242 of the luma values of the plurality of corrected images 214, and provides the histogram statistics 242 to the general processing device 290.
  • the histogram statistical module 240 can write the histogram statistical result 242 into the storage device 280 and the general processing device 290 can read the statistical result from the storage device 280 .
  • the general processing device 290 may determine or update the LUT 232 used by the equalization module 230 based on the histogram statistics 242.
  • the histogram statistics module 240 may perform brightness and/or chrominance histogram statistics on overlapping regions of the multiple corrected images 214 .
  • the histogram statistics module 240 may support multi-color channel, multi-region luma and/or chrominance histogram statistics. Based on the histogram statistical results 242, the general processing device 290 can use various luma/chroma equalization algorithms to determine the luma/chroma equalization relationship, thereby obtaining the LUT 232.
  • the LUT 232 determined based on the histogram statistics 242 of the plurality of corrected images 214 may be provided back to the equalization module 230 for performing luma/chroma equalization on the plurality of corrected images 214.
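As an illustration of this step, the sketch below shows one simple way a general processing device could turn overlap-region brightness histograms into per-image equalization LUTs, by matching each image's mean brightness to the mean over all images with a gain. This mean-matching rule is only an assumed, illustrative algorithm; the patent does not prescribe this particular equalization method.

```python
# Derive per-image brightness-gain LUTs from overlap-region histograms (illustrative).
import numpy as np

def gain_lut_from_histograms(overlap_histograms):
    """overlap_histograms: one 256-bin brightness histogram per corrected image."""
    levels = np.arange(256)
    hists = [np.asarray(h, dtype=np.float64) for h in overlap_histograms]
    means = [(h * levels).sum() / max(h.sum(), 1.0) for h in hists]
    target = float(np.mean(means))
    luts = []
    for m in means:
        gain = target / max(m, 1e-6)                 # per-image brightness gain
        luts.append(np.clip(levels * gain, 0, 255).astype(np.uint8))
    return luts
```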
  • the LUT 232 determined from the histogram statistical results of previously obtained corrected images can be used to perform luma/chroma equalization on the multiple corrected images 214 currently being processed.
  • the LUT 232 used for luma/chroma equalization may be fixed for a particular scene. In this case, the histogram statistics module 240 can be omitted.
  • the plurality of equalized images 234 - 1 and 234 - 2 subjected to luma/chroma equalization may be provided to the image fusion module 250 .
  • Image fusion module 250 is configured to obtain LUT 252 and utilize LUT 252 to combine the multiple equalized images 234-1 and 234-2 to obtain the composite image 254 in the third-level LUT processing stage.
  • the image fusion module 250 can be configured to read the LUT 252 from an external storage device 280 via the data distribution network 260.
  • the LUT 252 indicates a blending relationship from the respective pixel coordinates of the plurality of equalized images 234 to the pixel coordinates of the synthesized image 254.
  • the pixel space coordinates of the equalized image 234 and the pixel space coordinates of the synthesized image 254 may also correspond. Accordingly, respective pixel coordinates of the two equalized images 234 may be mapped to corresponding pixel coordinates of the composite image 254 .
  • LUT 252 may include a list of entries, each entry including an index of a pixel coordinate of the composite image 254, indices of the corresponding pixel coordinates of the plurality of equalized images 234, and weights for those pixel coordinates of the plurality of equalized images 234. The weights may indicate the fusion relationship between the pixel coordinates of the equalized images 234 and the pixel coordinates of the composite image 254.
  • the weights in LUT 252 can be determined by various image fusion algorithms. Examples of image fusion algorithms may include alpha fusion algorithms, multiband fusion algorithms, and the like.
  • the image fusion module 250 can read the pixel values of the corresponding pixel coordinates in the balanced images 234-1 and 234-2 by scanning each pixel coordinate of the synthesized image 254 line by line, and based on the LUT 252 The indicated weights are used to fuse the read pixel values to obtain the pixel values at the pixel coordinates of the synthesized image 254 .
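The per-pixel fusion driven by such an entry can be sketched as below (an alpha-blend style combination whose weights sum to 1); the entry layout mirrors the illustrative first-LUT entry earlier and is an assumption, not the patent's encoding.

```python
# Fuse one composite pixel from a third-LUT entry listing equalized-image
# pixel coordinates and their blend weights.
def fuse_pixel(entry, equalized_images):
    """entry: (dst_xy, [(image_index, y, x, weight), ...]); weights sum to 1."""
    _, taps = entry
    r = g = b = 0.0
    for idx, y, x, w in taps:
        pr, pg, pb = equalized_images[idx][y][x]
        r, g, b = r + w * pr, g + w * pg, b + w * pb
    return (int(round(r)), int(round(g)), int(round(b)))
```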
  • Figure 3 depicts the stitching of a set of source images.
  • video streams can be collected continuously from multiple image acquisition devices, and video frames at different time points may need to be stitched to obtain panoramic composite images.
  • the LUTs used in the first-level LUT and the third-level LUT may be pre-configured.
  • the LUT 232 for luma/chroma equalization in the secondary LUT can be updated in real-time or based on the luma and/or chroma histogram statistics of the corrected image obtained from previous video frame processing during the image stitching process.
  • due to the predetermined module configuration and LUTs, the number of source images input to the image stitching device 205 for synthesis is predetermined (in the example of FIG. 3, this predetermined number is 4), and the numbers of corrected images 214 and equalized images 234 to be processed in the middle may also be predetermined (in the example of FIG. 3, this predetermined number is 2).
  • the image stitching device 205 can be used to stitch other numbers of source images by pre-configuring the modules and pre-determining the LUTs, and the numbers of corrected images 214 and equalized images 234 processed in the middle can also vary.
  • the image stitching process according to the embodiments of the present disclosure can be further extended to introduce other processing required in the image stitching process.
  • such processing can be realized by introducing corresponding functional modules into the image stitching device 205, or through a processing algorithm in a general processing device.
  • the number of source images input to the image stitching device 205 for composition may be predetermined.
  • the number of source images input to the image stitching device 205 is four.
  • the image stitching device 205 can be reused for stitching more than or fewer than the predetermined number of source images.
  • If the number of source images to be stitched is less than the predetermined number, the source images can be input directly to the image stitching device 205, or one or more blank images can be added to form the predetermined number of input images together with the source images.
  • If the number of source images to be stitched is greater than the predetermined number, the image stitching device 205 may be invoked multiple times to implement the stitching. In each invocation, the number of images provided to the image stitching device 205 as input may be equal to or less than the predetermined number. For example, if the number of source images to be stitched is an integer multiple of the predetermined number, the predetermined number of source images may be input each time.
  • If the number of source images to be stitched is not an integer multiple of the predetermined number, fewer than the predetermined number of source images may be input in some invocations. The multiple composite images generated by the image stitching device 205 over these invocations may then be used as input and provided to the image stitching device 205 again for stitching, until a final composite image is obtained. In some embodiments, if the number of input images is less than the predetermined number and is not supplemented with blank images, an indication may also be provided to the image stitching device 205 (e.g., by the general processing device 290) to indicate the number of input images. In this way, the image stitching device 205 knows the exact number of images to access during processing.
  • Whether blank images are added or only some of the source images are processed, the multiple LUTs required in the multi-level LUT processing can be preset.
  • When invoking the image stitching device 205, the general processing device 290 can control the images input to it in each invocation and the LUTs to be used at each level, so as to obtain a correct composite image; a sketch of this multi-invocation control flow follows below.
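  • A minimal sketch of this multi-invocation flow, assuming a stitcher that accepts at most four images per call; `stitch_once` stands in for one invocation of the image stitching device 205 and `lut_sets` for the per-invocation LUTs selected by the general processing device, so all names are illustrative assumptions rather than the disclosed interface, and the sequential grouping is only one of the arbitrary groupings the disclosure allows.

```python
def stitch_in_groups(sources, lut_sets, stitch_once, group_size=4):
    """Stitch any number of source images with a fixed-width stitcher.

    stitch_once(images, luts): one invocation of the image stitching device.
    lut_sets: list of LUT sets, one per invocation (last entry for the final pass).
    """
    composites = []
    for i in range(0, len(sources), group_size):
        group = sources[i:i + group_size]          # may hold fewer than group_size
        composites.append(stitch_once(group, lut_sets[len(composites)]))
    if len(composites) == 1:
        return composites[0]
    # Final pass: feed the intermediate composites back in; their first- and
    # second-level LUTs are configured as pass-through so only fusion applies.
    return stitch_once(composites, lut_sets[len(composites)])
```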
  • FIG. 7 shows a schematic diagram of an example extended processing flow of the image stitching device 205 according to some embodiments of the present disclosure.
  • In the example of FIG. 7, eight source images 702-1 to 702-8 (collectively or individually referred to as source images 702) are to be stitched together to obtain a panoramic stitched image.
  • These source images 702 may respectively present views covering the 360-degree directions around a certain position in the capture space.
  • The eight source images 702-1 to 702-8 can be divided into two groups, and each group has four source images.
  • the first set includes source images 702-1, 702-3, 702-5, and 702-7
  • the second set includes source images 702-2, 702-4, 702-6, and 702-8.
  • The source images may be grouped in any manner, and the embodiments of the present disclosure are not limited in this respect.
  • source images 702 in the same group may be mapped into different regions of final composite image 780 .
  • The source images 702-1, 702-3, 702-5 and 702-7 of the first group can first be provided as input to the image stitching device 205.
  • Each module in the image stitching device 205 processes the source images 702-1, 702-3, 702-5, and 702-7 according to a process similar to that discussed in FIG. 3 .
  • In the multi-level LUT processing, LUT 710 is provided to the distortion correction module 210, which uses it to perform distortion correction on these source images, producing corrected images 715-1 and 715-2.
  • LUT 720 is provided to the equalization module 230, which uses it to perform luma/chroma equalization on the corrected images 715-1 and 715-2, producing equalized images 725-1 and 725-2.
  • The equalized images 725-1 and 725-2 are provided to the image fusion module 250, which performs image fusion using LUT 730 as the third-level LUT and outputs composite image 750-1.
  • the second set of source images 702 - 2 , 702 - 4 , 702 - 6 and 702 - 8 are then also input to the image stitching device 205 .
  • Each module in the image stitching device 205 processes the source images 702-2, 702-4, 702-6, and 702-8 according to a process similar to that discussed in FIG. 3 .
  • In the multi-level LUT processing, LUT 712 is provided to the distortion correction module 210, which uses it to perform distortion correction on these source images, producing corrected images 715-3 and 715-4.
  • LUT 722 is provided to the equalization module 230, which uses it to perform luma/chroma equalization on the corrected images 715-3 and 715-4, producing equalized images 725-3 and 725-4.
  • The equalized images 725-3 and 725-4 are provided to the image fusion module 250, which performs image fusion using LUT 730 as the third-level LUT and outputs composite image 750-2.
  • Next, the composite images 750-1 and 750-2 obtained from the first two invocations of the image stitching device 205 may be provided to the image stitching device 205 again as input for stitching.
  • To meet the input requirements of the image stitching device 205, blank images 760-1 and 760-2 may also be input; these blank images have the same size as the composite images 750-1 and 750-2, but the pixel value at each pixel coordinate is 0 (or empty).
  • Alternatively, to avoid the memory occupation and data access overhead of blank images, an indication may be provided to the image stitching device 205 so that each of its modules implements image stitching in a two-input-image mode.
  • Since the composite images 750-1 and 750-2 need no further distortion correction or luma/chroma equalization, in the modules 770 ahead of the image fusion module (at least the distortion correction module 210 and the equalization module 230), the distortion correction relationship in the first-level LUT and the luma/chroma equalization relationship in the second-level LUT may be configured not to transform the pixel values of the composite images 750-1 and 750-2.
  • The "equalized images" 775-1 and 775-2 provided by the equalization module 230 to the image fusion module 250 may then be identical to the composite images 750-1 and 750-2, respectively.
  • The image fusion module 250 may then acquire LUT 736 and use it to perform image fusion on the "equalized images" 775-1 and 775-2, obtaining the final composite image 780.
  • In the image stitching process of FIG. 7, the general processing device 290 can control the invocations of the image stitching device 205, as well as the input images and the LUTs provided to it in each invocation.
  • In the example of FIG. 7, the LUT used at each level of LUT processing may be predetermined for each invocation.
  • In this way, the pre-configured image stitching device 205 can be reused across various applications, making the dedicated image stitching device 205 more flexible to deploy.
  • Fig. 8 shows a schematic flowchart of a method 800 for image stitching according to some embodiments of the present disclosure.
  • the method 800 may be implemented at the image stitching device 205 . It should be understood that method 800 may also include additional actions not shown and/or actions shown may be omitted. The scope of the present disclosure is not limited in this respect.
  • At block 810, the image stitching device 205 performs distortion correction on a plurality of source images using a first LUT to obtain a plurality of corrected images, the first LUT indicating a distortion correction relationship from pixel coordinates in the composite image to at least one source pixel coordinate in the plurality of source images.
  • At block 820, the image stitching device 205 adjusts the brightness of the multiple corrected images using the second LUT to obtain multiple equalized images, the second LUT indicating the luma and/or chroma equalization relationship for the multiple corrected images; one way such a value-mapping LUT could be applied is sketched below.
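  • As an illustration only (not the disclosed implementation), block 820's lookup could be applied as below, assuming one 256-entry value-mapping table per corrected image; the per-channel RGB mappings mentioned elsewhere in the disclosure would be applied the same way.

```python
import numpy as np

def apply_equalization_luts(corrected_images, value_luts):
    """Apply one 256-entry luma/chroma mapping table to each corrected image.

    corrected_images: list of uint8 numpy arrays; value_luts: one uint8
    array of shape (256,) per image. numpy fancy indexing performs the
    per-pixel table lookup in a single step.
    """
    return [lut[img] for img, lut in zip(corrected_images, value_luts)]
```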
  • At block 830, the image stitching device 205 merges the plurality of equalized images into a composite image using a third LUT, the third LUT indicating a fusion relationship from the respective pixel coordinates of the plurality of equalized images to the pixel coordinates of the composite image.
  • In some embodiments, the method further includes caching pixel values of the plurality of source images in a cache area (e.g., in the cache module 220).
  • the image stitching device 205 reads pixel values from a cache for performing distortion correction.
  • When performing distortion correction on the plurality of source images, the image stitching device 205 may, for each target block among the multiple blocks into which the composite image is divided: determine the target pixel coordinates of the target block by line-by-line scanning; determine, using the first LUT, at least one source pixel coordinate in the plurality of source images to which a target pixel coordinate in the target block is mapped; read the pixel value of the at least one source pixel coordinate; and transform the read pixel value based on the distortion correction relationship, indicated by the first LUT, from the target pixel coordinate to the at least one source pixel coordinate.
  • The image stitching device 205 reads the pixel value of the at least one source pixel coordinate from the cache area.
  • The cache area caches at least the pixel values, in the plurality of source images, of the source pixel coordinates that are mapped to the target block.
  • The multiple blocks may differ in size, with blocks located at the edges of the composite image being smaller than blocks located at the center of the composite image; a block-wise correction loop with a small pixel cache is sketched below.
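  • The sketch below illustrates one target block being corrected in this way, with a dictionary standing in for the cache module and a LUT that maps each composite coordinate to weighted source coordinates; the data layout and helper names are assumptions for illustration, not the disclosed hardware design.

```python
import numpy as np

def correct_block(block_coords, distortion_lut, sources, cache):
    """Distortion-correct one target block of the composite image.

    block_coords: (y, x) composite coordinates of the block, scanned row by row.
    distortion_lut: maps (y, x) -> list of (image_id, sy, sx, weight) entries.
    sources: dict of image_id -> source image as a numpy array.
    cache: dict keyed by (image_id, sy, sx), standing in for the cache module.
    """
    out = {}
    for y, x in block_coords:                          # line-by-line scan in block
        value = 0.0
        for image_id, sy, sx, weight in distortion_lut.get((y, x), []):
            key = (image_id, sy, sx)
            if key not in cache:                       # cache miss: fetch the pixel
                cache[key] = float(sources[image_id][sy, sx])
            value += weight * cache[key]               # weighted (bilinear-style) mix
        out[(y, x)] = value
    return out
```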
  • Performing distortion correction on the plurality of source images may include performing the following operations in parallel: performing distortion correction on a first source image and a second source image among the plurality of source images to obtain a first corrected image, where the pixel coordinates of the first source image and the second source image are mapped to different parts of the composite image; and performing distortion correction on a third source image and a fourth source image of the plurality of source images to obtain a second corrected image, where the pixel coordinates of the third source image and the fourth source image are mapped to different parts of the composite image.
  • the image stitching device 205 further determines respective luminance and/or chrominance histograms of the plurality of corrected images, and the second LUT is determined based on the determined luminance and/or chrominance histograms.
  • The determined luma and/or chroma histograms are provided to the general processing device 290, which determines the second LUT from them.
  • In some embodiments, the LUTs that the general processing device 290 determines from these histograms are used to adjust another plurality of corrected images, obtained by distortion correction from another plurality of source images (e.g., corrected images from subsequent video frames); one illustrative way to derive such a LUT from histogram statistics is sketched below.
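  • Since the disclosure leaves the equalization algorithm open, the sketch below is only one possible (assumed) way for the general processing device to turn histogram statistics into second-level LUTs, by matching each corrected image's luminance distribution to their average.

```python
import numpy as np

def equalization_luts_from_histograms(corrected_images):
    """Derive one 256-entry luminance LUT per corrected image (illustrative).

    corrected_images: list of uint8 numpy arrays with values in 0..255.
    """
    cdfs = []
    for img in corrected_images:
        hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
        cdfs.append(np.cumsum(hist) / hist.sum())       # per-image normalized CDF
    reference = np.mean(cdfs, axis=0)                    # common target distribution
    luts = []
    for cdf in cdfs:
        # Map each input level to the reference level with the closest CDF value.
        lut = np.searchsorted(reference, cdf).clip(0, 255).astype(np.uint8)
        luts.append(lut)
    return luts
```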
  • In some embodiments, the image stitching device 205 also performs distortion correction on a second plurality of source images using a fourth LUT to obtain a second plurality of corrected images, the fourth LUT indicating a distortion correction relationship between pixel coordinates of a second composite image and at least one source pixel coordinate in the second plurality of source images; adjusts the brightness and/or chroma of the second plurality of corrected images using a fifth LUT to obtain a second plurality of equalized images, the fifth LUT indicating a luma and/or chroma equalization relationship for the second plurality of corrected images; and merges the second plurality of equalized images into the second composite image using a sixth LUT, the sixth LUT indicating a fusion relationship between the pixel coordinates of the second composite image and the pixel coordinates of the second plurality of equalized images.
  • In some embodiments, the image stitching device 205 also obtains a seventh LUT, which indicates the fusion relationship between the pixel coordinates of the first composite image and the pixel coordinates of the second composite image, and uses the seventh LUT to composite the first composite image and the second composite image into a third composite image.
  • The various embodiments of the present application may be implemented in hardware or special purpose circuits, software, logic, or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software, which may be executed by a controller, microprocessor, or other computing device. While various aspects of the embodiments of the present application are shown and described as block diagrams, flowcharts, or some other pictorial representation, it should be understood that the blocks, devices, systems, techniques, or methods described herein may be implemented as, by way of non-limiting example, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controllers or other computing devices, or some combination thereof.
  • the present application also provides at least one computer program product tangibly stored on a non-transitory computer-readable storage medium.
  • The computer program product includes computer-executable instructions, such as instructions included in program modules, which are executed on a real or virtual processor of a target device to perform the processes/methods described in the above embodiments.
  • program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • the functionality of the program modules may be combined or divided as desired among the program modules.
  • Machine-executable instructions for program modules may be executed within local or distributed devices. In a distributed device, program modules may be located in both local and remote storage media.
  • Program codes for implementing the methods of the present application may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented.
  • the program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer discs, hard drives, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

本公开涉及用于图像拼接的装置、系统和相关联的方法。用于图像拼接的装置包括畸变校正模块,其被配置为:获取第一查找表(LUT),以及利用第一LUT,对多个源图像执行畸变校正,得到多个校正图像。用于图像拼接的装置还包括均衡模块,其被配置为:获取第二LUT以及利用第二LUT来调整多个校正图像的亮度,得到多个均衡图像;以及图像融合模块,被配置为:获取第三LUT以及利用第三LUT,将多个均衡图像合并为合成图像。这样的图像拼接方案能够由专用处理设备来实现,从而提高处理性能,加快处理速度,并且降低了对系统带宽的需求。

Description

用于图像拼接的装置、系统和相关联的方法 技术领域
本公开涉及图像处理领域,更具体而言涉及用于图像拼接的装置、系统和相关联的方法。
背景技术
将具有不同视场的数字成像设备拍摄到的多个图像进行拼接,可以得到视场宽度较大的全景图像。这样的图像拼接技术可以被应用到多种领域中。例如,对于驾驶场景,可以将部署在车辆不同位置的成像设备采集到的多个图像进行拼接,从而得到车辆的360°的环视全景图。对于现场监控场景,可以将从不同位置采集到的多个图像进行拼接,得到现场的更宽视野的监控画面。图像拼接过程涉及大量复杂的图像处理,计算开销较大,时延较高,有时可能难以满足对实时性要求高的场景。
发明内容
本公开的实施例提供了一种用于图像拼接的方案。
在本公开的第一方面,提供了一种用于图像拼接的装置。该装置包括:畸变校正模块,被配置为:获取第一查找表(LUT),第一LUT指示从合成图像中的像素坐标到多个源图像中的至少一个源像素坐标的畸变校正关系,以及利用第一LUT,对多个源图像执行畸变校正,得到多个校正图像;均衡模块,被配置为:获取第二LUT,第二LUT指示针对多个校正图像的亮度和/或色度均衡关系,以及利用第二LUT来调整多个校正图像,得到多个均衡图像;以及图像融合模块,被配置为:获取第三LUT,第三LUT指示从多个均衡图像各自的像素坐标到合成图像的像素坐标之间的融合关系,以及利用第三LUT,将多个均衡图像合并为合成图像。
在本公开中,通过在专用处理设备中,通过预先配置多级LUT来实现图像拼接的多个操作,可以提高处理性能,加快处理速度,并且降低了对系统带宽的需求。
在第一方面的一种实现方式中,该装置还包括:高速缓存模块,用于缓存多个源图像的像素值,其中畸变校正模块被配置从高速缓存模块读取像素值以用于执行畸变校正。通过设置高速缓存模块,可以减少在从外部存储装置读取数据时导致的大量带宽消耗和高延时,从而能实现对源图像中的像素值的快速读取,提升处理速度。
在第一方面的又一种实现方式中,畸变校正模块被配置为:针对由合成图像分割的多个区块中的每个目标区块,通过逐行扫描确定目标区块中的目标像素坐标;利用第一LUT,确定目标区块中的目标像素坐标映射到的多个源图像中的至少一个源像素坐标;读取至少一个源像素坐标的像素值;以及基于第一LUT所指示的从目标像素坐标到至少一个源像素坐标的畸变校正关系,来变换所读取的至少一个源像素坐标的像素值。通过遍历多个区块和区块内遍历,例如通过在区块内的Z字形扫描和跨区块的Z字形扫描,可以快速、准确地完成畸变校正。
在第一方面的又一种实现方式中,畸变校正模块被配置从高速缓存模块读取至少一个源像素坐标,高速缓存模块至少缓存多个源图像中被映射到目标区块的源像素坐标的像素值。通过针对合成图像的逐个区块来执行畸变校正,高速缓存模块中缓存的像素值可以被更多次 命中,提高了缓存的像素值的利用率。因此,利用较小的高速缓存区就能实现较高的缓存命中率。
在第一方面的又一种实现方式中,多个区块的尺寸不同,并且位于合成图像的边缘的区块的尺寸比位于合成图像的中心的区块的尺寸更小。这样划分方式是考虑到,通常在合成图像的边缘处畸变程度更高,因此每个像素坐标可能会被映射到更多源图像和/或更多源像素坐标。因此,如果尺寸相同,对于边缘处的区块,需要缓存的像素值将会更多。因此,通过调整区块的尺寸,可以更充分利用高速缓存空间,提高缓存命中率。
在第一方面的又一种实现方式中,畸变校正模块被配置为并行执行以下操作:对多个源图像中的第一源图像和第二源图像执行畸变校正,得到第一校正图像,第一源图像和第二源图像的像素坐标被映射到合成图像中的不同部分;以及对多个源图像中的第三源图像和第四源图像执行畸变校正,得到第二校正图像,第三源图像和第四源图像的像素坐标被映射到合成图像中的不同部分。被映射到合成图像的不同部分的源图像可以在同一处理流程中被畸变校正,多个并行处理流程得到的校正图像可以后续被容易地合并,这样不仅简化畸变校正处理流程,而且并行处理还能提高处理效率。
在第一方面的又一种实现方式中,该装置还包括:直方图统计模块,被配置为确定多个校正图像各自的亮度和/或色度直方图,第二LUT基于确定的亮度和/或色度直方图所确定。通过亮度/色度直方图统计,可以确定出更准确的用于亮度和/或色度均衡的LUT。
在第一方面的又一种实现方式中,畸变校正模块还被配置为:获取第四LUT,第四LUT指示第二合成图像的像素坐标与第二多个源图像中的至少一个源像素坐标之间的畸变校正关系,以及利用第四LUT来对第二多个源图像执行畸变校正,得到第二多个校正图像。均衡模块被配置为:获取第五LUT,第五LUT指示针对第二多个校正图像的亮度和/或色度均衡关系,以及利用第五LUT来调整第二多个校正图像,得到第二多个均衡图像。在第一方面的又一种实现方式中,图像融合模块还被配置为:获取第六LUT,第六LUT指示第二合成图像的像素坐标与第二多个均衡图像的像素坐标之间的融合关系,以及利用第六LUT,将第二多个均衡图像合并为第二合成图像。在不改变对各个模块的功能配置的情况下,可以通过控制提供到图像拼接装置的源图像和LUT,本公开的用于图像拼接的装置可以被复用于不同场景下的图像拼接。
在第一方面的又一种实现方式中,图像融合模块还被配置为:获取第七LUT,第七LUT指示第一合成图像的像素坐标与第二合成图像的像素坐标之间的融合关系,以及利用第七LUT,将第一合成图像和第二合成图像合成为第三合成图像。本公开的用于图像拼接的装置还可以通过被重复调用,用于对更大数目的源图像进行拼接。
在第一方面的又一种实现方式中,装置包括专用集成电路(ASIC)芯片。在第一方面的又一种实现方式中,多个源图像、第一LUT、第二LUT和第三LUT从ASIC芯片的第一外部存储装置被获取。在第一方面的又一种实现方式中,其中合成图像被写入ASIC芯片的第二外部存储装置。在ASIC芯片上的实现可以进一步提高专用于图像拼接的任务的处理。
在本公开的第二方面,提供了一种图像处理系统。该系统包括:根据第一方面中的任一种实现方式的装置;以及至少一个存储装置,用于存储多个源图像、第一LUT、第二LUT和第三LUT。
在第二方面的一种实现方式中,该系统还包括:通用处理装置,被配置为基于多个校正图像各自的亮度和/或色度直方图,确定第二LUT。
在本公开的第三方面,提供了一种用于图像拼接的方法。该方法包括:利用第一查找表(LUT)对多个源图像执行畸变校正,得到多个校正图像,第一LUT指示从合成图像中的像素坐标到多个源图像中的至少一个源像素坐标的畸变校正关系;利用第二LUT来调整多个校正图像,得到多个均衡图像,第二LUT指示针对多个校正图像的亮度和/或色度均衡关系;以及利用第三LUT,将多个均衡图像合并为合成图像,第三LUT指示从多个均衡图像各自的像素坐标到合成图像的像素坐标之间的融合关系。
在第三方面的一种实现方式中,该方法还包括:在高速缓存区中缓存多个源图像的像素值,其中对多个源图像畸变校正包括:从高速缓存区读取像素值以用于执行畸变校正。
在第三方面的又一种实现方式中,对多个源图像执行畸变校正包括:针对由合成图像分割的多个区块中的每个目标区块,通过逐行扫描确定目标区块的像素坐标;利用第一LUT,确定目标区块中的目标像素坐标映射到的多个源图像中的至少一个源像素坐标;读取至少一个源像素坐标的像素值;以及基于第一LUT所指示的从目标像素坐标到至少一个源像素坐标的畸变校正关系,来变换所读取的至少一个源像素坐标的像素值。
在第三方面的又一种实现方式中,读取至少一个源像素坐标的像素值包括:从高速缓存区读取至少一个源像素坐标,高速缓存区至少缓存多个源图像中被映射到目标区块的源像素坐标的像素值。
在第三方面的又一种实现方式中,多个区块的尺寸不同,并且位于合成图像的边缘的区块的尺寸比位于合成图像的中心的区块的尺寸更小。
在第三方面的又一种实现方式中,对多个源图像执行畸变校正包括并行执行以下操作:对多个源图像中的第一源图像和第二源图像执行畸变校正,得到第一校正图像,第一源图像和第二源图像的像素坐标被映射到合成图像中的不同部分;以及对多个源图像中的第三源图像和第四源图像执行畸变校正,得到第二校正图像,第三源图像和第四源图像的像素坐标被映射到合成图像中的不同部分。
在第三方面的又一种实现方式中,该方法还包括:确定多个校正图像各自的亮度和/或色度直方图,第二LUT基于确定的亮度和/或色度直方图所确定。
在第三方面的又一种实现方式中,该方法还包括:利用第四LUT来执行对第二多个源图像执行畸变校正,得到第二多个校正图像,第四LUT指示第二合成图像的像素坐标与第二多个源图像中的至少一个源像素坐标之间的畸变校正关系;利用第五LUT来调整第二多个校正图像,得到第二多个均衡图像,第五LUT指示针对第二多个校正图像的亮度和/或色度均衡关系;以及利用第六LUT,将第二多个均衡图像合并为第二合成图像,第六LUT指示第二合成图像的像素坐标与第二多个均衡图像的像素坐标之间的融合关系。
在第三方面的又一种实现方式中,该方法还包括:获取第七LUT,第七LUT指示第一合成图像的像素坐标与第二合成图像的像素坐标之间的融合关系;以及利用第七LUT,将第一合成图像和第二合成图像合成为第三合成图像。
在第三方面的又一种实现方式中,该方法被实现在专用集成电路(ASIC)芯片处。在第三方面的又一种实现方式中,多个源图像、第一LUT、第二LUT和第三LUT从ASIC芯片的第一外部存储装置被获取。在第三方面的又一种实现方式中,合成图像被写入ASIC芯片的第二外部存储装置。
可以理解地,上述提供的第二方面的系统和第三方面的方法均用于实现可实现第一方面所提供的装置的各种实现方式。因此,关于第一方面的解释或者说明同样适用于第二方面和 第三方面。此外,第二方面和第三方面所能达到的有益效果可参考对应方法中的有益效果,此处不再赘述。
应当理解,发明内容部分中所描述的内容并非旨在限定本公开的实施例的关键或重要特征,亦非用于限制本公开的范围。本公开的其它特征将通过以下的描述变得容易理解。
附图说明
结合附图并参考以下详细说明,本公开各实施例的上述和其他特征、优点及方面将变得更加明显。在附图中,相同或相似的附图标记表示相同或相似的元素,其中:
图1A和图1B示出了根据本公开的一些实施例的图像拼接的示例;
图2示出了根据本公开的一些实施例的图像处理系统的框图;
图3示出了根据本公开的一些实施例的图2的图像拼接装置中的处理流程的示意图;
图4示出了根据本公开的一些实施例的源图像到合成图像的像素坐标空间对应性的示例;
图5示出了根据本公开的一些实施例的图2的高速缓存模块的框图;
图6示出了根据本公开的一些实施例的图像切块的示例;
图7示出了根据本公开的一些实施例的图2的图像拼接装置的示例扩展处理流程的示意图;以及
图8示出了根据本公开的一些实施例的用于图像拼接的方法的示意流程图。
具体实施方式
下面将参照附图更详细地描述本公开的实施例。虽然附图中显示了本公开的某些实施例,然而应当理解的是,本公开可以通过各种形式来实现,而且不应该被解释为限于这里阐述的实施例,相反提供这些实施例是为了更加透彻和完整地理解本公开。应当理解的是,本公开的附图及实施例仅用于示例性作用,并非用于限制本公开的保护范围。
在本公开的实施例的描述中,术语“包括”及其类似用语应当理解为开放性包含,即“包括但不限于”。术语“基于”应当理解为“至少部分地基于”。术语“一个实施例”或“该实施例”应当理解为“至少一个实施例”。术语“第一”、“第二”等等可以指代不同的或相同的对象。术语“和/或”表示由其关联的两项的至少一项。例如“A和/或B”表示A、B、或者A和B。下文还可能包括其他明确的和隐含的定义。
应理解,本申请实施例提供的技术方案,在以下具体实施例的介绍中,某些重复之处可能不再赘述,但应视为这些具体实施例之间已有相互引用,可以相互结合。
如以上提及的,将多个图像拼接为合成图像的技术可以被应用到多种领域中。图1A和图1B示出了根据本公开的一些实施例的图像拼接。
图1A示出了在驾驶场景中,可以从车辆102的前后左右部署4个数字成像设备,以采集静态图像或动态视频流。例如,可以采集到某个时刻车辆102四周的4个源图像111、112、113和114。通过图像拼接,可以获得合成图像120,其示出车辆102的环视全景画面。注意,在图1A中,源图像和合成图像中心示出的车辆102可以是实际车辆的符号表示。图1B示出了将室内部署的多个成像设备采集到画面拼接拼接得到的全景合成图像150,其示出了室内的更宽画面,从而能够帮助用于监控和安防等目的。
应当理解,图1A和图1B仅给出了从多个源图像到合成图像的拼接的示例。本公开的实 施例不限于这些示例中给出源图像的数目、拼接方式等形式。
通常,图像拼接涉及从源图像到合成图像的去畸变,以及用于避免或减少合成图像失真的多种处理技术。当前主要由通用计算设备,例如中央计算单元(CPU)等来执行图像拼接的各个操作。例如,由CPU来逐个像素确定和校正源图像的畸变,处理图像失真,执行图像融合等。然而,这对于CPU的计算开销和带宽开销要求高,容易造成较大时延,性能较低。
工作原理和系统
根据本公开的实施例,提出了基于查找表(LUT)的图像拼接加速方案。该方案通过预先配置多级LUT来实现图像拼接的多个操作,包括对要拼接的多个源图像的畸变校正,亮度/色度均衡以及图像融合。这样的图像拼接方案能够由专用处理设备来实现,例如由专用集成电路(ASIC)芯片来实现,从而提高处理性能,加快处理速度,并且降低了对系统带宽的需求。下文将结合附图来更详细讨论根据本公开的实施例的图像拼接。
图2示出了根据本公开的一些实施例的图像处理系统200的框图。图像处理系统200包括图像拼接装置205,其被配置为将源图像融合为合成图像。图像拼接装置205可以是硬件处理装置,例如硬件加速器。在一些实施例中,图像拼接装置205可以包括或被实现在芯片中。在一些示例中,图像拼接装置205可以包括或被实现在ASIC芯片中。在一些示例中,图像拼接装置205还可以被包括在诸如可编程逻辑器件等其他专用处理设备中。
图像拼接装置205被配置为对多个源图像202-1、202-2、……202-N(其中N大于等于1)执行图像拼接,以对应的合成图像254。为便于讨论,多个源图像202-1、202-2、……202-N可以被统称为或单独称为源图像202。用于图像拼接的多个源图像202可以来自各种数据源,例如可以包括由不同数字成像设备(例如,相机或录像机)采集得到静态图像,或者动态视频流中的视频帧。
为支持图像拼接功能,图像处理系统200还可以包括存储装置280。存储装置280与图像拼接装置205通信连接,并且可以被配置为存储图像拼接装置205的输入图像,例如多个源图像202。存储装置280还可以存储在图像拼接装置205的操作过程中需要的其他输入信息,中间处理结果等。如图2示出的,存储装置280还可以存储多个查找表LUT 204,LUT 232和LUT 252。这些LUT的具体使用将在下文中详细描述。虽然图2中示出为单个装置,在实际应用中,图像处理系统200可以包括多个存储装置,用于存储图像拼接装置205需要的各种数据/信息。例如,源图像202可以被存储在与各个LUT 204、232、252等不同的存储装置中。本公开的实施例在此方面不受限制。
在一些实施例中,图像处理系统200还可以包括通用处理装置290,其可以被配置为控制向存储装置280存储的源图像202以及LUT。例如,要拼接的多个源图像202可以由用户输入,并且由通用处理装置290控制存储到存储装置280。用于实现图像拼接的各个LUT也可以经由通用处理装置290存储、维护和更新。在一些实施例中,通用处理装置290还可以通过控制输入到图像拼接装置205的源图像202和LUT来控制图像拼接任务的执行。在一些实施例中,通用处理装置290可以被配置为将图像拼接装置205生成的合成图像254呈现给用户,提供给其他设备或提供作为其他任务的输入,等等。在一些实施例中,通用处理装置290还可以被配置为执行在图像拼接过程中需要的一个或多个操作。通用处理装置290的一些示例功能将在下文中更详细描述。
在一些实施例中,图像处理系统200可以被实现为或者被包括在计算设备或计算系统中。 例如,图像处理系统200可以包括服务器、大型机、边缘计算设备、终端设备等各种具有计算能力的设备/系统。
如图2所示,在图像处理系统200中,图像拼接装置205包括用于实现图像拼接中的各种操作的多个功能模块。具体地,图像拼接装置205包括畸变校正模块210,其被配置为基于LUT 204(在本文中有时也称为“第一LUT”)来对多个源图像202执行畸变校正。畸变校正的主要是将每个源图像202从源图像的像素坐标空间透视变换到合成图像252的像素坐标空间,校正图像采集设备的畸变等。通常,源图像202的尺寸和合成图像254的尺寸是可以预先确定的。在一些示例中,合成图像254的尺寸大于单个源图像202的尺寸。
为了执行畸变校正,LUT 204指示从合成图像254中的像素坐标到源图像202的像素坐标的畸变校正关系。在本文中,“像素坐标”指的是在图像的二维空间中的坐标,用于表示图像中的一个像素点。畸变校正模块210基于LUT 204所指示的畸变校正关系,通过遍历合成图像254的像素坐标,来将多个源图像202校正得到多个校正图像214-1至214-M(其中,M是大于1的整数)。M可以小于等于N。为便于讨论,多个校正图像214-1至214-M可以被统称为或单独称为校正图像214。
在一些实施例中,图像拼接装置205可以包括高速缓存模块220,被配置为对源图像202执行缓存管理。高速缓存模块220包括高速缓存区,可以用于缓存一个或多个源图像202的至少部分像素坐标对应的像素值。畸变校正模块210在执行畸变校正时需要读取源图像202。此时,畸变校正模块210可以将要读取的源图像202的源像素坐标212发送给高速缓存模块220。如果源像素坐标212对应的像素值222被缓存在高速缓存模块220,该模块可以快速将像素值222提供给畸变校正模块210。在一些实施例中,如果像素值222未被缓存,高速缓存模块220可以从存储装置280读取像素值222,并将所读取的像素值222进行缓存以及提供给畸变校正模块210。
图像拼接装置205还包括均衡模块230,其被配置为基于LUT 232(在本文中有时也称为“第二LUT”)来执行对多个校正图像214执行亮度和/或色度均衡,以调整多个校正图像214,得到对应的多个均衡图像234-1至234-M。LUT 232指示针对多个校正图像214的亮度和/或色度均衡关系。为便于讨论,多个均衡图像234-1至234-M可以被统称为或单独称为均衡图像234。
在一些实施例中,图像拼接装置205还可以包括直方图统计模块240,其被配置为确定多个校正图像214各自的亮度和/或色度直方图。所确定的亮度和/或色度直方图可以用于确定用于亮度/色度均衡的LUT。在一些实施例中,由直方图统计模块240确定的亮度和/或色度直方图可以被提供给通用处理装置290,以由通用处理装置290执行LUT的确定。此时通用处理装置290确定的LUT可以作为LUT 232被提供给均衡模块230,用于执行对当前校正图像214的亮度/色度均衡。在一些实施例中,通用处理装置290确定的LUT可以被提供用于对从后续源图像确定的校正图像的亮度/色度均衡。
在一些实施例中,由均衡模块230使用的LUT 232可以是预先确定的,并且不会随着不同源图像而改变。在这样的实施例中,直方图统计模块240可以省略。
图像拼接装置205还包括图像融合模块250,其被配置为基于LUT 252(在本文中有时也称为“第三LUT”),将多个均衡图像234合并为合成图像254。LUT 252指示如何将多个均衡图像234合成到合成图像254,即指示从多个均衡图像234各自的像素坐标到合成图像252的像素坐标之间的融合关系。在一些实施例中,融合关系可以包括加权融合关系,这意 味着多个均衡图像234各自的像素坐标对应的像素值可以被加权组合成合成图像252的像素坐标的像素值。
在图2示出的示例中,图像拼接装置205还可以包括数据分发网络260,该数据分发网络260可以通过总线270连接到外部的存储装置280,以从存储装置280读取数据和向存储装置280写入数据。此外,数据分发网络260还可以被配置为将经由总线270从存储装置280读取的数据传输到对应的模块,例如畸变校正模块210、高速缓存模块220、均衡模块230、直方图统计模块240和图像融合模块250。数据分发网络260还可以被配置为将各个模块的数据经由总线270传输到其目的地装置,例如外部的存储装置280和/或通用处理装置290。
LUT的应用使得各个模块可以通过快速的查表操作来完成所要求的操作。根据本公开的实施例,通过在图像拼接的畸变校正、均衡、以及图像融合等阶段应用多级LUT来分别完成相应操作,能够极大提高计算速度、减少时延,并且还可以减轻系统中通用计算装置(例如,CPU)的计算负担,降低在计算过程中频繁读写中间结果等而对系统带宽的占用。
以上简单介绍了图像拼接装置200中各个模块的功能。下文将参考其他附图来更详细讨论图像拼接装置200中各个模块的具体操作。
示例图像拼接流程
图3示出了根据本公开的一些实施例的图2的图像拼接装置205中的处理流程的示意图。出于解释说明的目的,在图3的实施例中,假设要拼接的源图像数目是4,包括源图像202-1、202-2、202-3和202-4。
在操作过程中,畸变校正模块210被配置为读取LUT 204和各个源图像202的像素值。如以上提及的,LUT 204从合成图像254中的像素坐标到至少一个源图像202的一个或多个源像素坐标的畸变校正关系。畸变校正模块210被配置为在一级LUT应用阶段,利用LUT 204所指示的畸变校正关系来对多个源图像202执行畸变校正,得到多个校正图像214。在图3的示例中,多个校正图像214包括校正图像214-1和214-2。校正图像214可以与最终合成图像254的像素坐标空间相对应,这指的是校正图像214中的一个像素坐标可以被映射到合成图像254中的一个像素坐标。
为了实现畸变校正中,对于合成图像254的像素坐标空间中的一个像素坐标,一个或多个源图像202中的一个或多个源像素坐标可能会被映射到这个像素坐标。LUT 204所指示的畸变校正关系,可以用于索引合成图像254的每个像素坐标到一个或多个源图像202的源像素坐标的畸变校正关系。在畸变校正过程中,每个源图像202从图像的像素坐标空间透视变换相机坐标空间,并在相机坐标空间中执行畸变校正,然后使畸变校正后的图像从相机坐标空间变化到图像的像素坐标空间。在一些实施例中,LUT 204可以包括条目列表,每个条目包括合成图像254的像素坐标的索引,一个或多个源图像202的图像标识(ID)以及其中的一个或多个源像素坐标,以及源图像202中的一个或多个源像素坐标分别对应的权重。这里的权重可以指示合成图像254中的像素坐标到对应的源像素坐标的畸变校正。通过合成图像254的像素坐标,可以从LUT 204快速查找出用于畸变校正的源像素坐标及其对应的权重。
在一些实施例中,由于源图像202与合成图像254的畸变校正通常与数字成像设备的镜头角度、宽度等相关,LUT 204可以是针对特定场景配置的。例如,可以针对特定车辆的车载环视拼接场景,来配置对应的LUT 204。对于从该车辆上部署的多个数字成像设备采集到的多个源图像,可以均用预先配置的LUT 204来执行畸变校正。在一些示例中,如果在特定 场景下的数字成像设备的布置情况、合成图像的配置等发生变化,可以通过更新LUT 204来适应这样的变化,而不需要更改畸变校正模块210的配置。
在执行畸变校正时,畸变校正模块210可以被配置为经由数据分发网络260,从外部的存储装置280读取LUT 204。畸变校正模块210可以被配置为按合成图像254的逐个像素坐标,从LUT 204确定每个像素坐标映射到的源图像202的一个或多个源像素坐标212。畸变校正模块210可以从高速缓存模块220或直接从存储装置280(在没有高速缓存模块220的示例中)读取源图像202的一个或多个源像素坐标212对应的一个或多个像素值222。畸变校正模块210可以基于LUT 204所指示的畸变校正关系(例如,源像素坐标对应的权重),来变换所读取的像素值222。例如,如果合成图像254的一个像素坐标映射到源图像202中多个源像素坐标,可以利用多个源像素坐标对应的权重,对所读取的源像素值222进行双线性差值。经过变化后,畸变校正模块210将所得到的像素值222确定为校正图像214中与合成图像254对应的像素坐标处的像素值,从而形成校正图像214,以用于后续的进一步处理来得到最终的合成图像254。
在一些实施例中,在执行畸变校正的过程中,畸变校正模块210可以被配置为并行执行多个校正图像214的生成。在一些实施例中,畸变校正模块210可以被配置为将每个源图像202通过畸变校正来变换到合成图像254的像素坐标空间。这样,针对每个源图像202,可以得到一个校正图像214。
在一些实施例中,在所有源图像202到合成图像254的畸变校正中,可能存在至少两个源图像202与另外至少两个源图像202的像素坐标被映射到合成图像254中的不同区域。在这种情况下,畸变校正模块210可以被配置为对被映射到不同区域的至少两个源图像202执行畸变校正,得到一个校正图像214。
在图3示出的示例中,校正图像214-1是通过从源图像202-1和202-3执行畸变校正得到的,其中源图像202-1主要映射到校正图像214-1的上部分(也对应于合成图像254的上部分),而源图像202-3主要映射到校正图像214-1的下部分(也对应于合成图像254的下部分)。校正图像214-2是从源图像202-2和202-4执行畸变校正得到的,其中源图像202-2主要映射到校正图像214-2的左部分(也对应于合成图像254的左部分),而源图像202-4主要映射到校正图像214-3的右部分(也对应于合成图像254的右部分)。注意,在校正图像214-1和214-2中标注源图像202的标号,主要是为了表示与源图像202的映射关系,而不代表校正图像214的对应部分与所标注的源图像202的像素值相同。
为了更好理解对源图像202的畸变校正,图4示出了在示例的驾驶场景中从源图像202到合成图像254的像素坐标空间对应性。在图4中,假设在车辆410的四个方向分别部署有4个数字采集设备,这些设备采集到的4个源图像(例如,源图像202-1至201-4)分别捕获到从车辆410看到的4个方向视场411、412、413和414中的画面。相应地,要将这四个源图像202-1至201-4合并成车辆的环视拼接场景。
在图4进一步示出的,在合并后的合成图像254的像素坐标空间中,9个不同部分中分别映射到不同的源图像202。例如,根据数字采集设备的部署情况,图像部分421中的像素坐标可以是从源图像202-1和202-2确定的;图像部分422中的像素坐标可以是仅从源图像202-1确定的;图像部分423中的像素坐标可以是从源图像202-1和202-4确定的;图像部分424中的像素坐标可以是从源图像202-2确定的;图像部分426中的像素坐标可以是从源图像202-4确定的;图像部分427中的像素坐标可以是从源图像202-2和202-3确定的;图像部分 428中的像素坐标可以是从源图像202-3确定的;并且图像部分429中的像素坐标可以是从源图像202-3和202-4确定的。注意,在一些情况下,如果没有数据采集设备用于采集车辆410顶部空间,合成图像254的中心区域可能无法被映射到任何输入的源图像。
基于图4所示的像素坐标空间对应性,源图像202-1中的像素值用于确定合成图像254的上部分区域,即图像部分421、422和423,而源图像202-3中的像素值用于确定合成图像254的下部分区域,即图像部分427、428和429。源图像202-1和203-2的像素坐标因此被认为映射到合成图像254中的不同区域。类似地,源图像202-2和202-4的像素坐标也被认为被映射到合成图像254中的不同区域,即由图像部分421、424和427组成的左部分区域,和由图像部分423、426和429组成的右部分区域。
基于图4的像素空间坐标对应性,在畸变校正过程中,LUT 204可以被配置为指示合成图像254的像素坐标到源图像202-1和202-3的像素坐标之间的畸变校正关系,以及还指示合成图像254的像素坐标到源图像202-2和202-4的像素坐标之间的畸变校正关系。在一些示例中,LUT 204可以包括两个子LUT,分别用于指示前述两种畸变校正关系。通过读取LUT204所指示的合成图像254的各个像素坐标,畸变校正模块210可以被配置为分别从不同组源图像202来生成对应的校正图像214-1、214-2。通常,每个校正图像214的像素坐标空间可以与合成图像254相对应。如以上提及的,在一些实施例中,畸变校正模块204可以被配置为并行执行针对源像素202-1和202-3和针对源像素202-2和202-4的畸变校正,从而可以提高处理速度。并行处理后得到的校正图像214-1和214-2被提供给下一个处理模块,即均衡模块230。当然,在其他实施例中,畸变校正模块204也可以串行执行多个校正图像214的生成。
应当理解,在图3和图4中示出了源图像到合成图像的像素空间坐标对应性,但仅是为了解释说明的目的而给出的示例。在实际应用中,源图像与合成图像的空间对应性的区域划分并非如图3和图4所示的规则形状。在一些情况下,合成图像的一个或多个像素坐标或者区块可能被映射到两个或更多个源图像的像素坐标,这与源图像的数字采集设备的布置有关。在这些情况下,均可以通过配置LUT 204,来正确指示合成图像的像素坐标与对应源图像的像素坐标之间的畸变校正关系。
以上讨论了畸变校正过程。在畸变校正过程中,为了确定校正图像214中与合成图像254对应的像素坐标处的像素值,畸变校正模块210需要不断读取源图像202的源像素值222。在一些实施例中,可以由高速缓存模块220缓存一个或多个源图像202中的一些像素值。畸变校正模块210可以从高速缓存模块220读取需要的源像素值。相比于每次直接从存储装置280读取源像素值,从高速缓存模块220读取源像素值的速度更快,从而进一步加快速度。
图5示出了根据本公开的一些实施例的图2的高速缓存模块220的框图。如图5所示,高速缓存模块220包括高速缓存区510、缓存供应子模块520和像素读取子模块530。畸变校正模块210将要读取的源图像202中的源像素坐标212提供给高速缓存模块220。在一些实施例中,源像素坐标212包括标识源图像202的图像ID和在该源图像202的像素坐标空间中的像素坐标。高速缓存模块212确定512源像素坐标212对应的像素值是否被缓存在高速缓存区510,即是否有缓存命中。例如,高速缓存模块220基于源像素坐标212确定对应的像素值的外部存储地址信息。可以基于所确定的存储地址信息来确定对应的像素值是否被存储在高速缓存区510中。
在一些实施例中,如果当前要读取的源像素坐标212确定对应的像素值被存储在高速缓 存区510,即存在缓存命中,那么高速缓存模块220可以从高速缓存区510取出缓存的像素值,并且将缓存的像素值提供给畸变校正模块210。例如,从高速缓存区510取出的缓存的像素值可以被提供给缓存供应子模块520,并且缓存供应子模块520基于控制信号来控制被提供到畸变校正模块210的像素值222。在一些实施例中,发送到高速缓存模块220的源像素坐标212可以是多个,以请求一系列的像素值。缓存供应子模块520可以包括控制队列522和数据队列524。控制队列522可以用于排列来自高速缓存区510的控制信号,数据队列524可以用于排列来自从高速缓存区510和/或从像素读取子模块530获取的像素值。数据队列524中的像素值在控制信号的控制下,按顺序被提供到畸变校正模块210。
在一些实施例中,如果当前要读取的源像素坐标212确定对应的像素值未被存储在高速缓存区510,即存在缓存未命中,那么高速缓存模块220可以发送控制信号到像素读取子模块530,以请求像素读取子模块530获取对应的像素值。控制信号中可以指示要读取的像素值的外部存储地址。像素读取子模块530向外部存储装置,例如存储装置280发送读取请求,并且从存储装置280读取到像素值。在一些实施例中,像素读取子模块530还可以接收到多个控制信号,以请求读取对应的像素值。像素读取子模块530可以被配置为请求去重处理532,以将针对相同存储地址的请求进行去重。由像素读取子模块530读取到的像素值可以被提供给缓存供应子模块520,以提供到畸变校正模块210。在一些实施例中,由像素读取子模块530读取到的像素值还可以被缓存到高速缓存区510,以便后续在被读取时能够从高速缓存区510快速提供给畸变校正模块210。
在传统畸变校正中,通常会通过对合成图像按行扫描,逐行确定对应像素坐标的畸变校正。由于合成图像的每一行中的像素坐标可能映射到源图像中不同行的点,且行的跨度较大(例如,最大可能会超过200行的跨度)。这样,在逐行扫描进行畸变校正时,可能需要重复读取源图像中离散的像素值。如果每次均从外部存储装置读取数据,将会导致大量带宽消耗,并且读取延时大,性能差。通过设置高速缓存,可以提高图像拼接中的处理速度。
在一些实施例中,为进一步提高处理速度,还可以将合成图像254分割成多个区块中,并且畸变校正模块210可以逐个区块来执行对源图像的畸变校正。在每个区块内,畸变校正模块210可以通过逐行扫描的方式,针对该区块中的每个目标像素坐标,来确定对应源图像202的源像素坐标212,并获取源像素坐标212的像素值222来执行畸变校正。在合成图像254的区块之间,畸变校正模块210也可以通过逐行扫描的方式遍历多个区块。
图6示出了根据本公开的一些实施例的图像切块的示例。如图6所示,合成图像254的整个图像区域可以被划分成多个区块(在图6中由数字0、1、2、3、4、5、6、7等标识)。通过逐个区块遍历,每次将一个区块作为目标区块进行处理。对于合成图像254中的一个目标区块中的像素坐标,一个或多个源图像202中的一个或多个区块中的像素值可能被映射到目标区块中的多个像素坐标。注意,虽然在图6中示出在源图像202中分割区块,但这些区块不一定与合成图像254中的区块一一对应。
在执行畸变校正时,畸变校正模块210可以被配置为针对合成图像250中的每个区块来执行畸变校正。在合成图像254的每个区块内,畸变校正模块210可以被配置为按Z字形顺序610,逐行扫描该区块的目标像素坐标。在多个区块之间,畸变校正模块210可以被配置为按Z字形顺序620,逐个扫描多个区块,直到针对整个合成图像254来完成针对源图像202的畸变校正。
在合成图像250中的每个区块中,畸变校正模块210可以被配置为如以上讨论的那样执 行畸变校正。例如,通过逐行扫描确定该区块的像素坐标,利用LUT 204来确定扫描到区块中的目标像素坐标映射到的源图像202中的一个或多个源像素坐标212。畸变校正模块210可以基于源像素坐标212,读取源图像202的这些源像素坐标212的像素值222。在一些实施例中,畸变校正模块210可以通过高速缓存模块220读取像素值222。然后,畸变校正模块210可以基于LUT 204所指示的畸变校正关系,来变换所读取的像素值222,以得到校正图像214在目标像素坐标处的像素值。
通常,高速缓存模块220的缓存空间是有限的,可能无法存储全部源图像202。为了提高在高速缓存模块220中的缓存命中率,在一些实施例中,可以按合成图像254的区块的扫描顺序,来在高速缓存模块220中缓存对应像素值。在一些实施例中,通过Z字形的扫描顺序,可以提高高速缓存的利用率。例如,可以按扫描顺序,确定合成图像254的一个区块中的目标像素坐标映射到的源图像202中的像素值,然后将这部分像素值缓存到高速缓存区510。这样,在一个区块内,如果扫描到其他行的像素坐标,可能原先缓存的像素值可以被重复利用,这可以避免高速缓存模块510对相同像素值的重复读取,提高缓存读写性能。作为示例,在图6中,可以首先将源图像202中被映射到合成图像254的右上角的区块(被标注有“0”的区块)的像素坐标对应的像素值缓存到高速缓存模块220中。这样,在扫描该区块时,可以从高速缓存模块220中快速读取到对应的像素值。随着区块的扫描,可以继续缓存后续区块中的像素值。
在一些实施例中,由于不同位置的畸变程度不同,合成图像254中的处于不同位置的区域可能被映射到源图像202中不同面积的区域。因此,在一些实施例中,合成图像254可以被划分为尺寸不同的多个区块。在一些实施例中,可以在合成图像254的边缘处的划分更多区块,每个区块具有较小的尺寸。在一些实施例中,可以在合成图像254的中心处划分更少的区块,每个区块可以具有较大的尺寸。也就是说,位于合成图像254的边缘的区块的尺寸可以比位于合成图像254的中心的区块的尺寸更小。例如,在图6的示例中,合成图像254中由数字“0”和“3”标注的区块可以小于在中心处的由数字“5”和“6”标注的区块。
这样划分方式是考虑到,通常在合成图像的边缘处畸变程度更高,因此每个像素坐标可能会被映射到更多源图像和/或更多源像素坐标。因此,如果尺寸相同,对于边缘处的区块,需要缓存的像素值将会更多。因此,通过调整区块的尺寸,可以更充分利用高速缓存空间,提高缓存命中率。
在对合成图像254划分用于扫描的区块时,可以根据实际应用(例如,源图像/合成图像的尺寸,缓存空间的大小等)来划分不同数目的区块,每个区块的大小也可以根据需要配置。例如,在图6的示例中,合成图像254可以被划分为每行4个区块,其中区块0大小为32x32像素,区块1大小为64x32像素,区块4大小为32x64像素,区块5大小为64x64像素,等等。当然,这里仅给出了一个具体示例。任何其他区块划分方式和块的尺寸设置均是可行的。
以上讨论了图像拼接中的畸变校正阶段。返回参考图3,畸变校正模块210可以将得到的多个校正图像214-1和214-2提供到均衡模块230。均衡模块230被配置为在二级LUT处理阶段,获取LUT 232并且利用LUT 232来调整多个校正图像214-1和214-2的亮度,得到对应的多个均衡图像234-1和234-2。LUT 232指示针对多个校正图像214的亮度/色度均衡关系。在一些实施例中,LUT 232可以指示针对每个校正图像214的亮度/色度均衡关系。作为一个示例,对于每个校正图像214,LUT 232可以指示该校正图像214中的RGB值与均衡后的RGB值的映射。通过RGB值的调整,可以实现亮度和/或色度均衡。在亮度和/或色度均 衡时,利用LUT 232,均衡模块230可以将校正图像214中对应亮度值调整到均衡后的亮度值。通过对多个校正图像214-1和214-2执行亮度/色度均衡,所得到的均衡图像234-1和234-2可以具有均衡统一的亮度和/或色度。
在一些实施例中,对于当前处理的多个校正图像214,均衡模块230可以经由数据分发网络260,从存储装置280获得所要使用的LUT 232。在一些实施例中,多个校正图像214可以被提供到亮度/色度均衡统计模块240,其确定多个校正图像214的亮度值的直方图统计结果242,并将直方图统计结果242提供给通用处理装置290。例如,直方图统计模块240可以将直方图统计结果242写入存储装置280并且由通用处理装置290从存储装置280读取该统计结果。通用处理装置290可以基于直方图统计结果242,确定或更新由均衡模块230使用的LUT 232。
在一些实施例中,直方图统计模块240可以对多个校正图像214的重叠区域进行亮度和/或色度直方图统计。在一些实施例中,直方图统计模块240可以支持多颜色通道、多区域的亮度和/或色度直方图统计。基于直方图统计结果242,通用处理装置290可以利用各种亮度/色度均衡算法来确定亮度/色度均衡关系,从而得到LUT 232。
在一些实施例中,基于多个校正图像214的直方图统计结果242确定的LUT 232可以被提供回均衡模块230,用于对多个校正图像214执行亮度/色度均衡。在一些实施例中,特别是在针对视频流处理的实施例中,由于视频帧之间的连续性,对于当前正在处理的多个校正图像214,可以利用先前得到的校正图像的直方图统计结果确定的LUT 232来执行亮度/色度均衡。在一些实施例中,针对特定场景,用于亮度/色度均衡的LUT 232可以是固定的。在这种情况下,直方图统计模块240可以省略。
经过亮度/色度均衡的多个均衡图像234-1和234-2可以被提供到图像融合模块250。图像融合模块250被配置为在三级LUT处理阶段,获取LUT 252并且利用LUT 252来将多个均衡图像234-1和234-2合并,以得到合成图像254。图像融合模块250可以被配置为经由数据分发网络260,从外部的存储装置280读取LUT 252。LUT 252指示从多个均衡图像234各自的像素坐标到合成图像254的像素坐标之间的融合关系。从校正图像214起,经过亮度/色度均衡,均衡图像234的像素空间坐标与合成图像254的像素空间坐标也可以相对应。因此,两个均衡图像234的各个像素坐标可以被映射到合成图像254的对应像素坐标。
在一些实施例中,LTU 252可以包括条目列表,每个条目包括合成图像254的像素坐标的索引,对应的多个均衡图像234的像素坐标的索引,以及用于针对多个均衡图像234的像素坐标的权重。这里的权重可以指示均衡图像234的像素坐标到合成图像254的像素坐标之间的融合关系。LUT 252中的权重可以通过各种图像融合算法来确定。图像融合算法的示例可以包括alpha融合算法,多波段(multiband)融合算法,等等。
在执行图像融合时,图像融合模块250可以通过逐行扫描合成图像254的每个像素坐标的方式,读取均衡图像234-1和234-2中的对应像素坐标的像素值,并且基于LUT 252所指示的权重,将所读取的像素值进行融合,得到合成图像254的像素坐标处的像素值。
注意,虽然图3描述了一组源图像的拼接。在实际应用中,可以从多个图像采集设备持续采集到视频流,并且可能需要对不同时间点处的视频帧进行拼接,从而获得全景的合成图像。在这样的动态图像拼接过程中,针对同一场景,一级LUT和三级LUT中所使用的LUT可以是预先配置的。二级LUT中用于亮度/色度均衡的LUT 232可以在图像拼接过程中被实时或者基于先前视频帧处理得到的校正图像的亮度和/或色度直方图统计进行更新。
以上参考图3讨论了图像拼接装置205的处理流程。在一些实施例中,由于预先模块配置和LUT的预先确定,被输入到图像拼接装置205以用于合成的源图像的数目是预定的(在图3的示例中,这个预定数目是4),并且中间处理的校正图像214、均衡图像的数据234的数目也可以是预定的(在图3的示例中,这个预定数目是2)。在其他实施例中,可以通过对模块的预先配置和LUT的预先确定,使图像拼接装置205用于拼接其他数目的源图像,并且中间处理的校正图像214和均衡图像的数据234的数目也可以变化。
根据本公开的实施例的图像拼接过程还可以被进一步扩展,以引入在图像拼接过程中需要的其他处理。这样的处理可以通过图像拼接装置205中引入对应的功能模块来实现,或者通过通用处理装置中的处理算法来实现。
图像拼接的示例扩展
如以上提及,被输入到图像拼接装置205以用于合成的源图像的数目可以是预定的。例如,在图3的示例中,向图像拼接装置205输入的源图像的数目是4。在一些实施例中,作为专用处理装置,在不改变对各个模块的功能配置的情况下,可以通过控制提供到图像拼接装置205的源图像和LUT,来将图像拼接装置205复用于对大于或小于预定数目的源图像的拼接。
在一些实施例中,如果要拼接的源图像的数目小于预定数目,可以直接将源图像输入到图像拼接装置205,或者可以加入一个或多个空白图像来与源图像一起组成预定数目的输入图像。如果要拼接的源图像的数目大于预定数目,可以多次调用图像拼接装置205来实现图像拼接。在多次调用中,每次可以提供给图像拼接装置205的输入可以等于或小于预定数目的图像作为输入。例如,如果要拼接的源图像的数目是预定数目的整数倍,每次可以输入预定数目的源图像。如果要拼接的源图像的数目不是预定数目的整数倍,可以在某些调用中输入小于预定数目的源图像。然后,可以将由图像拼接装置205在多次调用后生成的多个合成图像作为输入,再次提供到图像拼接装置205进行拼接,直到得到最终的合成图像。在一些实施例中,如果输入的图像数目小于预定数目或者没有通过空白图像作为补充,还可以向图像拼接装置205提供指示(例如,由通用控制装置290提供),以指示输入的图像的数目。这样,图像拼接装置205可以获知在处理过程中要访问的图像的具体数目。
无论是增加空白图像,还是处理部分源图像,都可以预先设置在多级LUT处理过程中所需要的多个LUT。这样,在调用图像拼接装置205时,通用处理装置290可以控制每次被输入到图像拼接装置205的图像和要使用的各级LUT,从而获得正确的合成图像。
图7示出了根据本公开的一些实施例的图像拼接装置205的示例扩展处理流程的示意图。在图7的示例中,假设要将8个源图像702-1至702-8(统称为或单独称为源图像702)进行拼接,得到全景拼接图像。这些源图像702可以分别呈现捕获空间中某个位置处360度方向中的画面。
在图7所示的流程中,可以将8个源图像702-1至702-8划分为两组,每组4个源图像。例如第一组包括源图像702-1、702-3、702-5和702-7,第二组包括源图像702-2、702-4、702-6和702-8。对源图像的分组组的划分可以根据任何方式来划分,本公开的实施例在此方面不受限制。在一些实施例中,位于同一组中的源图像702可以被映射到最终合成图像780的不同区域中。
在图像拼接过程中,可以首先将第一组的源图像702-1、702-3、702-5和702-7作为图像 拼接装置205的输入。图像拼接装置205中的各个模块按与图3讨论类似的流程来执行对源图像702-1、702-3、702-5和702-7的处理。在多级LUT处理中,LUT 710被提供给畸变校正模块210,由畸变校正模块210用于对这些源图像执行畸变校正,得到多个校正图像715-1和715-2。LUT 720被提供给均衡模块230,由均衡模块230用于对多个校正图像715-1和715-2执行亮度/色度均衡,得到多个均衡图像725-1和725-2。多个均衡图像725-1和725-2被提供给图像融合模块250,由图像融合模块250利用三级LUT中的LUT 730执行图像融合,输出合成图像750-1。
第二组的源图像702-2、702-4、702-6和702-8然后也被输入到图像拼接装置205。图像拼接装置205中的各个模块按与图3讨论类似的流程来执行对源图像702-2、702-4、702-6和702-8的处理。在多级LUT处理中,LUT 712被提供给畸变校正模块210,由畸变校正模块210用于对这些源图像执行畸变校正,得到多个校正图像715-3和715-4。LUT 722被提供给均衡模块230,由均衡模块230用于对多个校正图像715-3和715-4执行亮度/色度均衡,得到多个均衡图像725-3和725-4。多个均衡图像725-3和725-4被提供给图像融合模块250,由图像融合模块250利用三级LUT中的LUT 730执行图像融合,输出合成图像750-2。
接下来,前两次调用图像拼接装置205得到的合成图像750-1和750-2可以作为图像拼接装置205的输入,再次被提供到图像拼接装置205用于拼接。为了符合图像拼接装置205的输入要求,在一些实施例中,还可以输入空白图像760-1和760-2,这些空白图像与合成图像750-1和750-2的尺寸相同,但每个像素坐标的像素值为0(或者为空)。在一些实施例中,为了避免空白图像的输入对内存空间的占用和数据访问开销,还可以不输入空白图像,而是仅将合成图像750-1和750-2输入给图像拼接装置205用于拼接。可以通过向图像拼接装置205提供指示,来使图像拼接装置205的各个模块可以在2个输入图像的模式下实现图像拼接。
由于不需要对合成图像750-1和750-2进行畸变校正和亮度/色度均衡,在图像融合模块之前的模块770(至少包括畸变校正模块210和均衡模块230中)中,一级LUT和二级LUT中的畸变校正关系和亮度/色度均衡关系可以被配置为不对合成图像750-1和750-2的像素值执行变换。然后,由均衡模块240提供到图像融合模块250的“均衡图像”775-1和775-2可以分别与合成图像750-1和750-2相同。图像融合模块250可以继续获取LUT 736,并且利用LUT 726对“均衡图像”775-1和775-2执行图像融合,得到最终合成图像780。
在图7的图像拼接过程中,可以由通用处理装置290控制对图像拼接装置205的调用,以及在每次调用中提供给图像拼接装置205的输入图像和所要使用的LUT。在图7的示例中,在每次调用中,每一级LUT处理所使用的LUT可以被预先确定。
通过上述方式,可以将具有预先配置的图像拼接装置205在各个应用中重复利用,使专用的图像拼接装置205的应用灵活度更高。
示例方法流程
图8示出了根据本公开的一些实施例的用于图像拼接的方法800的示意流程图。方法800可以被实现在图像拼接装置205处。应当理解,方法800还可以包括未示出的附加动作和/或可以省略所示出的动作。本公开的范围在此方面不受限制。
在框810,图像拼接装置205利用第一LUT对多个源图像执行畸变校正,得到多个校正图像,第一LUT指示从合成图像中的像素坐标到多个源图像中的至少一个源像素坐标的畸变 校正关系。在框820,图像拼接装置205利用第二LUT来调整多个校正图像的亮度,得到多个均衡图像,第二LUT指示针对多个校正图像的亮度和/或色度均衡关系。在框830,图像拼接装置205利用第三LUT,将多个均衡图像合并为合成图像,第三LUT指示从多个均衡图像各自的像素坐标到合成图像的像素坐标之间的融合关系。
在一些实施例中,该方法还包括在高速缓存区(例如,高速缓存模块220)中缓存缓存多个源图像的像素值。在一些实施例中,图像拼接装置205从高速缓存区读取像素值以用于执行畸变校正。
在一些实施例中,在多个源图像执行畸变校正时,图像拼接装置205可以针对由合成图像分割的多个区块中的每个目标区块,通过逐行扫描确定目标区块的像素坐标;利用第一LUT,确定目标区块中的目标像素坐标映射到的多个源图像中的至少一个源像素坐标,读取至少一个源像素坐标的像素值,以及基于第一LUT所指示的从目标像素坐标到至少一个源像素坐标的畸变校正关系,来变换所读取的至少一个源像素坐标的像素值。
在一些实施例中,图像拼接装置205从高速缓存区读取至少一个源像素坐标的像素值。该高速缓存区至少缓存多个源图像中被映射到目标区块的源像素坐标的像素值。
在一些实施例中,多个区块的尺寸不同,并且位于合成图像的边缘的区块的尺寸比位于合成图像的中心的区块的尺寸更小。
在一些实施例中,对多个源图像执行畸变校正包括并行执行以下操作:对多个源图像中的第一源图像和第二源图像执行畸变校正,得到第一校正图像,第一源图像和第二源图像的像素坐标被映射到合成图像中的不同部分;以及对多个源图像中的第三源图像和第四源图像执行畸变校正,得到第二校正图像,第三源图像和第四源图像的像素坐标被映射到合成图像中的不同部分。
在一些实施例中,图像拼接装置205还确定多个校正图像各自的亮度和/或色度直方图,第二LUT基于所述确定的亮度和/或色度直方图所确定。在一些实施例中,所确定的亮度和/或色度直方图被提供给通用处理装置290,以由通用处理装置290确定第二LUT。在一些实施例中,通用处理装置290基于所确定的亮度和/或色度直方图来确定的LUT用于调整其他多个校正图像,这些校正图像是从另外的多个源图像通过畸变校正确定的。
在一些实施例中,图像拼接装置205还利用第四LUT来对第二多个源图像执行畸变校正,得到第二多个校正图像,第四LUT指示第二合成图像的像素坐标与第二多个源图像中的至少一个源像素坐标之间的畸变校正关系;利用第五LUT来调整第二多个校正图像的亮度和/或色度,得到第二多个均衡图像,第五LUT指示针对第二多个校正图像的亮度和/或色度均衡关系;以及利用第六LUT,将第二多个均衡图像合并为第二合成图像,第六LUT指示第二合成图像的像素坐标与第二多个均衡图像的像素坐标之间的融合关系。
在一些实施例中,图像拼接装置205还获取第七LUT,第七LUT指示第一合成图像的像素坐标与第二合成图像的像素坐标之间的融合关系;以及利用第七LUT,将第一合成图像和第二合成图像合成为第三合成图像。
通常,本申请的各种实施例可以以硬件或专用电路、软件、逻辑或其任何组合来实现。一些方面可以用硬件实现,而其他方面可以用固件或软件实现,其可以由控制器,微处理器或其他计算设备执行。虽然本申请的实施例的各个方面被示出并描述为框图,流程图或使用一些其他图示表示,但是应当理解,本文描述的框,装置、系统、技术或方法可以实现为,如非限制性示例,硬件、软件、固件、专用电路或逻辑、通用硬件或控制器或其他计算设备, 或其某种组合。
本申请还提供有形地存储在非暂时性计算机可读存储介质上的至少一个计算机程序产品。该计算机程序产品包括计算机可执行指令,例如包括在程序模块中的指令,其在目标的真实或虚拟处理器上的设备中执行,以执行如上实施例所述的过程/方法。通常,程序模块包括执行特定任务或实现特定抽象数据类型的例程、程序、库、对象、类、组件、数据结构等。在各种实施例中,可以根据需要在程序模块之间组合或分割程序模块的功能。用于程序模块的机器可执行指令可以在本地或分布式设备内执行。在分布式设备中,程序模块可以位于本地和远程存储介质中。
用于实施本申请的方法的程序代码可以采用一个或多个编程语言的任何组合来编写。这些程序代码可以提供给通用计算机、专用计算机或其他可编程数据处理装置的处理器或控制器,使得程序代码当由处理器或控制器执行时使流程图和/或框图中所规定的功能/操作被实施。程序代码可以完全在机器上执行、部分地在机器上执行,作为独立软件包部分地在机器上执行且部分地在远程机器上执行或完全在远程机器或服务器上执行。
在本申请的上下文中,机器可读介质可以是有形的介质,其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体系统、装置或设备,或者上述内容的任何合适组合。机器可读存储介质的更具体示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM或快闪存储器)、光纤、便捷式紧凑盘只读存储器(CD-ROM)、光学储存设备、磁储存设备、或上述内容的任何合适组合。
此外,虽然采用特定次序描绘了各操作,但是这应当理解为要求这样操作以所示出的特定次序或以顺序次序执行,或者要求所有图示的操作应被执行以取得期望的结果。在一定环境下,多任务和并行处理可能是有利的。同样地,虽然在上面论述中包含了若干具体实现细节,但是这些不应当被解释为对本申请的范围的限制。在单独的实施例的上下文中描述的某些特征还可以组合地实现在单个实现中。相反地,在单个实现的上下文中描述的各种特征也可以单独地或以任何合适的子组合的方式实现在多个实现中。
尽管已经采用特定于结构特征和/或方法逻辑动作的语言描述了本主题,但是应当理解所附权利要求书中所限定的主题未必局限于上面描述的特定特征或动作。相反,上面所描述的特定特征和动作仅仅是实现权利要求书的示例形式。

Claims (22)

  1. 一种用于图像拼接的装置,包括:
    畸变校正模块,被配置为:
    获取第一查找表(LUT),所述第一LUT指示从合成图像中的像素坐标到多个源图像中的至少一个源像素坐标的畸变校正关系,以及
    利用所述第一LUT,对所述多个源图像执行畸变校正,得到多个校正图像;
    均衡模块,被配置为:
    获取第二LUT,所述第二LUT指示针对所述多个校正图像的亮度和/或色度均衡关系,以及
    利用所述第二LUT来调整所述多个校正图像,得到多个均衡图像;以及
    图像融合模块,被配置为:
    获取第三LUT,所述第三LUT指示从所述多个均衡图像各自的像素坐标到所述合成图像的像素坐标之间的融合关系,以及
    利用所述第三LUT,将所述多个均衡图像合并为所述合成图像。
  2. 根据权利要求1所述的装置,还包括:
    高速缓存模块,用于缓存所述多个源图像的像素值,
    其中所述畸变校正模块被配置从所述高速缓存模块读取所述像素值以用于执行所述畸变校正。
  3. 根据权利要求1所述的装置,其中所述畸变校正模块被配置为:
    针对由所述合成图像分割的多个区块中的每个目标区块,
    通过逐行扫描确定所述目标区块中的所述目标像素坐标;
    利用所述第一LUT,确定所述目标区块中的所述目标像素坐标映射到的所述多个源图像中的所述至少一个源像素坐标;
    读取所述至少一个源像素坐标的像素值;以及
    基于所述第一LUT所指示的从所述目标像素坐标到所述至少一个源像素坐标的畸变校正关系,来变换所读取的所述至少一个源像素坐标的所述像素值。
  4. 根据权利要求3所述的装置,其中所述畸变校正模块被配置从高速缓存模块读取所述至少一个源像素坐标,所述高速缓存模块至少缓存所述多个源图像中被映射到所述目标区块的源像素坐标的像素值。
  5. 根据权利要求3所述的装置,其中所述多个区块的尺寸不同,并且位于所述合成图像的边缘的区块的尺寸比位于所述合成图像的中心的区块的尺寸更小。
  6. 根据权利要求1至5中任一项所述的装置,其中所述畸变校正模块被配置为并行执行以下操作:
    对所述多个源图像中的第一源图像和第二源图像执行畸变校正,得到第一校正图像,所述第一源图像和所述第二源图像的像素坐标被映射到所述合成图像中的不同部分;以及
    对所述多个源图像中的第三源图像和第四源图像执行畸变校正,得到第二校正图像,所述第三源图像和所述第四源图像的像素坐标被映射到所述合成图像中的不同部分。
  7. 根据权利要求1至6中任一项所述的装置,还包括:
    直方图统计模块,被配置为确定所述多个校正图像各自的亮度和/或色度直方图,所述第 二LUT基于所述确定的亮度和/或色度直方图所确定。
  8. 根据权利要求1至7中任一项所述的装置,其中
    所述畸变校正模块还被配置为:
    获取第四LUT,所述第四LUT指示第二合成图像的像素坐标与第二多个源图像中的至少一个源像素坐标之间的畸变校正关系,以及
    利用所述第四LUT来对所述第二多个源图像执行畸变校正,得到第二多个校正图像;
    所述均衡模块被配置为:
    获取第五LUT,所述第五LUT指示针对所述第二多个校正图像的亮度和/或色度均衡关系,以及
    利用所述第五LUT来调整所述第二多个校正图像,得到第二多个均衡图像;以及
    所述图像融合模块还被配置为:
    获取第六LUT,所述第六LUT指示所述第二合成图像的像素坐标与所述第二多个均衡图像的像素坐标之间的融合关系,以及
    利用所述第六LUT,将所述第二多个均衡图像合并为所述第二合成图像。
  9. 根据权利要求8所述的装置,其中所述图像融合模块还被配置为:
    获取第七LUT,所述第七LUT指示所述第一合成图像的像素坐标与所述第二合成图像的像素坐标之间的融合关系,以及
    利用所述第七LUT,将所述第一合成图像和所述第二合成图像合成为第三合成图像。
  10. 根据权利要求1至9中任一项所述的装置,其中所述装置包括专用集成电路(ASIC)芯片,
    其中所述多个源图像、所述第一LUT、所述第二LUT和所述第三LUT从所述ASIC芯片的第一外部存储装置被获取,并且
    其中所述合成图像被写入所述ASIC芯片的第二外部存储装置。
  11. 一种图像处理系统,包括:
    根据权利要求1至10中任一项所述的装置;以及
    至少一个存储装置,用于存储所述多个源图像、所述第一LUT、所述第二LUT和所述第三LUT。
  12. 根据权利要求11所述的系统,其特征在于,还包括:
    通用处理装置,被配置为基于所述多个校正图像各自的亮度和/或色度直方图,确定所述第二LUT。
  13. 一种用于图像拼接的方法,包括:
    利用第一查找表(LUT)对多个源图像执行畸变校正,得到多个校正图像,所述第一LUT指示从合成图像中的像素坐标到所述多个源图像中的至少一个源像素坐标的畸变校正关系;
    利用第二LUT来调整所述多个校正图像,得到多个均衡图像,所述第二LUT指示针对所述多个校正图像的亮度和/或色度均衡关系;以及
    利用第三LUT,将所述多个均衡图像合并为所述合成图像,所述第三LUT指示从所述多个均衡图像各自的像素坐标到所述合成图像的像素坐标之间的融合关系。
  14. 根据权利要求13所述的方法,还包括:
    在高速缓存区中缓存所述多个源图像的像素值,
    其中对所述多个源图像畸变校正包括:从所述高速缓存区读取所述像素值以用于执行所述畸变校正。
  15. 根据权利要求13所述的方法,其中对所述多个源图像执行畸变校正包括:
    针对由所述合成图像分割的多个区块中的每个目标区块,
    通过逐行扫描确定所述目标区块的像素坐标;
    利用所述第一LUT,确定所述目标区块中的目标像素坐标映射到的所述多个源图像中的至少一个源像素坐标;
    读取所述至少一个源像素坐标的像素值;以及
    基于所述第一LUT所指示的从所述目标像素坐标到所述至少一个源像素坐标的畸变校正关系,来变换所读取的所述至少一个源像素坐标的所述像素值。
  16. 根据权利要求15所述的方法,其中读取所述至少一个源像素坐标的像素值包括:从高速缓存区读取所述至少一个源像素坐标,所述高速缓存区至少缓存所述多个源图像中被映射到所述目标区块的源像素坐标的像素值。
  17. 根据权利要求15所述的方法,其中所述多个区块的尺寸不同,并且位于所述合成图像的边缘的区块的尺寸比位于所述合成图像的中心的区块的尺寸更小。
  18. 根据权利要求13至17中任一项所述的方法,其中对所述多个源图像执行畸变校正包括并行执行以下操作:
    对所述多个源图像中的第一源图像和第二源图像执行畸变校正,得到第一校正图像,所述第一源图像和所述第二源图像的像素坐标被映射到所述合成图像中的不同部分;以及
    对所述多个源图像中的第三源图像和第四源图像执行畸变校正,得到第二校正图像,所述第三源图像和所述第四源图像的像素坐标被映射到所述合成图像中的不同部分。
  19. 根据权利要求13至17中任一项所述的方法,还包括:
    确定所述多个校正图像各自的亮度和/或色度直方图,所述第二LUT基于所述确定的亮度和/或色度直方图所确定。
  20. 根据权利要求13至19中任一项所述的方法,还包括:
    利用第四LUT来执行对第二多个源图像执行畸变校正,得到第二多个校正图像,所述第四LUT指示第二合成图像的像素坐标与所述第二多个源图像中的至少一个源像素坐标之间的畸变校正关系;
    利用第五LUT来调整所述第二多个校正图像,得到第二多个均衡图像,所述第五LUT指示针对所述第二多个校正图像的亮度和/或色度均衡关系;以及
    利用所述第六LUT,将所述第二多个均衡图像合并为所述第二合成图像,所述第六LUT指示所述第二合成图像的像素坐标与所述第二多个均衡图像的像素坐标之间的融合关系。
  21. 根据权利要求20所述的方法,还包括:
    获取第七LUT,所述第七LUT指示所述第一合成图像的像素坐标与所述第二合成图像的像素坐标之间的融合关系;以及
    利用所述第七LUT,将所述第一合成图像和所述第二合成图像合成为第三合成图像。
  22. 根据权利要求13至21中任一项所述的方法,其中所述方法被实现在专用集成电路(ASIC)芯片处,
    其中所述多个源图像、所述第一LUT、所述第二LUT和所述第三LUT从所述ASIC芯 片的第一外部存储装置被获取,并且
    其中所述合成图像被写入所述ASIC芯片的第二外部存储装置。
PCT/CN2021/102879 2021-06-28 2021-06-28 用于图像拼接的装置、系统和相关联的方法 WO2023272457A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180099013.7A CN117480522A (zh) 2021-06-28 2021-06-28 用于图像拼接的装置、系统和相关联的方法
PCT/CN2021/102879 WO2023272457A1 (zh) 2021-06-28 2021-06-28 用于图像拼接的装置、系统和相关联的方法

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/102879 WO2023272457A1 (zh) 2021-06-28 2021-06-28 用于图像拼接的装置、系统和相关联的方法

Publications (1)

Publication Number Publication Date
WO2023272457A1 true WO2023272457A1 (zh) 2023-01-05

Family

ID=84690900

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/102879 WO2023272457A1 (zh) 2021-06-28 2021-06-28 用于图像拼接的装置、系统和相关联的方法

Country Status (2)

Country Link
CN (1) CN117480522A (zh)
WO (1) WO2023272457A1 (zh)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140125774A1 (en) * 2011-06-21 2014-05-08 Vadas, Ltd. Apparatus for synthesizing three-dimensional images to visualize surroundings of vehicle and method thereof
US20150302561A1 (en) * 2014-04-21 2015-10-22 Texas Instruments Incorporated Method, apparatus and system for performing geometric calibration for surround view camera solution
CN107424118A (zh) * 2017-03-28 2017-12-01 天津大学 基于改进径向畸变校正的球状全景拼接方法
CN107424120A (zh) * 2017-04-12 2017-12-01 湖南源信光电科技股份有限公司 一种全景环视系统中的图像拼接方法
CN110381255A (zh) * 2019-07-29 2019-10-25 上海通立信息科技有限公司 应用360全景环视技术的车载视频监控系统及方法

Also Published As

Publication number Publication date
CN117480522A (zh) 2024-01-30

Similar Documents

Publication Publication Date Title
US20200349683A1 (en) Methods and system for efficient processing of generic geometric correction engine
CN111161660B (zh) 数据处理系统
WO2020038124A1 (zh) 图像对比度增强方法、装置、设备及存储介质
US9672586B2 (en) Image synthesis method with DSP and GPU
US8243163B2 (en) Adjusting auto white balance
US8094230B2 (en) Image processing apparatus, image processing method, and program
US20130162861A1 (en) Image processing device for generating reconstruction image, image generating method, and storage medium
US11145079B2 (en) Method and apparatus for arbitrary output shape processing of an image
CN109598673A (zh) 图像拼接方法、装置、终端及计算机可读存储介质
CA2940664A1 (en) Image stitching and automatic-color correction
EP2870585A1 (en) A method and system for correcting a distorted input image
JP5696783B2 (ja) 画像処理装置
US20170006272A1 (en) Multi-area white-balance control device, multi-area white-balance control method, multi-area white-balance control program, computer in which multi-area white-balance control program is recorded, multi-area white-balance image-processing device, multi-area white-balance image-processing method, multi-area white-balance image-processing program, computer in which multi-area white-balance image-processing program is recorded, and image-capture apparatus
CN109166076B (zh) 多相机拼接的亮度调整方法、装置及便携式终端
US7760967B2 (en) Image processing apparatus capable of carrying out magnification change process of image
KR20110032157A (ko) 저해상도 비디오로부터 고해상도 비디오를 생성하는 방법
US20210042891A1 (en) Method and apparatus for dynamic block partition of an image
US10368048B2 (en) Method for the representation of a three-dimensional scene on an auto-stereoscopic monitor
KR102383669B1 (ko) Hlbp 디스크립터 정보를 이용한 시차 최소화 스티칭 장치 및 방법
US20210090220A1 (en) Image de-warping system
WO2023272457A1 (zh) 用于图像拼接的装置、系统和相关联的方法
US20200120274A1 (en) Image processing apparatus and image processing method
US9129406B2 (en) Image processing method
WO2022155950A1 (zh) 虚拟视点合成方法、电子设备和计算机可读介质
CN111355942B (zh) 半导体设备、图像处理系统、方法和计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21947421

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202180099013.7

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21947421

Country of ref document: EP

Kind code of ref document: A1