WO2017076106A1 - Image stitching method and apparatus (图像的拼接方法和装置) - Google Patents

Image stitching method and apparatus

Info

Publication number
WO2017076106A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
information
coordinate system
pixel
visible light
Application number
PCT/CN2016/096182
Other languages
English (en)
French (fr)
Inventor
覃骋
毛慧
沈林杰
俞海
浦世亮
Original Assignee
杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Application filed by 杭州海康威视数字技术股份有限公司
Priority to EP16861375.0A (EP publication EP3373241A4)
Priority to US15/773,544 (US publication US10755381B2)
Publication of WO2017076106A1


Classifications

    • G06T 3/4038 — Geometric image transformations; image mosaicing, e.g. composing plane images from plane sub-images
    • G06F 17/16 — Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/70 — Image analysis; determining position or orientation of objects or cameras
    • H04N 23/698 — Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • G06T 2200/32 — Indexing scheme involving image mosaicing
    • G06T 2207/10004 — Image acquisition modality: still image; photographic image
    • G06T 2207/10028 — Image acquisition modality: range image; depth image; 3D point clouds
    • G06T 2207/20221 — Image fusion; image merging

Definitions

  • The present application relates to the field of image processing, and in particular to a method and apparatus for stitching images.
  • Image stitching is a technique for splicing several images with overlapping parts into a single large, seamless, high-resolution image. Stitching increases the effective field of view of the camera, and it introduces less distortion than enlarging the field of view with a wide-angle lens.
  • The SIFT-based stitching algorithm is computationally intensive and prone to matching errors in complex scenes, which results in poor image quality after stitching.
  • In the stitching process based on a fixed calibrated depth, the difference between the calibrated depth and the actual depth makes the transition zone prone to ghosting.
  • In depth-image-based stitching, multiple visible light cameras acquire image information, and depth information is computed by stereo vision between the cameras. Because this method requires real-time matching of feature points, the real-time performance of stitching suffers.
  • To address this, the related art proposes a stitching method that combines image information with depth information: the depth information of the overlapping area is obtained by a depth camera, the amount of parallax is derived from that depth information, and the pixel points of the overlapping area are mapped onto the target image. Stitching with depth information can adapt to more complex scenes; by adding a depth camera to the multiple visible light cameras to obtain the depth of the overlapping area, it mitigates the ghosting problem of traditional 2D image stitching.
  • However, the method has the following problems. First, the related art must compute the overlapping area in advance, obtain its depth map with the depth camera, and align the depth information of each pixel of the overlapping area with the visible light information, so the stitching process is inefficient. Second, the related art must identify the overlapping region and map the pixels of the overlapping region and the extended region by two different methods, so the transition zone may exhibit abrupt changes, and the fusion of the overlapping region and the extended region is unsatisfactory. Third, in stitching with depth information, the pixels of the extended region are not transformed using depth information during mapping.
  • The embodiments of the present application provide a method and device for stitching images, so as to at least solve the technical problem of low stitching efficiency in the image stitching process of the related art.
  • According to one aspect, a method for stitching images includes: acquiring a first image captured by a first camera and a second image captured by a second camera, where the first image is an image with visible light information, the second image is an image containing both depth information and visible light information, the first camera is disposed adjacent to the second camera, and the captured first image and second image have an overlapping region; mapping the pixel points in the second image onto the overlapping region and/or an extended region, where the extended region is the image area outside the first image onto which pixels of the second image are mapped; and, using the depth information and visible light information of the pixels of the second image, stitching the mapped pixels of the second image onto the overlapping region and/or the extended region to obtain a stitched image.
  • Optionally, the step of mapping the pixel points in the second image onto the overlapping region and/or the extended region includes: reading the coordinate information of the pixels of the second image in the second image coordinate system; using coordinate transformation, mapping that coordinate information into the first image coordinate system, obtaining the coordinate information of the pixels of the second image in the first image coordinate system; and determining, from the coordinate information of the pixels of the second image in the first image coordinate system, the positions of those pixels in the overlapping region and/or the extended region.
  • Optionally, the step of mapping the coordinate information of the pixels of the second image from the second image coordinate system into the first image coordinate system includes: mapping the coordinate information of the pixels from the second image coordinate system into the second camera coordinate system, obtaining their coordinate information in the second camera coordinate system; mapping that coordinate information from the second camera coordinate system into the first camera coordinate system, obtaining their coordinate information in the first camera coordinate system; and mapping that coordinate information from the first camera coordinate system into the first image coordinate system, obtaining the coordinate information of the pixels of the second image in the first image coordinate system.
  • Optionally, the following first formula maps the coordinate information $m_2(u_2, v_2)$ of a pixel of the second image in the second image coordinate system into the second camera coordinate system, giving the coordinate information $m_2(X_2, Y_2, Z_2)$ in the second camera coordinate system:

$$D_2 \begin{pmatrix} u_2 \\ v_2 \\ 1 \end{pmatrix} = A_2 \begin{pmatrix} X_2 \\ Y_2 \\ Z_2 \end{pmatrix}$$

    where $A_2$ is the internal parameter matrix of the second camera and $D_2$ is the scale factor of the second image (equal to the depth $Z_2$).
  • Optionally, the following second formula maps the coordinate information $m_2(X_2, Y_2, Z_2)$ of a pixel of the second image in the second camera coordinate system into the first camera coordinate system, giving the coordinate information $m_1(X_1, Y_1, Z_1)$ in the first camera coordinate system:

$$\begin{pmatrix} X_1 \\ Y_1 \\ Z_1 \\ 1 \end{pmatrix} = \begin{pmatrix} R & t \\ O^T & 1 \end{pmatrix} \begin{pmatrix} X_2 \\ Y_2 \\ Z_2 \\ 1 \end{pmatrix}$$

    where $R$ is the rotation matrix between the first camera coordinate system and the second camera coordinate system, and $t$ is their relative translation vector.
  • Optionally, the following third formula maps the coordinate information $m_1(X_1, Y_1, Z_1)$ of a pixel of the second image in the first camera coordinate system into the first image coordinate system, giving the coordinate information $m_1(u_1, v_1)$ in the first image coordinate system:

$$D_1 \begin{pmatrix} u_1 \\ v_1 \\ 1 \end{pmatrix} = A_1 \begin{pmatrix} X_1 \\ Y_1 \\ Z_1 \end{pmatrix}$$

    where $A_1$ is the internal parameter matrix of the first camera and $D_1$ is the scale factor of the first image (equal to the depth $Z_1$).
  • Optionally, the step of stitching the mapped pixels of the second image onto the overlapping area and/or the extended area using their depth information and visible light information includes: when a pixel of the second image is mapped into the overlapping area of the first image and the second image, performing a weighted operation on the visible light information of the corresponding pixel of the first image and the visible light information of the pixel of the second image, and assigning the weighted visible light information to that pixel of the overlapping area in the stitched image; and when a pixel of the second image is mapped into the extended area, assigning the visible light information of the pixel of the second image to that pixel of the extended area in the stitched image.
  • Optionally, the stitching step further includes: determining, from the coordinate information of the pixels of the second image in the first image coordinate system, whether multiple pixels of the second image are mapped onto the same pixel of the overlapping area and/or the extended area; and, when multiple pixels of the second image are mapped onto the same pixel of the overlapping area and/or the extended area, determining the visible light information of that pixel in the stitched image according to the multiple items of depth information of those pixels of the second image.
  • Optionally, the step of determining the visible light information of that same pixel in the stitched image includes: comparing the multiple items of depth information of the multiple pixels of the second image, and assigning the visible light information of the pixel with the smallest depth information to the same pixel of the overlapping area and/or the extended area in the stitched image; or performing a weighted operation on the visible light information of the multiple pixels of the second image, and assigning the weighted visible light information to that same pixel in the stitched image.
  • According to another aspect, an image stitching apparatus is provided, including: an acquiring unit, configured to acquire a first image captured by a first camera and a second image captured by a second camera, where the first image is an image with visible light information, the second image is an image containing both depth information and visible light information, the first camera is disposed adjacent to the second camera, and the captured first image and second image have an overlapping region; a mapping unit, configured to map the pixel points in the second image onto the overlapping region and/or an extended region, where the extended region is the image area outside the first image onto which pixels of the second image are mapped; and a stitching unit, configured to stitch the mapped pixels of the second image onto the overlapping region and/or the extended region using their depth information and visible light information, to obtain a stitched image.
  • Optionally, the mapping unit includes: a reading module, configured to read the coordinate information of the pixels of the second image in the second image coordinate system; a coordinate transformation module, configured to map, by coordinate transformation, the coordinate information of the pixels of the second image from the second image coordinate system into the first image coordinate system, obtaining the coordinate information of the pixels of the second image in the first image coordinate system; and a determining module, configured to determine, from that coordinate information, the positions of the pixels of the second image in the overlapping region and/or the extended region.
  • Optionally, the coordinate transformation module includes: a first mapping sub-module, configured to map the coordinate information of the pixels of the second image from the second image coordinate system into the second camera coordinate system; a second mapping sub-module, configured to map that coordinate information from the second camera coordinate system into the first camera coordinate system; and a third mapping sub-module, configured to map that coordinate information from the first camera coordinate system into the first image coordinate system, obtaining the coordinate information of the pixels of the second image in the first image coordinate system.
  • Optionally, the first mapping sub-module computes the coordinate information $m_2(X_2, Y_2, Z_2)$ of a pixel of the second image in the second camera coordinate system by the first formula above, where $A_2$ is the internal parameter matrix of the second camera and $D_2$ is the scale factor of the second image. The second mapping sub-module computes the coordinate information $m_1(X_1, Y_1, Z_1)$ in the first camera coordinate system by the second formula above, where $R$ is the rotation matrix between the first and second camera coordinate systems and $t$ is their relative translation vector. The third mapping sub-module computes the coordinate information $m_1(u_1, v_1)$ in the first image coordinate system by the third formula above, where $A_1$ is the internal parameter matrix of the first camera and $D_1$ is the scale factor of the first image.
  • Optionally, the stitching unit includes: a first stitching module, configured to, when a pixel of the second image is mapped into the overlapping area, perform a weighted operation on the visible light information of the corresponding pixel of the first image and the visible light information of the pixel of the second image, and assign the weighted visible light information to that pixel of the overlapping area in the stitched image; and a second stitching module, configured to, when a pixel of the second image is mapped into the extended area, assign the visible light information of the pixel of the second image to that pixel of the extended area in the stitched image.
  • Optionally, the stitching unit includes: a judging module, configured to determine, from the coordinate information of the pixels of the second image in the first image coordinate system, whether multiple pixels of the second image are mapped onto the same pixel of the overlapping area and/or the extended area; and a determining module, configured to, when multiple pixels of the second image are mapped onto the same pixel of the overlapping area and/or the extended area, determine the visible light information of that pixel in the stitched image according to the multiple items of depth information of those pixels.
  • Optionally, the determining module includes: a comparison sub-module, configured to compare the multiple items of depth information of the multiple pixels of the second image, and assign the visible light information of the pixel with the smallest depth information to the same pixel of the overlapping area and/or the extended area in the stitched image; and a weighting sub-module, configured to perform a weighted operation on the visible light information of the multiple pixels of the second image, and assign the weighted visible light information to that same pixel in the stitched image.
  • The present application also provides an electronic device including a housing, a processor, a memory, a circuit board, and a power supply circuit, where the circuit board is disposed inside the space enclosed by the housing, and the processor and the memory are disposed on the circuit board; the power supply circuit supplies power to the respective circuits and devices; the memory stores executable program code; and the processor performs the image stitching method provided by the embodiments of the present application by running the executable program code stored in the memory.
  • The present application also provides an application program that, when run, performs the image stitching method provided by the embodiments of the present application.
  • The present application also provides a storage medium for storing executable program code that, when executed, performs the image stitching method provided by the embodiments of the present application.
  • In the embodiments of the present application, a first image captured by the first camera and a second image captured by the second camera are acquired, where the second image is an RGB-D image containing both depth information and visible light information; the pixel points of the second image are mapped onto the first image; and, using the depth information and visible light information of the pixels of the second image, the mapped pixels are stitched onto the first image to obtain the stitched image, as sketched below.
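  • To make the flow concrete, the following is a minimal Python/NumPy sketch of this acquire-map-stitch pipeline. The function name, the horizontal canvas layout, and the fixed blending weight `alpha` are illustrative assumptions, not details from the patent; the per-pixel mapping follows the three formulas given later in this embodiment, and occlusion is resolved by keeping the smallest depth.

```python
import numpy as np

def stitch_rgbd_onto_visible(first_img, second_rgb, second_depth,
                             A1, A2, R, t, alpha=0.5):
    """Map every RGB-D pixel of the second image into the first image's
    coordinate system, then blend in the overlap and extend the canvas."""
    h1, w1 = first_img.shape[:2]
    h2, w2 = second_rgb.shape[:2]
    # Assumed horizontal layout: columns beyond w1 form the extended area.
    canvas = np.zeros((h1, 2 * w1, 3), dtype=first_img.dtype)
    canvas[:, :w1] = first_img
    zbuf = np.full((h1, 2 * w1), np.inf)          # depth buffer for occlusion

    A2_inv = np.linalg.inv(A2)
    for v2 in range(h2):
        for u2 in range(w2):
            Z2 = second_depth[v2, u2]
            if Z2 <= 0:
                continue                          # no valid depth measurement
            P2 = Z2 * A2_inv @ np.array([u2, v2, 1.0])  # image 2 -> camera 2
            P1 = R @ P2 + t                             # camera 2 -> camera 1
            if P1[2] <= 0:
                continue
            u1, v1, _ = np.rint(A1 @ P1 / P1[2]).astype(int)  # -> image 1
            if not (0 <= v1 < h1 and 0 <= u1 < canvas.shape[1]):
                continue
            if P1[2] >= zbuf[v1, u1]:
                continue                  # a nearer point already claimed this pixel
            zbuf[v1, u1] = P1[2]
            if u1 < w1:                   # overlapping area: weighted fusion
                canvas[v1, u1] = (alpha * first_img[v1, u1]
                                  + (1 - alpha) * second_rgb[v2, u2])
            else:                         # extended area: direct assignment
                canvas[v1, u1] = second_rgb[v2, u2]
    return canvas
```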
  • FIG. 2 is a schematic diagram of an optional pixel point mapping according to Embodiment 1 of the present application.
  • FIG. 3 is a schematic diagram of still another optional image splicing method according to Embodiment 1 of the present application.
  • FIG. 4 is a schematic diagram of still another optional pixel point mapping according to Embodiment 1 of the present application.
  • FIG. 5 is a schematic diagram of an optional pixel point in the presence of occlusion according to Embodiment 1 of the present application.
  • FIG. 6 is a schematic diagram of an optional image splicing apparatus according to Embodiment 2 of the present application.
  • FIG. 7 is a schematic structural diagram of an electronic device according to an optional embodiment of the present application.
  • In Embodiment 1 of the present application, an embodiment of a method for stitching images is provided. It should be noted that the steps shown in the flowchart of the drawings may be performed in a computer system, such as by a set of computer-executable instructions, and although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from the one described herein.
  • FIG. 1 is a flowchart of an optional image splicing method according to Embodiment 1 of the present application. As shown in FIG. 1, the method includes the following steps:
  • Step S12: acquiring a first image captured by the first camera and a second image captured by the second camera, where the first image is an image with visible light information, the second image is an image containing both depth information and visible light information, the first camera is disposed adjacent to the second camera, and the captured first image and second image have an overlapping region.
  • The first image may be the target image in the stitching, and the second image may be an image to be stitched; the number of images to be stitched may be one or more, and each additional image is stitched onto the first image in the same way as the second image.
  • The first image may be a visible light image captured by a visible light camera. Using a visible light image as part of the target image saves computation and camera cost.
  • The second camera may be an RGB-D camera, and the captured second image may be an RGB-D image. An RGB-D image combines a visible light image and a depth image; that is, every pixel of the second image carries both visible light information and depth information.
  • In an optional embodiment, the first camera and the second camera may constitute a binocular camera pair.
  • In the embodiment of the present application, only the overlap between the second image and the first image is required. Because every pixel of the second image (an RGB-D image) already carries both depth information and visible light information, the pixels of the overlapping area need not be aligned between a separate depth map and a visible light image during mapping, which improves the stitching efficiency.
  • Step S14: mapping the pixel points in the second image onto the overlapping area and/or the extended area.
  • The pixels of the second image are all mapped onto the first image in the same way; that is, in the embodiment of the present application, the pixels of the overlapping region and of the extended region are mapped by the same method. This resolves the ghosting of the transition zone in the stitched image and the possible abrupt change between the overlapping region and the extended region, improving the fusion quality of the transition segment between the two regions.
  • Step S16: stitching the mapped pixels of the second image onto the overlapping area and/or the extended area using the depth information and visible light information of the pixels of the second image, to obtain the stitched image.
  • When a mapped pixel of the second image falls in the overlapping area, that pixel may have two items of visible light information but only one item of depth information; that is, for a pixel falling in the overlapping area, the amounts of visible light information and depth information differ. When a mapped pixel of the second image falls in the extended region, it may have one item of visible light information and one item of depth information, or, when occlusion occurs at that pixel, several items of each; that is, for a pixel falling in the extended area, the amounts of visible light information and depth information are the same.
  • For a pixel of the overlapping area, a weighted operation may be applied to the two items of visible light information, and the result used as the visible light information of that pixel in the stitched image. For a pixel of the extended region, the visible light information of the pixel of the second image may be used as its visible light information in the stitched image.
  • When occlusion occurs, the items of depth information of the multiple pixels may be compared, and the visible light information of the pixel with the smallest depth information is used as the visible light information of the corresponding pixel in the stitched image.
  • The present application determines the visible light information of each pixel in the stitched image from both the depth information and the visible light information of the pixels, which solves the problem of foreground and background occlusion during stitching and improves the stitching quality.
  • FIG. 2 is a schematic diagram of an optional pixel point mapping according to Embodiment 1 of the present application.
  • the first image is a visible light image acquired from a visible light camera
  • the second image and the Nth image are RGB-D images acquired from an RGB-D camera.
  • the first image is a target image
  • the second image and the Nth image are images to be stitched
  • The Nth image and the second image are stitched onto the first image by the same stitching method. Take the stitching of the second image onto the first image as an example (illustrated by the dotted rectangle in FIG. 2).
  • The pixel imaged by a first spatial point P1 is P11 in the first image and P21 in the second image; the pixel imaged by a second spatial point P2 is P12 in the first image and P22 in the second image.
  • Mapping the first point P1 means mapping P21 onto P11; stitching P1 means fusing P21 and P11 in the overlapping area, that is, performing a weighted operation on the visible light information of P21 and of P11 to determine the visible light information of P1 in the stitched image.
  • Mapping the second point P2 means mapping P22 onto P12; stitching P2 means splicing P22 and P12 in the extended area. In the absence of occlusion, the visible light information of P22 is assigned to P12; in the presence of occlusion, the visible light information of P12 is determined from the depth information.
  • FIG. 3 is a schematic diagram of still another optional image stitching method according to Embodiment 1 of the present application. As shown in FIG. 3, when there are multiple images to be stitched (the second image through the Nth image), the second image may be stitched onto the first image first, and after the second image has been stitched, the Nth image is stitched onto the first image.
  • Through the above steps S12 to S16, the first image captured by the first camera and the second image captured by the second camera are acquired, where the first image is an image with visible light information, the second image is an image containing both depth information and visible light information, the first camera is disposed adjacent to the second camera, and the captured first image and second image have an overlapping region; the pixels of the second image are mapped onto the overlapping region and/or the extended region, where the extended region is the image area outside the first image onto which pixels of the second image are mapped; and, using the depth information and visible light information of the pixels of the second image, the mapped pixels are stitched onto the overlapping region and/or the extended region to obtain the stitched image.
  • Because the RGB-D camera acquires visible light information and depth information simultaneously, the depth information and visible light information of the pixels in the overlapping area do not need to be aligned, which improves image stitching efficiency and thereby solves the technical problem of low stitching efficiency in the image stitching process of the related art.
  • Optionally, mapping the pixel points in the second image onto the overlapping area and/or the extended area may include:
  • Step S141: reading the coordinate information of the pixels of the second image in the second image coordinate system.
  • Step S143: using coordinate transformation, mapping the coordinate information of the pixels of the second image from the second image coordinate system into the first image coordinate system, obtaining the coordinate information of the pixels of the second image in the first image coordinate system.
  • Step S145: determining, from the coordinate information of the pixels of the second image in the first image coordinate system, the positions of the pixels of the second image in the overlapping area and/or the extended area.
  • Both the first image coordinate system and the second image coordinate system are two-dimensional image coordinate systems. By coordinate transformation, the coordinate information of the pixels of the second image is mapped into the first image coordinate system, yielding the coordinate information of the pixels of the second image in the first image coordinate system.
  • FIG. 4 is a schematic diagram of still another optional pixel point mapping according to Embodiment 1 of the present application.
  • As shown in FIG. 4, the coordinate information of the imaged pixel in the first image is $m_1(u_1, v_1)$ in the first image coordinate system, and the coordinate information of the imaged pixel in the second image is $m_2(u_2, v_2)$ in the second image coordinate system. $O_1$ and $O_2$ are the optical centers of the first camera and the second camera. $Z_1$ is the distance of the imaged spatial point from the first camera (i.e., the depth information of point $m_1$ in the first camera coordinate system), and $Z_2$ is the distance of the imaged spatial point from the second camera (i.e., the depth information of point $m_2$ in the second camera coordinate system).
  • In step S143, mapping the coordinate information of the pixels of the second image from the second image coordinate system into the first image coordinate system by coordinate transformation, to obtain their coordinate information in the first image coordinate system, can include the following steps.
  • Step S1431: mapping the coordinate information of the pixels of the second image from the second image coordinate system into the second camera coordinate system, obtaining the coordinate information of the pixels of the second image in the second camera coordinate system.
  • Step S1433: mapping the coordinate information of the pixels of the second image from the second camera coordinate system into the first camera coordinate system, obtaining the coordinate information of the pixels of the second image in the first camera coordinate system.
  • Step S1435: mapping the coordinate information of the pixels of the second image from the first camera coordinate system into the first image coordinate system, obtaining the coordinate information of the pixels of the second image in the first image coordinate system.
  • coordinate information of the pixel points in the second image is mapped into the first image coordinate system using coordinate transformation, thereby obtaining coordinate information of the pixel points in the second image on the first image coordinate system.
  • Specifically, the following first formula maps the coordinate information $m_2(u_2, v_2)$ of a pixel of the second image in the second image coordinate system into the second camera coordinate system, giving its coordinate information $m_2(X_2, Y_2, Z_2)$:

$$D_2 \begin{pmatrix} u_2 \\ v_2 \\ 1 \end{pmatrix} = A_2 \begin{pmatrix} X_2 \\ Y_2 \\ Z_2 \end{pmatrix}, \qquad \begin{pmatrix} X_2 \\ Y_2 \\ Z_2 \end{pmatrix} = D_2 A_2^{-1} \begin{pmatrix} u_2 \\ v_2 \\ 1 \end{pmatrix}$$

    where $X_2$ and $Y_2$ are the abscissa and ordinate of the pixel in the second camera coordinate system; $Z_2$ is its depth information in the second camera coordinate system; $A_2$ is the internal parameter matrix of the second camera and $A_2^{-1}$ its inverse; $D_2$ is the scale factor of the second image (equal to $Z_2$); and $u_2$ and $v_2$ are the abscissa and ordinate of the pixel in the second image coordinate system.
  • The following second formula maps the coordinate information $m_2(X_2, Y_2, Z_2)$ of a pixel of the second image in the second camera coordinate system into the first camera coordinate system, giving its coordinate information $m_1(X_1, Y_1, Z_1)$:

$$\begin{pmatrix} X_1 \\ Y_1 \\ Z_1 \\ 1 \end{pmatrix} = \begin{pmatrix} R & t \\ O^T & 1 \end{pmatrix} \begin{pmatrix} X_2 \\ Y_2 \\ Z_2 \\ 1 \end{pmatrix}$$

    where $X_1$ and $Y_1$ are the abscissa and ordinate of the pixel in the first camera coordinate system; $Z_1$ is its depth information in the first camera coordinate system; $R$ is the rotation matrix between the first and second camera coordinate systems; $t$ is their relative translation vector; $O^T$ is a zero matrix (a row of zeros); and $X_2$, $Y_2$, $Z_2$ are as defined above.
  • The following third formula maps the coordinate information $m_1(X_1, Y_1, Z_1)$ of a pixel of the second image in the first camera coordinate system into the first image coordinate system, giving its coordinate information $m_1(u_1, v_1)$:

$$D_1 \begin{pmatrix} u_1 \\ v_1 \\ 1 \end{pmatrix} = A_1 \begin{pmatrix} X_1 \\ Y_1 \\ Z_1 \end{pmatrix}$$

    where $u_1$ and $v_1$ are the abscissa and ordinate of the pixel in the first image coordinate system; $A_1$ is the internal parameter matrix of the first camera; $D_1$ is the scale factor of the first image (equal to $Z_1$); $X_1$ and $Y_1$ are the abscissa and ordinate of the pixel in the first camera coordinate system; and $Z_1$ is its depth information in the first camera coordinate system.
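  • The three formulas can be applied in sequence to each pixel. Below is a minimal NumPy sketch of this chain; the function name `map_pixel_2_to_1` is illustrative, and the depth `Z2` is assumed positive.

```python
import numpy as np

def map_pixel_2_to_1(u2, v2, Z2, A1, A2, R, t):
    """Map pixel (u2, v2) of the second image, with depth Z2, into the
    first image coordinate system via the first, second and third formulas."""
    # First formula: image 2 -> camera 2 (back-projection, D2 = Z2).
    P2 = Z2 * np.linalg.inv(A2) @ np.array([u2, v2, 1.0])
    # Second formula: camera 2 -> camera 1, homogeneous transform [R t; O^T 1].
    T = np.block([[R, t.reshape(3, 1)], [np.zeros((1, 3)), np.ones((1, 1))]])
    P1 = (T @ np.append(P2, 1.0))[:3]
    # Third formula: camera 1 -> image 1 (projection, D1 = Z1).
    Z1 = P1[2]
    u1, v1, _ = A1 @ P1 / Z1
    return u1, v1, Z1
```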
  • The first formula, the second formula, and the third formula correspond to step S1431, step S1433, and step S1435, respectively. Through these coordinate transformations, the coordinate information of the pixels of the second image is mapped into the first image coordinate system, yielding their coordinate information in the first image coordinate system.
  • In an alternative embodiment, the process of mapping the coordinate information of the pixels of the second image into the first image coordinate system by coordinate transformation can also be computed as follows.
  • The following fourth formula maps the coordinate information of a pixel of the second image in the second image coordinate system directly into the first image coordinate system, giving its coordinate information $m_1$ in the first image coordinate system:

$$m_1 = \frac{Z_2 A_1 R A_2^{-1} m_2 + A_1 t}{Z_1}$$

    where $Z_2$ is the depth information of the pixel in the second camera coordinate system; $A_1$ is the internal parameter matrix of the first camera; $R$ is the rotation matrix between the first and second camera coordinate systems; $A_2^{-1}$ is the inverse of the internal parameter matrix of the second camera; $m_2$ is the (homogeneous) coordinate information of the pixel in the second image coordinate system; $t$ is the relative translation vector of the two camera coordinate systems; and $Z_1$ is the depth information of the pixel in the first camera coordinate system.
  • The depth information $Z_2$ of a pixel of the second image in the second camera coordinate system is obtained from the depth channel of the RGB-D image, and its depth information $Z_1$ in the first camera coordinate system can be computed from the first and second formulas above. The coordinate information $m_1(u_1, v_1)$ of the pixel in the first image coordinate system can then be computed by the fourth formula.
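  • The fourth formula is simply the composition of the first three, so both routes must agree. A quick numerical check, reusing the hypothetical `map_pixel_2_to_1` from the sketch above with made-up calibration values:

```python
import numpy as np

# Illustrative intrinsics, extrinsics and one pixel (all made-up values).
A1 = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
A2 = np.array([[520.0, 0.0, 310.0], [0.0, 520.0, 250.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.12, 0.0, 0.0])
u2, v2, Z2 = 100.0, 80.0, 2.5

u1, v1, Z1 = map_pixel_2_to_1(u2, v2, Z2, A1, A2, R, t)   # chained route
# Fourth formula: m1 = (Z2*A1*R*A2^-1*m2 + A1*t) / Z1, with m2 homogeneous.
m1 = (Z2 * A1 @ R @ np.linalg.inv(A2) @ np.array([u2, v2, 1.0]) + A1 @ t) / Z1
assert np.allclose(m1[:2], (u1, v1))   # matches up to floating point error
```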
  • In an optional embodiment, the internal parameter matrix $A_1$ of the first camera, the internal parameter matrix $A_2$ of the second camera, the rotation matrix $R$ between the first and second camera coordinate systems, and their relative translation vector $t$ can all be obtained by calibration.
  • The external parameter calibration may proceed as follows: assuming the target surfaces of the first camera and the second camera face the same way and their baseline lies in a single plane of the world coordinate system, the two camera coordinate systems differ only by a translation, with no rotation, so the external calibration is completed simply by measuring the baseline distance between the two cameras. The internal parameters can be calibrated with the Zhang Zhengyou calibration method: an ordinary black-and-white checkerboard calibration plate is used for the visible light camera, while the infrared camera uses a checkerboard plate made for infrared imaging, in which the white squares are a material with high reflectance and the black squares a material with low reflectance.
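  • As a hedged illustration, intrinsic calibration in the Zhang Zhengyou style is available in OpenCV; the sketch below estimates an internal parameter matrix from checkerboard views and encodes the patent's simplifying assumption that the external parameters reduce to a measured baseline translation. The file names, board size, and baseline value are placeholders.

```python
import cv2
import numpy as np

pattern = (9, 6)                                  # inner checkerboard corners
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in ["view0.png", "view1.png", "view2.png"]:   # placeholder files
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Zhang's method: fit the internal parameter matrix (and lens distortion).
_, A1, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# External parameters under the patent's assumption: the two cameras are
# parallel and coplanar, so R is the identity and t is the measured baseline.
R = np.eye(3)
t = np.array([0.12, 0.0, 0.0])                    # e.g. a 12 cm baseline
```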
  • Optionally, step S16, stitching the mapped pixels of the second image onto the overlapping area and/or the extended area using the depth information and visible light information of the pixels of the second image, may include:
  • Step S161: when a pixel of the second image is mapped into the overlapping area of the first image and the second image, a weighted operation is performed on the visible light information of the corresponding pixel of the first image and the visible light information of the pixel of the second image, and the weighted visible light information is assigned to that pixel of the overlapping area in the stitched image.
  • In step S161, after a pixel of the second image has been mapped onto the first image, it is determined whether the pixel has the same number of items of visible light information and of depth information, so as to determine whether it has been mapped into the overlapping area of the first image and the second image. A pixel of the overlapping area carries at least two items of visible light information, which can be fused by the weighted method to obtain the visible light information of that pixel in the stitched image.
  • Step S163: when a pixel of the second image is mapped into the extended area, the visible light information of the pixel of the second image is assigned to that pixel of the extended area in the stitched image; that is, the visible light information of the pixel of the second image may be assigned directly as the visible light information of the pixel in the stitched image.
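  • Written compactly, the rule of steps S161 and S163 is the following, where $I_1$ and $I_2$ are the visible light information of the first and second images at a stitched-image pixel $p$, and $\alpha \in [0, 1]$ is an assumed blending weight (the patent does not fix its value):

$$I_{\text{stitched}}(p) = \begin{cases} \alpha\, I_1(p) + (1-\alpha)\, I_2(p), & p \in \text{overlapping area} \\ I_2(p), & p \in \text{extended area} \end{cases}$$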
  • The present application determines the visible light information of each pixel in the stitched image from both the depth information and the visible light information of the pixels, which solves the problem of foreground and background occlusion during stitching and improves the stitching quality.
  • Optionally, in step S16, stitching the mapped pixels of the second image onto the overlapping area and/or the extended area using their depth information and visible light information may include:
  • Step S165: determining, from the coordinate information of the pixels of the second image in the first image coordinate system, whether multiple pixels of the second image are mapped onto the same pixel of the overlapping area and/or the extended area.
  • Step S167: when multiple pixels of the second image are mapped onto the same pixel of the overlapping area and/or the extended area, determining the visible light information of that pixel in the stitched image according to the multiple items of depth information of those pixels of the second image.
  • When multiple pixels of the second image are mapped onto the same pixel of the extended area, the problem of foreground-background occlusion arises.
  • Optionally, step S167, determining the visible light information of the same pixel of the overlapping area and/or the extended area in the stitched image according to the multiple items of depth information, may include the following alternatives.
  • Step S1671: comparing the multiple items of depth information of the multiple pixels of the second image, and assigning the visible light information of the pixel with the smallest depth information to the same pixel of the overlapping area and/or the extended area in the stitched image.
  • That is, when occlusion occurs, the visible light information with the smaller depth information (the nearer point) is used as the visible light information of the same pixel of the overlapping area and/or the extended area in the stitched image.
  • Step S1673: performing a weighted operation on the visible light information of the multiple pixels of the second image, and assigning the weighted visible light information to the same pixel of the overlapping area and/or the extended area in the stitched image.
  • In the weighted operation, the visible light information of the pixel with smaller depth information may be given a larger weight; the items of visible light information of the multiple pixels are weighted, and the weighted result is used as the visible light information of the pixel of the overlapping area and/or the extended area in the stitched image.
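  • A minimal sketch of the two alternatives of steps S1671 and S1673; the function name and the inverse-depth weighting are illustrative choices (the patent only requires that smaller depth receive larger weight):

```python
import numpy as np

def fuse_colliding_pixels(colors, depths, mode="nearest"):
    """colors: visible light values of second-image pixels that all map to
    the same stitched-image pixel; depths: their depth information."""
    colors = np.asarray(colors, dtype=float)
    depths = np.asarray(depths, dtype=float)
    if mode == "nearest":                 # step S1671: smallest depth wins
        return colors[np.argmin(depths)]
    # Step S1673: weighted fusion; inverse-depth weights give the nearer
    # (occluding) points larger weight, as an assumed concrete choice.
    w = 1.0 / np.clip(depths, 1e-6, None)
    w /= w.sum()
    return (w[:, None] * colors).sum(axis=0)
```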
  • FIG. 5 is a schematic diagram of an optional pixel point in the presence of occlusion according to Embodiment 1 of the present application. As shown in FIG. 5, there are two points $P_i$ and $P_j$ in space. The coordinate information of $P_i$ in the second image coordinate system is $m_i(u_i, v_i)$, and the coordinate information of $P_j$ in the second image coordinate system is $m_j(u_j, v_j)$. After mapping, $m_i(u_i, v_i)$ maps to $m(u, v)$ in the first image coordinate system, and $m_j(u_j, v_j)$ also maps to $m(u, v)$ in the first image coordinate system.
  • Here $u_i$ and $v_i$ are the abscissa and ordinate of $P_i$ in the second image coordinate system; $u_j$ and $v_j$ are the abscissa and ordinate of $P_j$ in the second image coordinate system; and $u$ and $v$ are the abscissa and ordinate of the common mapped point of $P_i$ and $P_j$ in the first image coordinate system.
  • In this case, a weighted operation is performed on the items of visible light information of the multiple pixels, and the weighted result is used as the visible light information of the pixel of the extended area in the stitched image.
  • According to the embodiments of the present application, an apparatus embodiment of an image stitching apparatus is also provided. The image stitching apparatus may be used to implement the image stitching method of the embodiments of the present application, and the image stitching method may be performed by the image stitching apparatus; details already described for the method are not repeated here.
  • FIG. 6 is a schematic diagram of an optional image splicing apparatus according to Embodiment 2 of the present application. As shown in Figure 6, the device includes:
  • The acquiring unit 40 is configured to acquire the first image captured by the first camera and the second image captured by the second camera, where the first image is an image with visible light information, the second image is an image containing both depth information and visible light information, the first camera is disposed adjacent to the second camera, and the captured first image and second image have an overlapping region.
  • The first image may be the target image in the stitching, and the second image may be an image to be stitched; the number of images to be stitched may be one or more, and each additional image is stitched onto the first image in the same way as the second image.
  • The first image may be a visible light image captured by a visible light camera. Using a visible light image as part of the target image saves computation and camera cost.
  • The second camera may be an RGB-D camera, and the captured second image may be an RGB-D image. An RGB-D image combines a visible light image and a depth image; that is, every pixel of the second image carries both visible light information and depth information.
  • Only the overlap between the second image and the first image is required. Because every pixel of the second image (an RGB-D image) already carries both depth information and visible light information, the pixels of the overlapping area need not be aligned between a separate depth map and a visible light image during mapping, which improves the stitching efficiency.
  • the mapping unit 42 is configured to map pixel points in the second image onto the overlapping area and/or the extended area.
  • The pixels of the second image are all mapped onto the first image in the same way; that is, the pixels of the overlapping region and of the extended region are mapped by the same method. This resolves the possible ghosting of the transition zone in the stitched image and the possible abrupt change between the overlapping region and the extended region, improving the fusion quality of the transition segment between the two regions.
  • The stitching unit 44 is configured to stitch the mapped pixels of the second image onto the overlapping area and/or the extended area using the depth information and visible light information of the pixels of the second image, to obtain the stitched image.
  • When a mapped pixel of the second image falls in the overlapping area, that pixel may have two items of visible light information but only one item of depth information; that is, for a pixel falling in the overlapping area, the amounts of visible light information and depth information differ. When a mapped pixel of the second image falls in the extended region, it may have one item of visible light information and one item of depth information, or, when occlusion occurs at that pixel, several items of each; that is, for a pixel falling in the extended area, the amounts of visible light information and depth information are the same.
  • For a pixel of the overlapping area, a weighted operation may be applied to the two items of visible light information, and the result used as the visible light information of that pixel in the stitched image. For a pixel of the extended region, the visible light information of the pixel of the second image may be used as its visible light information in the stitched image. When occlusion occurs, the items of depth information of the multiple pixels may be compared, and the visible light information of the pixel with the smallest depth information is used as the visible light information of the corresponding pixel in the stitched image.
  • The present application determines the visible light information of each pixel in the stitched image from both the depth information and the visible light information of the pixels, which solves the problem of foreground and background occlusion during stitching and improves the stitching quality.
  • In summary, the acquiring unit 40 acquires the first image captured by the first camera and the second image captured by the second camera, where the first image is an image with visible light information, the second image is an image containing both depth information and visible light information, the first camera is disposed adjacent to the second camera, and the captured images have an overlapping area; the mapping unit 42 maps the pixels of the second image onto the overlapping region and/or the extended region; and the stitching unit 44 stitches the mapped pixels of the second image onto the overlapping region and/or the extended region using their depth information and visible light information, obtaining the stitched image.
  • mapping unit 42 may include:
  • a reading module configured to read coordinate information of a pixel point in the second image in the second image coordinate system
  • a coordinate transformation module configured to map coordinate information of a pixel in the second image in the second image coordinate system to the first image coordinate system by using coordinate transformation, to obtain a pixel point in the second image in the first image coordinate system Coordinate information on;
  • a determining module configured to determine, by the coordinate information of the pixel point in the second image on the first image coordinate system, the position of the pixel in the second image in the overlapping area and/or the extended area.
  • the coordinate transformation module comprises:
  • a first mapping sub-module configured to map coordinate information of the pixel in the second image in the second image coordinate system to the second camera coordinate system, to obtain a pixel point in the second image in the second camera coordinate system Coordinate information
  • a second mapping sub-module configured to map coordinate information of the pixel in the second image in the second camera coordinate system to the first camera coordinate system, to obtain a pixel point in the second image in the first camera coordinate system Coordinate information
  • a third mapping sub-module configured to map coordinate information of the pixel in the second image in the first camera coordinate system to the first image coordinate system, to obtain a pixel point in the second image on the first image coordinate system Coordinate information.
  • In an optional embodiment, the first mapping sub-module computes the coordinate information $m_2(X_2, Y_2, Z_2)$ of a pixel of the second image in the second camera coordinate system by the first formula:

$$\begin{pmatrix} X_2 \\ Y_2 \\ Z_2 \end{pmatrix} = D_2 A_2^{-1} \begin{pmatrix} u_2 \\ v_2 \\ 1 \end{pmatrix}$$

    where $X_2$ and $Y_2$ are the abscissa and ordinate of the pixel in the second camera coordinate system; $Z_2$ is its depth information in the second camera coordinate system; $A_2$ is the internal parameter matrix of the second camera and $A_2^{-1}$ its inverse; $D_2$ is the scale factor of the second image; and $u_2$ and $v_2$ are the abscissa and ordinate of the pixel in the second image coordinate system.
  • In an optional embodiment, the second mapping sub-module computes the coordinate information $m_1(X_1, Y_1, Z_1)$ of a pixel of the second image in the first camera coordinate system by the second formula:

$$\begin{pmatrix} X_1 \\ Y_1 \\ Z_1 \\ 1 \end{pmatrix} = \begin{pmatrix} R & t \\ O^T & 1 \end{pmatrix} \begin{pmatrix} X_2 \\ Y_2 \\ Z_2 \\ 1 \end{pmatrix}$$

    where $X_1$ and $Y_1$ are the abscissa and ordinate of the pixel in the first camera coordinate system; $Z_1$ is its depth information in the first camera coordinate system; $R$ is the rotation matrix between the first and second camera coordinate systems; $t$ is their relative translation vector; $O^T$ is a zero matrix (a row of zeros); and $X_2$, $Y_2$, $Z_2$ are as defined above.
  • The first formula, the second formula, and the third formula correspond to the first mapping sub-module, the second mapping sub-module, and the third mapping sub-module, respectively. Through these coordinate transformations, the coordinate information of the pixels of the second image is mapped into the first image coordinate system, yielding their coordinate information in the first image coordinate system. In an alternative embodiment, this mapping can also be computed as follows.
  • The following fourth formula maps the coordinate information of a pixel of the second image in the second image coordinate system directly into the first image coordinate system, giving its coordinate information $m_1$:

$$m_1 = \frac{Z_2 A_1 R A_2^{-1} m_2 + A_1 t}{Z_1}$$

    where $Z_2$ is the depth information of the pixel in the second camera coordinate system; $A_1$ is the internal parameter matrix of the first camera; $R$ is the rotation matrix between the first and second camera coordinate systems; $A_2^{-1}$ is the inverse of the internal parameter matrix of the second camera; $m_2$ is the (homogeneous) coordinate information of the pixel in the second image coordinate system; $t$ is the relative translation vector of the two camera coordinate systems; and $Z_1$ is the depth information of the pixel in the first camera coordinate system.
  • the depth information Z2 of a pixel point in the second image in the second camera coordinate system can be acquired from the depth information of the RGB-D image, and the depth information Z1 of the pixel point in the first camera coordinate system can be calculated by the first formula and the second formula above.
  • once Z1 is known, the coordinate information m1(u1, v1) of the pixel point in the second image in the first image coordinate system can be calculated by the fourth formula.
  • the internal parameter matrix A1 of the first camera, the internal parameter matrix A2 of the second camera, the rotation matrix R between the first camera coordinate system and the second camera coordinate system, and the relative translation vector t between the first camera coordinate system and the second camera coordinate system can be obtained by calibration.
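Under the same assumptions, the fourth formula can be evaluated in one shot; the sketch below recovers Z1 by chaining the first and second formulas, exactly as the text describes, with calibrated A1, A2, R and t supplied as NumPy arrays.

```python
import numpy as np

def map_pixel(u2, v2, Z2, A1, A2, R, t):
    """Fourth formula: m1 = (Z2*A1@R@inv(A2)@m2 + A1@t) / Z1."""
    m2 = np.array([u2, v2, 1.0])
    M2 = Z2 * (np.linalg.inv(A2) @ m2)   # first formula
    Z1 = (R @ M2 + t)[2]                 # depth via the second formula
    m1 = (Z2 * A1 @ R @ np.linalg.inv(A2) @ m2 + A1 @ t) / Z1
    return m1[0], m1[1]                  # (u1, v1)
```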
  • the third mapping sub-module calculates, by using the following third formula, the coordinate information m1(u1, v1) of a pixel point in the second image in the first image coordinate system:

$$D_1 \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = A_1 \begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \end{bmatrix}$$

  • u1 is the abscissa of the pixel point in the second image in the first image coordinate system;
  • v1 is the ordinate of the pixel point in the second image in the first image coordinate system;
  • A1 is the internal parameter matrix of the first camera;
  • D1 is the scale factor of the first image;
  • X1 is the abscissa of the pixel point in the second image in the first camera coordinate system;
  • Y1 is the ordinate of the pixel point in the second image in the first camera coordinate system;
  • Z1 is the depth information of the pixel point in the second image in the first camera coordinate system.
  • the splicing unit 44 includes:
  • a first splicing module configured to, when a pixel point in the second image is mapped to the overlapping area of the first image and the second image, perform a weighting operation on the visible light information of the pixel point in the first image and the visible light information of the pixel point in the second image, and assign the weighted visible light information to the visible light information of the pixel point in the overlapping area of the stitched image.
  • whether a pixel point is mapped to the overlapping area of the first image and the second image can be determined by judging, after the mapping, whether the number of pieces of visible light information of the pixel point equals the number of pieces of its depth information.
  • at least two pieces of visible light information exist for a pixel point of the overlapping area; they can be fused by weighting to obtain the visible light information of the pixel point in the stitched image.
  • a second splicing module configured to, when a pixel point in the second image is mapped to the extended area, assign the visible light information of the pixel point in the second image to the visible light information of the pixel point in the extended area of the stitched image.
  • when a pixel point of the second image is mapped to the extended area, its visible light information can be directly assigned to the visible light information of that pixel point in the stitched image.
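A minimal sketch of the two splicing rules; pano (the stitched visible light canvas), color_first and color_second (the two visible light values available for the mapped pixel) and the fixed weight w are assumed names, not part of the original.

```python
def splice_pixel(pano, u1, v1, color_first, color_second,
                 in_overlap, w=0.5):
    """First splicing module: weighted fusion in the overlapping area.
    Second splicing module: direct assignment in the extended area."""
    if in_overlap:
        pano[v1, u1] = w * color_first + (1.0 - w) * color_second
    else:
        pano[v1, u1] = color_second
    return pano
```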
  • the present application determines the visible light information of a pixel point in the stitched image by using both the depth information and the visible light information of the pixel, which solves the problem of foreground and background occlusion during stitching and achieves the purpose of improving stitching quality.
  • the splicing unit may further include:
  • a judging module configured to judge, from the coordinate information of the mapped pixel points of the second image in the first image coordinate system, whether multiple pixel points in the second image are simultaneously mapped to the same pixel point of the overlapping area and/or the extended area.
  • a determining module configured to, when multiple pixel points in the second image are simultaneously mapped to the same pixel point of the overlapping area and/or the extended area, determine, according to the multiple pieces of depth information of the multiple pixel points in the second image, the visible light information of that same pixel point in the overlapping area and/or the extended area of the stitched image.
  • when a mapped pixel point of the second image has multiple pieces of depth information, it can be determined that multiple pixel points in the second image are simultaneously mapped to the same pixel point of the extended area.
  • when multiple pixel points are mapped to the same pixel point simultaneously, the foreground/background occlusion problem arises.
  • the determining module may include:
  • a comparison sub-module configured to compare the multiple pieces of depth information of the multiple pixel points in the second image, and assign the visible light information of the pixel point with the smallest depth information among the multiple pieces of depth information to the visible light information of the same pixel point in the overlapping area and/or the extended area of the stitched image.
  • a weighting sub-module configured to perform a weighting operation on the visible light information of the multiple pixel points in the second image, and assign the weighted visible light information to the visible light information of the same pixel point in the overlapping area and/or the extended area of the stitched image.
  • for multiple pixel points, a pixel point with smaller depth information is given a larger weight value; a weighting operation is performed over the multiple pieces of visible light information of the multiple pixel points, and the weighted visible light information is used as the visible light information of the pixel point of the overlapping area and/or the extended area in the stitched image.
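The comparison sub-module behaves like a z-buffer: for each target pixel, only the contribution with the smallest depth survives. A minimal sketch, with zbuf an assumed depth buffer initialised to infinity:

```python
import numpy as np

def resolve_collision(pano, zbuf, u1, v1, Z1, color):
    """Comparison sub-module: keep the visible light information of the
    nearest point when several points land on the same pixel."""
    if Z1 < zbuf[v1, u1]:
        zbuf[v1, u1] = Z1
        pano[v1, u1] = color
    return pano, zbuf

# zbuf would start as np.full((height, width), np.inf)
```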
  • an embodiment of the present application further provides an electronic device, including: a housing 701, a processor 702, a memory 703, a circuit board 704, and a power circuit 705. The circuit board 704 is disposed inside the space enclosed by the housing 701; the processor 702 and the memory 703 are disposed on the circuit board 704; the power circuit 705 is used to supply power to the respective circuits or devices; the memory 703 is used to store executable program code; and the processor 702 runs the executable program code stored in the memory 703 to perform the image stitching method provided by the embodiments of the present application, the method including:
  • acquiring a first image captured by a first camera and a second image captured by a second camera, wherein the first image is an image with visible light information, the second image is an image including depth information and the visible light information, the first camera is disposed adjacent to the second camera, and the captured first image and second image have an overlapping area;
  • mapping the pixel points in the second image onto the overlapping area and/or an extended area, wherein the extended area is the image area outside the first image to which pixel points in the second image are mapped;
  • stitching the mapped pixel points of the second image onto the overlapping area and/or the extended area by means of the depth information and the visible light information of the pixel points in the second image, to obtain a stitched image.
  • the processor in the electronic device runs the executable program code stored in the memory to execute the image stitching method described above, thereby achieving the technical effect of improving image stitching efficiency and solving the technical problem of low stitching efficiency in the image stitching process of the related art.
  • an embodiment of the present application further provides an application program for performing, at runtime, the image stitching method, where the image stitching method may include:
  • acquiring a first image captured by a first camera and a second image captured by a second camera, wherein the first image is an image with visible light information, the second image is an image including depth information and the visible light information, the first camera is disposed adjacent to the second camera, and the captured first image and second image have an overlapping area;
  • mapping the pixel points in the second image onto the overlapping area and/or an extended area, wherein the extended area is the image area outside the first image to which pixel points in the second image are mapped;
  • stitching the mapped pixel points of the second image onto the overlapping area and/or the extended area by means of the depth information and the visible light information of the pixel points in the second image, to obtain a stitched image.
  • the application program is used to perform, at runtime, the image stitching method described above, thereby achieving the technical effect of improving image stitching efficiency and solving the technical problem of low stitching efficiency in the image stitching process of the related art.
  • an embodiment of the present application further provides a storage medium for storing executable program code, where the executable program code is run to perform the image stitching method, and where the image stitching method may include:
  • acquiring a first image captured by a first camera and a second image captured by a second camera, wherein the first image is an image with visible light information, the second image is an image including depth information and the visible light information, the first camera is disposed adjacent to the second camera, and the captured first image and second image have an overlapping area;
  • mapping the pixel points in the second image onto the overlapping area and/or an extended area, wherein the extended area is the image area outside the first image to which pixel points in the second image are mapped;
  • stitching the mapped pixel points of the second image onto the overlapping area and/or the extended area by means of the depth information and the visible light information of the pixel points in the second image, to obtain a stitched image.
  • the storage medium stores executable program code that, when run, performs the image stitching method described above, thereby achieving the technical effect of improving image stitching efficiency and solving the technical problem of low stitching efficiency in the image stitching process of the related art.
  • for the electronic device, application program, and storage medium embodiments, since the method content involved is substantially similar to the foregoing method embodiments, the description is relatively simple; for relevant parts, refer to the description of the method embodiments.
  • in the several embodiments provided by the present application, it should be understood that the disclosed technical content may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of units may be a division by logical function, and in actual implementation there may be other ways of division: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium, including a number of instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present application.
  • the foregoing storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Mathematical Optimization (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Algebra (AREA)
  • Computing Systems (AREA)
  • Image Processing (AREA)

Abstract

The present application discloses an image stitching method and device. The method includes: acquiring a first image captured by a first camera and a second image captured by a second camera, wherein the first image is an image with visible light information, the second image is an image including depth information and visible light information, the first camera is disposed adjacent to the second camera, and the captured first image and second image have an overlapping area; mapping pixel points in the second image onto the overlapping area and/or an extended area, wherein the extended area is the image area outside the first image to which pixel points in the second image are mapped; and stitching the mapped pixel points of the second image onto the overlapping area and/or the extended area by means of the depth information and the visible light information of the pixel points in the second image, to obtain a stitched image. The present application solves the technical problem of low stitching efficiency in the image stitching process of the prior art.

Description

Image stitching method and device
This application claims priority to Chinese Patent Application No. 201510752156.X, filed with the Chinese Patent Office on November 6, 2015 and entitled "Image stitching method and device", which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the field of image processing, and in particular to an image stitching method and device.
Background
Image stitching is a technique for combining several images with overlapping portions into one large, seamless, high-resolution image; through image stitching, the field of view of a camera can be enlarged. Compared with enlarging the field of view with a wide-angle lens, image stitching introduces less distortion.
Traditional two-dimensional image stitching, for example stitching algorithms based on SIFT (Scale Invariant Feature Transform), is computationally expensive and prone to errors in complex scenes, resulting in poor image quality after stitching. Moreover, during stitching, because the calibrated depth differs from the actual depth, ghosting tends to appear in the transition zone. As another example, in depth-image-based stitching techniques, multiple visible light cameras are used to capture image information, and depth information obtained from the stereo vision principle between cameras is used for stitching; since this method requires real-time feature point matching, the real-time performance of stitching is affected.
As the technology for acquiring depth information matures, the related art has proposed methods of image stitching combined with depth information: the overlapping area of the target picture and the picture to be stitched is found, the depth information of the overlapping area is acquired by a depth camera, the parallax is obtained from the depth information, and the pixel points of the overlapping area are mapped onto the target image. Compared with traditional two-dimensional image stitching, such methods can adapt to stitching of more complex scenes, and by adding an additional depth camera on top of multiple visible light cameras to acquire depth information of the overlapping area, they can solve the ghosting problem of traditional two-dimensional stitching. However, these methods have the following problems. First, the related art needs to pre-compute the overlapping area, acquire a depth map of the overlapping area with a depth camera, and align the depth information of the pixel points in the overlapping area with the visible light information, so the stitching process is inefficient. Second, the related art needs to identify the overlapping area and uses two different methods to map pixel points of the overlapping area and of the extended area, so abrupt changes may appear in the transition zone, causing unsatisfactory fusion between the overlapping area and the extended area. Third, in the related process of stitching with depth information, the pixel points of the extended area are not transformed with depth information during mapping; when pixel points are mapped from the image to be stitched to the target image, because the viewpoint changes, pixel points observable in the image to be stitched may be invisible in the stitched target image, so the problem of foreground and background occlusion is not handled well.
No effective solution has yet been proposed for the above problem of low stitching efficiency in the image stitching process.
Summary
The embodiments of the present application provide an image stitching method and device, so as to at least solve the technical problem of low stitching efficiency in the image stitching process of the related art.
According to one aspect of the embodiments of the present application, an image stitching method is provided, including: acquiring a first image captured by a first camera and a second image captured by a second camera, wherein the first image is an image with visible light information, the second image is an image including depth information and visible light information, the first camera is disposed adjacent to the second camera, and the captured first image and second image have an overlapping area; mapping pixel points in the second image onto the overlapping area and/or an extended area, wherein the extended area is the image area outside the first image to which pixel points in the second image are mapped; and stitching the mapped pixel points of the second image onto the overlapping area and/or the extended area by means of the depth information and the visible light information of the pixel points in the second image, to obtain a stitched image.
Further, the step of mapping the pixel points in the second image onto the overlapping area and/or the extended area includes: reading coordinate information of the pixel points in the second image in a second image coordinate system; mapping, by coordinate transformation, the coordinate information of the pixel points in the second image from the second image coordinate system to a first image coordinate system, to obtain coordinate information of the pixel points in the second image in the first image coordinate system; and determining, from the coordinate information of the pixel points in the second image in the first image coordinate system, the positions of the pixel points in the second image in the overlapping area and/or the extended area.
Further, the step of mapping, by coordinate transformation, the coordinate information of the pixel points in the second image from the second image coordinate system to the first image coordinate system includes: mapping the coordinate information of the pixel points in the second image from the second image coordinate system to a second camera coordinate system, to obtain coordinate information of the pixel points in the second image in the second camera coordinate system; mapping the coordinate information of the pixel points in the second image from the second camera coordinate system to a first camera coordinate system, to obtain coordinate information of the pixel points in the second image in the first camera coordinate system; and mapping the coordinate information of the pixel points in the second image from the first camera coordinate system to the first image coordinate system, to obtain the coordinate information of the pixel points in the second image in the first image coordinate system.
Further, the coordinate information m2(u2, v2) of a pixel point in the second image in the second image coordinate system is mapped to the second camera coordinate system by the following first formula, to obtain the coordinate information m2(X2, Y2, Z2) of the pixel point in the second camera coordinate system:

$$\begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \end{bmatrix} = D_2 A_2^{-1} \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix}$$

where A2 is the internal parameter matrix of the second camera, and D2 is the scale factor of the second image.
Further, the coordinate information m2(X2, Y2, Z2) of the pixel point in the second camera coordinate system is mapped to the first camera coordinate system by the following second formula, to obtain the coordinate information m1(X1, Y1, Z1) of the pixel point in the first camera coordinate system:

$$\begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ O^{T} & 1 \end{bmatrix} \begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \\ 1 \end{bmatrix}$$

where R is the rotation matrix between the first camera coordinate system and the second camera coordinate system, and t is the relative translation vector between the first camera coordinate system and the second camera coordinate system.
Further, the coordinate information m1(X1, Y1, Z1) of the pixel point in the first camera coordinate system is mapped to the first image coordinate system by the following third formula, to obtain the coordinate information m1(u1, v1) of the pixel point in the first image coordinate system:

$$D_1 \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = A_1 \begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \end{bmatrix}$$

where A1 is the internal parameter matrix of the first camera, and D1 is the scale factor of the first image.
Further, the step of stitching the mapped pixel points of the second image onto the overlapping area and/or the extended area by means of the depth information and visible light information of the pixel points in the second image includes: when a pixel point in the second image is mapped to the overlapping area of the first image and the second image, performing a weighting operation on the visible light information of the pixel point in the first image and the visible light information of the pixel point in the second image, and assigning the weighted visible light information to the visible light information of the pixel point in the overlapping area of the stitched image; and when a pixel point in the second image is mapped to the extended area, assigning the visible light information of the pixel point in the second image to the visible light information of the pixel point in the extended area of the stitched image.
Further, the step of stitching the mapped pixel points of the second image onto the overlapping area and/or the extended area includes: judging, from the coordinate information of the mapped pixel points of the second image in the first image coordinate system, whether multiple pixel points in the second image are simultaneously mapped to the same pixel point of the overlapping area and/or the extended area; and, when multiple pixel points in the second image are simultaneously mapped to the same pixel point of the overlapping area and/or the extended area, determining, according to the multiple pieces of depth information of the multiple pixel points in the second image, the visible light information of that same pixel point in the overlapping area and/or the extended area of the stitched image.
Further, the step of determining, according to the multiple pieces of depth information of the multiple pixel points in the second image, the visible light information of the same pixel point in the overlapping area and/or the extended area of the stitched image includes: comparing the multiple pieces of depth information of the multiple pixel points in the second image, and assigning the visible light information of the pixel point with the smallest depth information to the visible light information of the same pixel point in the overlapping area and/or the extended area of the stitched image; or performing a weighting operation on the visible light information of the multiple pixel points in the second image, and assigning the weighted visible light information to the visible light information of the same pixel point in the overlapping area and/or the extended area of the stitched image.
According to another aspect of the embodiments of the present application, an image stitching device is further provided, including: an acquisition unit configured to acquire a first image captured by a first camera and a second image captured by a second camera, wherein the first image is an image with visible light information, the second image is an image including depth information and visible light information, the first camera is disposed adjacent to the second camera, and the captured first image and second image have an overlapping area; a mapping unit configured to map pixel points in the second image onto the overlapping area and/or an extended area, wherein the extended area is the image area outside the first image to which pixel points in the second image are mapped; and a splicing unit configured to stitch the mapped pixel points of the second image onto the overlapping area and/or the extended area by means of the depth information and the visible light information of the pixel points in the second image, to obtain a stitched image.
Further, the mapping unit includes: a reading module configured to read coordinate information of the pixel points in the second image in a second image coordinate system; a coordinate transformation module configured to map, by coordinate transformation, the coordinate information of the pixel points in the second image from the second image coordinate system to a first image coordinate system, to obtain coordinate information of the pixel points in the second image in the first image coordinate system; and a determining module configured to determine, from the coordinate information of the pixel points in the second image in the first image coordinate system, the positions of the pixel points in the second image in the overlapping area and/or the extended area.
Further, the coordinate transformation module includes: a first mapping sub-module configured to map the coordinate information of the pixel points in the second image from the second image coordinate system to a second camera coordinate system, to obtain coordinate information of the pixel points in the second image in the second camera coordinate system; a second mapping sub-module configured to map the coordinate information of the pixel points in the second image from the second camera coordinate system to a first camera coordinate system, to obtain coordinate information of the pixel points in the second image in the first camera coordinate system; and a third mapping sub-module configured to map the coordinate information of the pixel points in the second image from the first camera coordinate system to the first image coordinate system, to obtain the coordinate information of the pixel points in the second image in the first image coordinate system.
Further, the first mapping sub-module calculates, by the following first formula, the coordinate information m2(X2, Y2, Z2) of a pixel point in the second image in the second camera coordinate system:

$$\begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \end{bmatrix} = D_2 A_2^{-1} \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix}$$

where A2 is the internal parameter matrix of the second camera, and D2 is the scale factor of the second image.
Further, the second mapping sub-module calculates, by the following second formula, the coordinate information m1(X1, Y1, Z1) of a pixel point in the second image in the first camera coordinate system:

$$\begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ O^{T} & 1 \end{bmatrix} \begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \\ 1 \end{bmatrix}$$

where R is the rotation matrix between the first camera coordinate system and the second camera coordinate system, and t is the relative translation vector between the first camera coordinate system and the second camera coordinate system.
Further, the third mapping sub-module calculates, by the following third formula, the coordinate information m1(u1, v1) of a pixel point in the second image in the first image coordinate system:

$$D_1 \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = A_1 \begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \end{bmatrix}$$

where A1 is the internal parameter matrix of the first camera, and D1 is the scale factor of the first image.
Further, the splicing unit includes: a first splicing module configured to, when a pixel point in the second image is mapped to the overlapping area, perform a weighting operation on the visible light information of the pixel point in the first image and the visible light information of the pixel point in the second image, and assign the weighted visible light information to the visible light information of the pixel point in the overlapping area of the stitched image; and a second splicing module configured to, when a pixel point in the second image is mapped to the extended area, assign the visible light information of the pixel point in the second image to the visible light information of the pixel point in the extended area of the stitched image.
Further, the splicing unit includes: a judging module configured to judge, from the coordinate information of the mapped pixel points of the second image in the first image coordinate system, whether multiple pixel points in the second image are simultaneously mapped to the same pixel point of the overlapping area and/or the extended area; and a determining module configured to, when multiple pixel points in the second image are simultaneously mapped to the same pixel point of the overlapping area and/or the extended area, determine, according to the multiple pieces of depth information of the multiple pixel points in the second image, the visible light information of that same pixel point in the overlapping area and/or the extended area of the stitched image.
Further, the determining module includes: a comparison sub-module configured to compare the multiple pieces of depth information of the multiple pixel points in the second image, and assign the visible light information of the pixel point with the smallest depth information to the visible light information of the same pixel point in the overlapping area and/or the extended area of the stitched image; and a weighting sub-module configured to perform a weighting operation on the visible light information of the multiple pixel points in the second image, and assign the weighted visible light information to the visible light information of the same pixel point in the overlapping area and/or the extended area of the stitched image.
The present application further provides an electronic device, including: a housing, a processor, a memory, a circuit board, and a power circuit, wherein the circuit board is disposed inside the space enclosed by the housing, the processor and the memory are disposed on the circuit board, the power circuit is used to supply power to the respective circuits or devices, the memory is used to store executable program code, and the processor runs the executable program code stored in the memory to perform the image stitching method provided by the embodiments of the present application.
The present application further provides an application program for performing, at runtime, the image stitching method provided by the embodiments of the present application.
The present application further provides a storage medium for storing executable program code, where the executable program code is run to perform the image stitching method provided by the embodiments of the present application.
In the embodiments of the present application, a first image captured by a first camera and a second image captured by a second camera are acquired, wherein the second image is an RGB-D image including depth information and visible light information; the pixel points in the second image are mapped onto the first image; and the mapped pixel points of the second image are stitched onto the first image by means of the depth information and the visible light information of the pixel points in the second image, to obtain a stitched image. Since the visible light information and the depth information are acquired simultaneously by an RGB-D camera, there is no need to align the depth information of the pixel points in the overlapping area with the visible light information, thereby achieving the technical effect of improving image stitching efficiency and solving the technical problem of low stitching efficiency in the image stitching process of the related art.
Brief Description of the Drawings
The drawings described here are provided for a further understanding of the present application and constitute a part of the present application; the illustrative embodiments of the present application and their description are used to explain the present application and do not constitute an undue limitation on the present application. In the drawings:
Fig. 1 is a flowchart of an optional image stitching method according to Embodiment 1 of the present application;
Fig. 2 is a schematic diagram of an optional pixel point mapping according to Embodiment 1 of the present application;
Fig. 3 is a schematic diagram of another optional image stitching method according to Embodiment 1 of the present application;
Fig. 4 is a schematic diagram of another optional pixel point mapping according to Embodiment 1 of the present application;
Fig. 5 is a schematic diagram of pixel points in the presence of occlusion according to Embodiment 1 of the present application;
Fig. 6 is a schematic diagram of an optional image stitching device according to Embodiment 2 of the present application; and
Fig. 7 is a schematic structural diagram of an electronic device according to an optional embodiment of the present application.
Detailed Description
In order to enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in the present application without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first", "second", etc. in the specification, claims, and drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present application described here can be implemented in an order other than that illustrated or described here. In addition, the terms "comprising" and "having", and any variations thereof, are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to such a process, method, product, or device.
Embodiment 1
According to the embodiments of the present application, an embodiment of an image stitching method is provided. It should be noted that the steps shown in the flowchart of the drawings may be executed in a computer system such as a set of computer-executable instructions, and, although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from that here.
Fig. 1 is a flowchart of an optional image stitching method according to Embodiment 1 of the present application. As shown in Fig. 1, the method includes the following steps.
Step S12: acquiring a first image captured by a first camera and a second image captured by a second camera, wherein the first image is an image with visible light information, the second image is an image including depth information and visible light information, the first camera is disposed adjacent to the second camera, and the captured first image and second image have an overlapping area.
Specifically, in step S12, the first image may be the target image in image stitching, the second image may be the image to be stitched, and the number of images to be stitched may be one or more. When there are multiple images to be stitched, the stitching steps are the same as those for stitching the second image onto the first image. The first image may be a visible light image captured by a visible light camera; in that case the first image can be used directly as part of the target image, which saves computation and camera cost. The second camera may be an RGB-D camera, and the second image it captures may be an RGB-D image, which includes a visible light image and a depth image; that is, any pixel point in the second image has both visible light information and depth information. The first camera and the second camera may be binocular cameras.
It should be noted that stitching the second image onto the first image requires an overlapping area between the second image and the first image. Since any pixel point in the second image carries both depth information and visible light information in the RGB-D image, the pixel points of the overlapping area need no alignment between depth information and visible light information during mapping, which achieves the purpose of improving stitching efficiency.
Step S14: mapping the pixel points in the second image onto the overlapping area and/or the extended area.
Specifically, coordinate transformation can be used to map the pixel points in the second image (the image to be stitched) onto the first image (the target image). Since the mapping process is the same for every pixel point of the second image — that is, in the embodiments of the present application, the pixel points of the overlapping area and of the extended area are mapped by the same method — the problems of ghosting in the transition zone and of possible abrupt changes in the transition between the overlapping area and the extended area are solved, and the quality of fusion in the transition between the overlapping area and the extended area of the stitched image is improved.
It should be noted that stitching the second image onto the first image requires an overlapping area between them. In the solution provided by the present application, since the mapping process is the same for every pixel point in the second image, no feature point matching is needed; therefore the area of the overlapping region can be made as small as possible, so that, with the same number of stitching operations, a stitched image covering a larger field of view can be obtained.
Step S16: stitching the mapped pixel points of the second image onto the overlapping area and/or the extended area by means of the depth information and the visible light information of the pixel points in the second image, to obtain a stitched image.
Specifically, when a mapped pixel point of the second image falls in the overlapping area, the pixel point may have two pieces of visible light information and one piece of depth information; that is, for pixel points falling in the overlapping area, the numbers of pieces of visible light information and of depth information differ. When a mapped pixel point of the second image falls in the extended area, the pixel point may have one piece of visible light information and one piece of depth information; or, when occlusion occurs among pixel points of the extended area, one pixel point may have multiple pieces of visible light information and multiple pieces of depth information. That is, a pixel point falling in the extended area has equal numbers of pieces of visible light information and depth information.
It should be noted that, when a mapped pixel point of the second image falls in the overlapping area, a weighting operation can be performed on the two pieces of visible light information, and the result of the weighting operation is used as the visible light information of the pixel point in the stitched image. When a mapped pixel point falls in the extended area, the visible light information of the pixel point in the second image can be used as its visible light information in the stitched image. When multiple pixel points of the second image fall on the extended area after mapping and occlusion occurs, the multiple pieces of depth information of the multiple pixel points can be compared, and the visible light information of the pixel point with the smallest depth information is used as the visible light information of the pixel in the stitched image.
It should also be noted that the present application determines the visible light information of a pixel point in the stitched image from both its depth information and its visible light information, which solves the problem of foreground and background occlusion during stitching and achieves the purpose of improving stitching quality.
In an optional application scenario, the image stitching method provided by steps S12 to S16 may include the following. Fig. 2 is a schematic diagram of an optional pixel point mapping according to Embodiment 1 of the present application. In Fig. 2, the first image is a visible light image obtained from a visible light camera, and the second to N-th images are RGB-D images obtained from RGB-D cameras. The first image is the target image, the second to N-th images are images to be stitched, and the N-th image is stitched onto the first image by the same stitching method as the second image. Taking the stitching of the second image onto the first image as an example, the dashed rectangle in Fig. 2 represents the stitched image after the second image is stitched onto the first image; the plane of the stitched image may be the camera plane of the first image, so the first image captured by the first camera requires no computation and can be mapped directly onto the stitched image. A first spatial point P1 is imaged at pixel point P11 in the first image and at pixel point P21 in the second image; a second spatial point P2 is imaged at pixel point P12 in the first image and at pixel point P22 in the second image. Mapping the first spatial point P1 means mapping P21 to P11. Stitching the first spatial point P1 means fusing P21 and P11, which lie in the overlapping area, by performing a weighting operation on the visible light information of P21 and that of P11 to determine the visible light information of P1 in the stitched image. Mapping the second spatial point P2 means mapping P22 to P12. Stitching the second spatial point P2 means stitching P22 and P12, which lie in the extended area: when there is no occlusion, the visible light information of P22 is assigned to P12; when there is occlusion, the visible light information of P12 is determined according to the depth information.
In another optional application scenario, when there are multiple second images, i.e., multiple images to be stitched, the images to be stitched (second images) can be stitched onto the first image one by one, following the above process of stitching the second image onto the first image. Fig. 3 is a schematic diagram of another optional image stitching method according to Embodiment 1 of the present application. As shown in Fig. 3, when there are multiple images to be stitched (the second image to the N-th image), the image stitching method provided by the present application may first stitch the second image onto the first image and, after the second image has been stitched onto the first image, stitch the N-th image onto the first image; a skeleton of this sequential scheme is sketched below.
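In the sketch, stitch_one is a hypothetical helper standing for the per-image mapping and splicing steps described in this embodiment, and the canvas size is an arbitrary assumption.

```python
import numpy as np

def stitch_one(pano, zbuf, rgbd_image, calib):
    """Hypothetical per-image step: map every pixel of one RGB-D image
    (formulas 1-3, or the fourth formula) and splice it (weighted
    fusion / depth comparison)."""
    # ... mapping and splicing as described above ...
    return pano, zbuf

def stitch_all(first_image, rgbd_images, calibrations):
    h, w = first_image.shape[:2]
    pano = np.zeros((h, 2 * w, 3))        # assumed canvas size
    pano[:, :w] = first_image             # the target image maps directly
    zbuf = np.full((h, 2 * w), np.inf)    # depth buffer for occlusions
    for rgbd, calib in zip(rgbd_images, calibrations):
        pano, zbuf = stitch_one(pano, zbuf, rgbd, calib)
    return pano
```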
In the embodiments of the present application, through steps S12 to S16 above, a first image captured by a first camera and a second image captured by a second camera are acquired, wherein the first image is an image with visible light information, the second image is an image including depth information and visible light information, the first camera is disposed adjacent to the second camera, and the captured first image and second image have an overlapping area; the pixel points in the second image are mapped onto the overlapping area and/or the extended area, wherein the extended area is the image area outside the first image to which pixel points in the second image are mapped; and the mapped pixel points of the second image are stitched onto the overlapping area and/or the extended area by means of the depth information and the visible light information of the pixel points in the second image, to obtain a stitched image. Since the visible light information and the depth information are acquired simultaneously by an RGB-D camera, there is no need to align the depth information of the pixel points in the overlapping area with the visible light information, thereby achieving the technical effect of improving image stitching efficiency and solving the technical problem of low stitching efficiency in the image stitching process of the related art.
Optionally, step S14 of mapping the pixel points in the second image onto the overlapping area and/or the extended area may include:
Step S141: reading the coordinate information of the pixel points in the second image in the second image coordinate system.
Step S143: mapping, by coordinate transformation, the coordinate information of the pixel points in the second image from the second image coordinate system to the first image coordinate system, to obtain the coordinate information of the pixel points in the second image in the first image coordinate system.
Step S145: determining, from the coordinate information of the pixel points in the second image in the first image coordinate system, the positions of the pixel points in the second image in the overlapping area and/or the extended area.
Specifically, in steps S141 to S145, the first image coordinate system and the second image coordinate system are two-dimensional image coordinate systems; coordinate transformation is used to map the coordinate information of the pixel points in the second image into the first image coordinate system, thereby obtaining the coordinate information of the pixel points in the second image in the first image coordinate system.
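For step S145, since the extended area is by definition the part mapped outside the first image, a mapped pixel can be classified by a simple bounds test in the first image coordinate system. A minimal sketch (the region names are descriptive, not from the original):

```python
def classify_mapped_pixel(u1, v1, width1, height1):
    """Step S145 sketch: a mapped pixel landing inside the first image
    bounds lies in the overlapping area, otherwise in the extended area."""
    inside = 0 <= u1 < width1 and 0 <= v1 < height1
    return "overlapping" if inside else "extended"
```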
In an optional application scenario, Fig. 4 is a schematic diagram of another optional pixel point mapping according to Embodiment 1 of the present application. For an arbitrary point P in space, the coordinate information of its imaged pixel point in the first image, in the first image coordinate system, is m1(u1, v1), and the coordinate information of its imaged pixel point in the second image, in the second image coordinate system, is m2(u2, v2). O1 and O2 are the optical centers of the first camera and the second camera; Z1 is the distance, in the first camera coordinate system, from the pixel point of P in the first image to the world coordinate system (i.e., the depth information of point m1 in the first camera coordinate system), and Z2 is the distance, in the second camera coordinate system, from the pixel point of P in the second image to the world coordinate system (i.e., the depth information of point m2 in the second camera coordinate system).
Optionally, step S143 of mapping, by coordinate transformation, the coordinate information of the pixel points in the second image from the second image coordinate system to the first image coordinate system may include:
Step S1431: mapping the coordinate information of the pixel points in the second image from the second image coordinate system to the second camera coordinate system, to obtain the coordinate information of the pixel points in the second image in the second camera coordinate system.
Step S1433: mapping the coordinate information of the pixel points in the second image from the second camera coordinate system to the first camera coordinate system, to obtain the coordinate information of the pixel points in the second image in the first camera coordinate system.
Step S1435: mapping the coordinate information of the pixel points in the second image from the first camera coordinate system to the first image coordinate system, to obtain the coordinate information of the pixel points in the second image in the first image coordinate system.
Specifically, coordinate transformation is used to map the coordinate information of the pixel points in the second image into the first image coordinate system, thereby obtaining the coordinate information of the pixel points in the second image in the first image coordinate system.
Optionally, the coordinate information m2(u2, v2) of a pixel point in the second image in the second image coordinate system is mapped to the second camera coordinate system by the following first formula, to obtain the coordinate information m2(X2, Y2, Z2) of the pixel point in the second camera coordinate system:

$$\begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \end{bmatrix} = D_2 A_2^{-1} \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix}$$

where X2 is the abscissa of the pixel point in the second image in the second camera coordinate system, Y2 is the ordinate of the pixel point in the second image in the second camera coordinate system, and Z2 is the depth information of the pixel point in the second image in the second camera coordinate system; A2 is the internal parameter matrix of the second camera; A2⁻¹ is the inverse matrix of A2; D2 is the scale factor of the second image; u2 is the abscissa of the pixel point in the second image in the second image coordinate system, and v2 is the ordinate of the pixel point in the second image in the second image coordinate system.
Optionally, the coordinate information m2(X2, Y2, Z2) of the pixel point in the second camera coordinate system is mapped to the first camera coordinate system by the following second formula, to obtain the coordinate information m1(X1, Y1, Z1) of the pixel point in the first camera coordinate system:

$$\begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ O^{T} & 1 \end{bmatrix} \begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \\ 1 \end{bmatrix}$$

where X1 is the abscissa of the pixel point in the second image in the first camera coordinate system, Y1 is the ordinate of the pixel point in the second image in the first camera coordinate system, and Z1 is the depth information of the pixel point in the second image in the first camera coordinate system; R is the rotation matrix between the first camera coordinate system and the second camera coordinate system; t is the relative translation vector between the first camera coordinate system and the second camera coordinate system; O^T is a zero matrix; X2 is the abscissa of the pixel point in the second image in the second camera coordinate system, Y2 is the ordinate of the pixel point in the second image in the second camera coordinate system, and Z2 is the depth information of the pixel point in the second image in the second camera coordinate system.
Optionally, the coordinate information m1(X1, Y1, Z1) of the pixel point in the first camera coordinate system is mapped to the first image coordinate system by the following third formula, to obtain the coordinate information m1(u1, v1) of the pixel point in the first image coordinate system:

$$D_1 \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = A_1 \begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \end{bmatrix}$$

where u1 is the abscissa of the pixel point in the second image in the first image coordinate system, and v1 is the ordinate of the pixel point in the second image in the first image coordinate system; A1 is the internal parameter matrix of the first camera; D1 is the scale factor of the first image; X1 is the abscissa of the pixel point in the second image in the first camera coordinate system, Y1 is the ordinate of the pixel point in the second image in the first camera coordinate system, and Z1 is the depth information of the pixel point in the second image in the first camera coordinate system.
It should be noted that the first, second, and third formulas above correspond to steps S1431, S1433, and S1435, respectively. Using coordinate transformation, the coordinate information of the pixel points in the second image is mapped into the first image coordinate system, thereby obtaining the coordinate information of the pixel points in the second image in the first image coordinate system.
It should also be noted that, as an equivalent formulation of the present application, the above process of mapping the coordinate information of the pixel points in the second image into the first image coordinate system by coordinate transformation can also be computed as follows.
The coordinate information of a pixel point in the second image in the second image coordinate system is mapped to the first image coordinate system by the following fourth formula, to obtain the coordinate information m1(u1, v1) of the pixel point in the first image coordinate system:

$$m_1 = \left( Z_2\, A_1 R A_2^{-1} m_2 + A_1 t \right) / Z_1$$

where Z2 is the depth information of the pixel point in the second image in the second camera coordinate system, A1 is the internal parameter matrix of the first camera, R is the rotation matrix between the first camera coordinate system and the second camera coordinate system, A2⁻¹ is the inverse matrix of A2, m2 is the coordinate information of the pixel point in the second image in the second image coordinate system, t is the relative translation vector between the first camera coordinate system and the second camera coordinate system, and Z1 is the depth information of the pixel point in the second image in the first camera coordinate system. The depth information Z2 of the pixel point in the second camera coordinate system can be acquired from the depth information of the RGB-D image, and the depth information Z1 of the pixel point in the first camera coordinate system can be calculated by the first and second formulas above. Once Z1 is calculated, the coordinate information m1(u1, v1) of the pixel point in the second image in the first image coordinate system can be obtained by the fourth formula.
It should also be noted that the internal parameter matrix A1 of the first camera, the internal parameter matrix A2 of the second camera, the rotation matrix R between the first camera coordinate system and the second camera coordinate system, and the relative translation vector t between the first camera coordinate system and the second camera coordinate system can be obtained by calibration.
It should also be noted that equations are established from the relationship between points on the image and the world coordinate system during camera imaging, and scaled orthographic projection is used to determine the internal parameters A1 and A2 and the external parameters R and t of the cameras. The external parameter calibration process may be as follows: assuming the target surfaces of the first camera and the second camera face the same direction, lie in the same plane of the world coordinate system, and have their baselines aligned, the camera coordinate systems of the two cameras are related only by a translation, with no rotation, so the external parameter calibration can be completed by measuring the baseline distance between the two cameras. The internal parameter calibration process may use Zhang Zhengyou's calibration method: the internal parameters of the visible light camera are calibrated with an ordinary black-and-white checkerboard calibration board, and the internal parameters of the infrared camera are calibrated with a checkerboard calibration board specially designed for infrared cameras; the latter board may be similar to the visible light calibration board described above, with the white squares coated with a highly reflective material and the black squares coated with a material of very low reflectivity.
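A sketch of how such a calibration might look with OpenCV is given below; the tooling choice is an assumption (the patent itself only requires Zhang's method and, for the extrinsics, a measured baseline). objpoints, imgpoints1 and imgpoints2 are assumed to hold checkerboard corners detected for both cameras.

```python
import cv2

def calibrate_pair(objpoints, imgpoints1, imgpoints2, image_size):
    """Intrinsics A1, A2 via Zhang's method, then extrinsics R, t
    between the two camera coordinate systems."""
    _, A1, d1, _, _ = cv2.calibrateCamera(
        objpoints, imgpoints1, image_size, None, None)
    _, A2, d2, _, _ = cv2.calibrateCamera(
        objpoints, imgpoints2, image_size, None, None)
    _, A1, d1, A2, d2, R, t, _, _ = cv2.stereoCalibrate(
        objpoints, imgpoints1, imgpoints2, A1, d1, A2, d2,
        image_size, flags=cv2.CALIB_FIX_INTRINSIC)
    return A1, A2, R, t
```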
Optionally, step S16 of stitching the mapped pixel points of the second image onto the overlapping area and/or the extended area by means of the depth information and the visible light information of the pixel points in the second image may include:
Step S161: when a pixel point in the second image is mapped to the overlapping area of the first image and the second image, performing a weighting operation on the visible light information of the pixel point in the first image and the visible light information of the pixel point in the second image, and assigning the weighted visible light information to the visible light information of the pixel point in the overlapping area of the stitched image.
Specifically, in step S161, after a pixel point in the second image is mapped onto the first image, whether the pixel point is mapped to the overlapping area of the first image and the second image can be determined by judging whether the numbers of pieces of its visible light information and depth information are the same. A pixel point in the overlapping area has at least two pieces of visible light information, which can be fused by weighting to obtain the visible light information of the pixel point in the stitched image.
Step S163: when a pixel point in the second image is mapped to the extended area, assigning the visible light information of the pixel point in the second image to the visible light information of the pixel point in the extended area of the stitched image.
Specifically, in step S163, when a pixel point of the second image is mapped to the extended area, the visible light information of the pixel point of the second image can be directly assigned to the visible light information of that pixel point in the stitched image.
It should also be noted that the present application determines the visible light information of a pixel point in the stitched image from both its depth information and visible light information, which solves the problem of foreground and background occlusion during stitching and achieves the purpose of improving stitching quality.
Optionally, step S16 of stitching the mapped pixel points of the second image onto the overlapping area and/or the extended area by means of the depth information and the visible light information of the pixel points in the second image, to obtain a stitched image, may include:
Step S165: judging, from the coordinate information of the mapped pixel points of the second image in the first image coordinate system, whether multiple pixel points in the second image are simultaneously mapped to the same pixel point of the overlapping area and/or the extended area.
Step S167: when multiple pixel points in the second image are simultaneously mapped to the same pixel point of the overlapping area and/or the extended area, determining, according to the multiple pieces of depth information of the multiple pixel points in the second image, the visible light information of that same pixel point in the overlapping area and/or the extended area of the stitched image.
Specifically, when a mapped pixel point of the second image has multiple pieces of depth information, it can be determined that multiple pixel points in the second image are simultaneously mapped to the same pixel point of the extended area. When multiple pixel points are mapped to the same pixel point simultaneously, the foreground/background occlusion problem arises.
Optionally, step S167 of determining, according to the multiple pieces of depth information of the multiple pixel points in the second image, the visible light information of the same pixel point in the overlapping area and/or the extended area of the stitched image may include:
Step S1671: comparing the multiple pieces of depth information of the multiple pixel points in the second image, and assigning the visible light information of the pixel point with the smallest depth information among the multiple pieces of depth information to the visible light information of the same pixel point in the overlapping area and/or the extended area of the stitched image.
Specifically, in practice, from a given viewpoint only the visible light information of the point closer to the viewpoint can be seen; therefore, the visible light information with the smaller depth information can be used as the visible light information of the same pixel point in the overlapping area and/or the extended area of the stitched image.
Step S1673: performing a weighting operation on the visible light information of the multiple pixel points in the second image, and assigning the weighted visible light information to the visible light information of the same pixel point in the overlapping area and/or the extended area of the stitched image.
Specifically, for multiple pixel points, a pixel point with smaller depth information is given a larger weight value; a weighting operation is performed over the multiple pieces of visible light information of the multiple pixel points, and the weighted visible light information is used as the visible light information of the pixel point of the overlapping area and/or the extended area in the stitched image.
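One simple realisation of step S1673 is to weight each contribution by inverse depth, so that nearer points dominate; the inverse-depth choice is an assumption, since the patent only requires that smaller depth receive larger weight.

```python
import numpy as np

def fuse_by_depth(colors, depths):
    """Step S1673 sketch: weighted fusion with weights that grow as the
    depth shrinks (here, proportional to 1/depth)."""
    w = 1.0 / np.asarray(depths, dtype=float)
    w /= w.sum()
    return np.tensordot(w, np.asarray(colors, dtype=float), axes=1)
```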
In an optional application scenario, Fig. 5 is a schematic diagram of pixel points in the presence of occlusion according to Embodiment 1 of the present application. In Fig. 5, Pi and Pj are two points in space. The coordinate information of Pi in the second image coordinate system is mi(ui, vi), and the coordinate information of Pj in the second image coordinate system is mj(uj, vj). After coordinate transformation, mi(ui, vi) is mapped to m(u, v) in the first image coordinate system, and mj(uj, vj) is likewise mapped to m(u, v) in the first image coordinate system. Here ui is the abscissa and vi the ordinate of Pi in the second image coordinate system, uj is the abscissa and vj the ordinate of Pj in the second image coordinate system, and u is the abscissa and v the ordinate of Pi/Pj in the first image coordinate system. In practice, from the viewpoint O1 only the visible light information of the point closer to O1 can be seen; therefore, the visible light information with the smaller depth information can be assigned to the point m(u, v). Alternatively, for multiple pixel points, a pixel point with smaller depth information is given a larger weight value, a weighting operation is performed over the multiple pieces of visible light information of the multiple pixel points, and the weighted visible light information is used as the visible light information of the pixel point of the extended area in the stitched image.
Embodiment 2
According to the embodiments of the present application, a device embodiment of an image stitching device is further provided. It should be noted that the image stitching device can be used to implement the image stitching method of the embodiments of the present application, and the image stitching method of the embodiments of the present application can also be executed by the image stitching device; what has already been explained in the method embodiments of the present application is not repeated here.
Fig. 6 is a schematic diagram of an optional image stitching device according to Embodiment 2 of the present application. As shown in Fig. 6, the device includes:
The acquisition unit 40 is configured to acquire a first image captured by a first camera and a second image captured by a second camera, wherein the first image is an image with visible light information, the second image is an image including depth information and visible light information, the first camera is disposed adjacent to the second camera, and the captured first image and second image have an overlapping area.
Specifically, in the acquisition unit 40, the first image may be the target image in image stitching, the second image may be the image to be stitched, and the number of images to be stitched may be one or more. When there are multiple images to be stitched, the stitching process is the same as that of stitching the second image onto the first image. The first image may be a visible light image captured by a visible light camera; in that case the first image can be used as part of the target image, which saves computation and camera cost. The second camera may be an RGB-D camera, and the captured second image may be an RGB-D image, which includes a visible light image and a depth image; that is, any pixel point in the second image has both visible light information and depth information.
It should be noted that stitching the second image onto the first image requires an overlapping area between the second image and the first image. Since any pixel point in the second image carries both depth information and visible light information in the RGB-D image, the pixel points of the overlapping area need no alignment between depth information and visible light information during mapping, which achieves the purpose of improving stitching efficiency.
The mapping unit 42 is configured to map the pixel points in the second image onto the overlapping area and/or the extended area.
Specifically, coordinate transformation can be used to map the pixel points in the second image (the image to be stitched) onto the first image (the target image). Since the mapping process is the same for every pixel point of the second image — that is, in the embodiments of the present application, the pixel points of the overlapping area and of the extended area are mapped by the same method — the problems of ghosting in the transition zone and of possible abrupt changes in the transition between the overlapping area and the extended area of the stitched image are solved, and the quality of fusion in the transition between the overlapping area and the extended area is improved.
It should be noted that stitching the second image onto the first image requires an overlapping area between them. In the solution provided by the present application, since the mapping process is the same for every pixel point in the second image, no feature point matching is needed; therefore the area of the overlapping region can be made as small as possible, so that, with the same number of stitching operations, a stitched image covering a larger field of view can be obtained.
The splicing unit 44 is configured to stitch the mapped pixel points of the second image onto the overlapping area and/or the extended area by means of the depth information and the visible light information of the pixel points in the second image, to obtain a stitched image.
Specifically, when a mapped pixel point of the second image falls in the overlapping area, the pixel point may have two pieces of visible light information and one piece of depth information; that is, for pixel points falling in the overlapping area, the numbers of pieces of visible light information and of depth information differ. When a mapped pixel point of the second image falls in the extended area, the pixel point may have one piece of visible light information and one piece of depth information; or, when occlusion occurs among pixel points of the extended area, one pixel point may have multiple pieces of visible light information and multiple pieces of depth information. That is, a pixel point falling in the extended area has equal numbers of pieces of visible light information and depth information.
It should be noted that, when a mapped pixel point of the second image falls in the overlapping area, a weighting operation can be performed on the two pieces of visible light information, and the result of the weighting operation is used as the visible light information of the pixel point in the stitched image. When a mapped pixel point falls in the extended area, the visible light information of the pixel point in the second image can be used as its visible light information in the stitched image. When multiple pixel points of the second image fall on the extended area after mapping and occlusion occurs, the multiple pieces of depth information of the multiple pixel points can be compared, and the visible light information of the pixel point with the smallest depth information is used as the visible light information of the pixel in the stitched image.
It should also be noted that the present application determines the visible light information of a pixel point in the stitched image from both its depth information and visible light information, which solves the problem of foreground and background occlusion during stitching and achieves the purpose of improving stitching quality.
Through the acquisition unit 40 configured to acquire the first image captured by the first camera and the second image captured by the second camera, wherein the first image is an image with visible light information, the second image is an image including depth information and visible light information, the first camera is disposed adjacent to the second camera, and the captured first image and second image have an overlapping area; the mapping unit 42 configured to map the pixel points in the second image onto the overlapping area and/or the extended area; and the splicing unit 44 configured to stitch the mapped pixel points of the second image onto the overlapping area and/or the extended area by means of the depth information and the visible light information of the pixel points in the second image to obtain a stitched image, the embodiments of the present application solve the technical problem of low stitching efficiency in the image stitching process of the related art.
Optionally, the mapping unit 42 may include:
a reading module configured to read the coordinate information of the pixel points in the second image in the second image coordinate system;
a coordinate transformation module configured to map, by coordinate transformation, the coordinate information of the pixel points in the second image from the second image coordinate system to the first image coordinate system, to obtain the coordinate information of the pixel points in the second image in the first image coordinate system;
a determining module configured to determine, from the coordinate information of the pixel points in the second image in the first image coordinate system, the positions of the pixel points in the second image in the overlapping area and/or the extended area.
Optionally, the coordinate transformation module includes:
a first mapping sub-module configured to map the coordinate information of the pixel points in the second image from the second image coordinate system to the second camera coordinate system, to obtain the coordinate information of the pixel points in the second image in the second camera coordinate system;
a second mapping sub-module configured to map the coordinate information of the pixel points in the second image from the second camera coordinate system to the first camera coordinate system, to obtain the coordinate information of the pixel points in the second image in the first camera coordinate system;
a third mapping sub-module configured to map the coordinate information of the pixel points in the second image from the first camera coordinate system to the first image coordinate system, to obtain the coordinate information of the pixel points in the second image in the first image coordinate system.
Optionally, the first mapping sub-module calculates, by the following first formula, the coordinate information m2(X2, Y2, Z2) of a pixel point in the second image in the second camera coordinate system:

$$\begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \end{bmatrix} = D_2 A_2^{-1} \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix}$$

where X2 is the abscissa of the pixel point in the second image in the second camera coordinate system, Y2 is the ordinate of the pixel point in the second image in the second camera coordinate system, and Z2 is the depth information of the pixel point in the second image in the second camera coordinate system; A2 is the internal parameter matrix of the second camera; A2⁻¹ is the inverse matrix of A2; D2 is the scale factor of the second image; u2 is the abscissa of the pixel point in the second image in the second image coordinate system, and v2 is the ordinate of the pixel point in the second image in the second image coordinate system.
Optionally, the second mapping sub-module calculates, by the following second formula, the coordinate information m1(X1, Y1, Z1) of a pixel point in the second image in the first camera coordinate system:

$$\begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ O^{T} & 1 \end{bmatrix} \begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \\ 1 \end{bmatrix}$$

where X1 is the abscissa of the pixel point in the second image in the first camera coordinate system, Y1 is the ordinate of the pixel point in the second image in the first camera coordinate system, and Z1 is the depth information of the pixel point in the second image in the first camera coordinate system; R is the rotation matrix between the first camera coordinate system and the second camera coordinate system; t is the relative translation vector between the first camera coordinate system and the second camera coordinate system; O^T is a zero matrix; X2 is the abscissa of the pixel point in the second image in the second camera coordinate system, Y2 is the ordinate of the pixel point in the second image in the second camera coordinate system, and Z2 is the depth information of the pixel point in the second image in the second camera coordinate system.
It should be noted that the first, second, and third formulas above correspond to the first mapping sub-module, the second mapping sub-module, and the third mapping sub-module, respectively. Using coordinate transformation, the coordinate information of the pixel points in the second image is mapped into the first image coordinate system, thereby obtaining the coordinate information of the pixel points in the second image in the first image coordinate system.
It should also be noted that, as an equivalent formulation of the present application, the above coordinate transformation module, which maps the coordinate information of the pixel points in the second image into the first image coordinate system by coordinate transformation to obtain their coordinate information in the first image coordinate system, can also compute as follows.
The coordinate information of a pixel point in the second image in the second image coordinate system is mapped to the first image coordinate system by the following fourth formula, to obtain the coordinate information m1(u1, v1) of the pixel point in the first image coordinate system:

$$m_1 = \left( Z_2\, A_1 R A_2^{-1} m_2 + A_1 t \right) / Z_1$$

where Z2 is the depth information of the pixel point in the second image in the second camera coordinate system, A1 is the internal parameter matrix of the first camera, R is the rotation matrix between the first camera coordinate system and the second camera coordinate system, A2⁻¹ is the inverse matrix of A2, m2 is the coordinate information of the pixel point in the second image in the second image coordinate system, t is the relative translation vector between the first camera coordinate system and the second camera coordinate system, and Z1 is the depth information of the pixel point in the second image in the first camera coordinate system. The depth information Z2 of the pixel point in the second camera coordinate system can be acquired from the depth information of the RGB-D image, and the depth information Z1 of the pixel point in the first camera coordinate system can be calculated by the first and second formulas above. Once Z1 is calculated, the coordinate information m1(u1, v1) of the pixel point in the second image in the first image coordinate system can be obtained by the fourth formula.
It should also be noted that the internal parameter matrix A1 of the first camera, the internal parameter matrix A2 of the second camera, the rotation matrix R between the first camera coordinate system and the second camera coordinate system, and the relative translation vector t between the first camera coordinate system and the second camera coordinate system can be obtained by calibration.
It should also be noted that equations are established from the relationship between points on the image and the world coordinate system during camera imaging, and scaled orthographic projection is used to determine the external parameters R and t of the cameras.
Optionally, the third mapping sub-module calculates, by the following third formula, the coordinate information m1(u1, v1) of a pixel point in the second image in the first image coordinate system:

$$D_1 \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = A_1 \begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \end{bmatrix}$$

where u1 is the abscissa of the pixel point in the second image in the first image coordinate system, and v1 is the ordinate of the pixel point in the second image in the first image coordinate system; A1 is the internal parameter matrix of the first camera; D1 is the scale factor of the first image; X1 is the abscissa of the pixel point in the second image in the first camera coordinate system, Y1 is the ordinate of the pixel point in the second image in the first camera coordinate system, and Z1 is the depth information of the pixel point in the second image in the first camera coordinate system.
Optionally, the splicing unit 44 includes:
a first splicing module configured to, when a pixel point in the second image is mapped to the overlapping area of the first image and the second image, perform a weighting operation on the visible light information of the pixel point in the first image and the visible light information of the pixel point in the second image, and assign the weighted visible light information to the visible light information of the pixel point in the overlapping area of the stitched image.
Specifically, after a pixel point in the second image is mapped onto the first image, whether the pixel point is mapped to the overlapping area of the first image and the second image can be determined by judging whether the numbers of pieces of its visible light information and depth information are the same. A pixel point in the overlapping area has at least two pieces of visible light information, which can be fused by weighting to obtain the visible light information of the pixel point in the stitched image.
a second splicing module configured to, when a pixel point in the second image is mapped to the extended area, assign the visible light information of the pixel point in the second image to the visible light information of the pixel point in the extended area of the stitched image.
Specifically, when a pixel point of the second image is mapped to the extended area, the visible light information of the pixel point of the second image can be directly assigned to the visible light information of that pixel point in the stitched image.
It should also be noted that the present application determines the visible light information of a pixel point in the stitched image from both its depth information and visible light information, which solves the problem of foreground and background occlusion during stitching and achieves the purpose of improving stitching quality.
Optionally, the splicing unit may further include:
a judging module configured to judge, from the coordinate information of the mapped pixel points of the second image in the first image coordinate system, whether multiple pixel points in the second image are simultaneously mapped to the same pixel point of the overlapping area and/or the extended area;
a determining module configured to, when multiple pixel points in the second image are simultaneously mapped to the same pixel point of the overlapping area and/or the extended area, determine, according to the multiple pieces of depth information of the multiple pixel points in the second image, the visible light information of that same pixel point in the overlapping area and/or the extended area of the stitched image.
Specifically, when a mapped pixel point of the second image has multiple pieces of depth information, it can be determined that multiple pixel points in the second image are simultaneously mapped to the same pixel point of the extended area. When multiple pixel points are mapped to the same pixel point simultaneously, the foreground/background occlusion problem arises.
Optionally, the determining module may include:
a comparison sub-module configured to compare the multiple pieces of depth information of the multiple pixel points in the second image, and assign the visible light information of the pixel point with the smallest depth information among the multiple pieces of depth information to the visible light information of the same pixel point in the overlapping area and/or the extended area of the stitched image.
Specifically, in practice, from a given viewpoint only the visible light information of the point closer to the viewpoint can be seen; therefore, the visible light information with the smaller depth information can be used as the visible light information of the same pixel point in the stitched image.
a weighting sub-module configured to perform a weighting operation on the visible light information of the multiple pixel points in the second image, and assign the weighted visible light information to the visible light information of the same pixel point in the overlapping area and/or the extended area of the stitched image.
Specifically, for multiple pixel points, a pixel point with smaller depth information is given a larger weight value; a weighting operation is performed over the multiple pieces of visible light information of the multiple pixel points, and the weighted visible light information is used as the visible light information of the pixel points of the overlapping area and/or the extended area in the stitched image.
As shown in Fig. 7, an embodiment of the present application further provides an electronic device, including: a housing 701, a processor 702, a memory 703, a circuit board 704, and a power circuit 705, wherein the circuit board 704 is disposed inside the space enclosed by the housing 701, the processor 702 and the memory 703 are disposed on the circuit board 704, the power circuit 705 is used to supply power to the respective circuits or devices, the memory 703 is used to store executable program code, and the processor 702 runs the executable program code stored in the memory to perform the image stitching method provided by the embodiments of the present application, the method including:
acquiring a first image captured by a first camera and a second image captured by a second camera, wherein the first image is an image with visible light information, the second image is an image including depth information and the visible light information, the first camera is disposed adjacent to the second camera, and the captured first image and second image have an overlapping area;
mapping the pixel points in the second image onto the overlapping area and/or an extended area, wherein the extended area is the image area outside the first image to which pixel points in the second image are mapped;
stitching the mapped pixel points of the second image onto the overlapping area and/or the extended area by means of the depth information and the visible light information of the pixel points in the second image, to obtain a stitched image.
In this embodiment, the processor of the electronic device runs the executable program code stored in the memory to execute the image stitching method of the present application described above, thereby achieving the technical effect of improving image stitching efficiency and solving the technical problem of low stitching efficiency in the image stitching process of the related art.
An embodiment of the present application further provides an application program for performing, at runtime, the image stitching method, where the image stitching method may include:
acquiring a first image captured by a first camera and a second image captured by a second camera, wherein the first image is an image with visible light information, the second image is an image including depth information and the visible light information, the first camera is disposed adjacent to the second camera, and the captured first image and second image have an overlapping area;
mapping the pixel points in the second image onto the overlapping area and/or an extended area, wherein the extended area is the image area outside the first image to which pixel points in the second image are mapped;
stitching the mapped pixel points of the second image onto the overlapping area and/or the extended area by means of the depth information and the visible light information of the pixel points in the second image, to obtain a stitched image.
In this embodiment, the application program is used to perform, at runtime, the image stitching method of the present application described above, thereby achieving the technical effect of improving image stitching efficiency and solving the technical problem of low stitching efficiency in the image stitching process of the related art.
An embodiment of the present application further provides a storage medium for storing executable program code, where the executable program code is run to perform the image stitching method, and where the image stitching method may include:
acquiring a first image captured by a first camera and a second image captured by a second camera, wherein the first image is an image with visible light information, the second image is an image including depth information and the visible light information, the first camera is disposed adjacent to the second camera, and the captured first image and second image have an overlapping area;
mapping the pixel points in the second image onto the overlapping area and/or an extended area, wherein the extended area is the image area outside the first image to which pixel points in the second image are mapped;
stitching the mapped pixel points of the second image onto the overlapping area and/or the extended area by means of the depth information and the visible light information of the pixel points in the second image, to obtain a stitched image.
In this embodiment, the storage medium stores executable program code that, when run, performs the image stitching method of the present application described above, thereby achieving the technical effect of improving image stitching efficiency and solving the technical problem of low stitching efficiency in the image stitching process of the related art.
For the electronic device, application program, and storage medium embodiments, since the method content involved is substantially similar to the foregoing method embodiments, the description is relatively simple; for relevant parts, refer to the partial description of the method embodiments.
The serial numbers of the above embodiments of the present application are for description only and do not represent the superiority or inferiority of the embodiments.
In the above embodiments of the present application, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, refer to the relevant description of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed technical content may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of units may be a division by logical function, and in actual implementation there may be other ways of division: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application — in essence, or the part contributing to the prior art, or all or part of the technical solution — may be embodied in the form of a software product stored in a storage medium, including a number of instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The above are only preferred embodiments of the present application. It should be pointed out that those of ordinary skill in the art can make several improvements and refinements without departing from the principles of the present application, and these improvements and refinements should also be regarded as falling within the protection scope of the present application.

Claims (21)

  1. An image stitching method, comprising:
    acquiring a first image captured by a first camera and a second image captured by a second camera, wherein the first image is an image with visible light information, the second image is an image including depth information and the visible light information, the first camera is disposed adjacent to the second camera, and the captured first image and second image have an overlapping area;
    mapping pixel points in the second image onto the overlapping area and/or an extended area, wherein the extended area is an image area outside the first image to which pixel points in the second image are mapped;
    stitching the mapped pixel points of the second image onto the overlapping area and/or the extended area by means of the depth information and the visible light information of the pixel points in the second image, to obtain a stitched image.
  2. The method according to claim 1, wherein the step of mapping the pixel points in the second image onto the overlapping area and/or the extended area comprises:
    reading coordinate information of the pixel points in the second image in a second image coordinate system;
    mapping, by coordinate transformation, the coordinate information of the pixel points in the second image from the second image coordinate system to a first image coordinate system, to obtain coordinate information of the pixel points in the second image in the first image coordinate system;
    determining, from the coordinate information of the pixel points in the second image in the first image coordinate system, the positions of the pixel points in the second image in the overlapping area and/or the extended area.
  3. The method according to claim 2, wherein the step of mapping, by coordinate transformation, the coordinate information of the pixel points in the second image from the second image coordinate system to the first image coordinate system comprises:
    mapping the coordinate information of the pixel points in the second image from the second image coordinate system to a second camera coordinate system, to obtain coordinate information of the pixel points in the second image in the second camera coordinate system;
    mapping the coordinate information of the pixel points in the second image from the second camera coordinate system to a first camera coordinate system, to obtain coordinate information of the pixel points in the second image in the first camera coordinate system;
    mapping the coordinate information of the pixel points in the second image from the first camera coordinate system to the first image coordinate system, to obtain the coordinate information of the pixel points in the second image in the first image coordinate system.
  4. The method according to claim 3, wherein the coordinate information m2(u2, v2) of a pixel point in the second image in the second image coordinate system is mapped to the second camera coordinate system by the following first formula, to obtain the coordinate information m2(X2, Y2, Z2) of the pixel point in the second camera coordinate system:

$$\begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \end{bmatrix} = D_2 A_2^{-1} \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix}$$

    wherein A2 is the internal parameter matrix of the second camera, and D2 is the scale factor of the second image.
  5. The method according to claim 4, wherein the coordinate information m2(X2, Y2, Z2) of the pixel point in the second camera coordinate system is mapped to the first camera coordinate system by the following second formula, to obtain the coordinate information m1(X1, Y1, Z1) of the pixel point in the first camera coordinate system:

$$\begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ O^{T} & 1 \end{bmatrix} \begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \\ 1 \end{bmatrix}$$

    wherein R is the rotation matrix between the first camera coordinate system and the second camera coordinate system, and t is the relative translation vector between the first camera coordinate system and the second camera coordinate system.
  6. The method according to claim 5, wherein the coordinate information m1(X1, Y1, Z1) of the pixel point in the first camera coordinate system is mapped to the first image coordinate system by the following third formula, to obtain the coordinate information m1(u1, v1) of the pixel point in the first image coordinate system:

$$D_1 \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = A_1 \begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \end{bmatrix}$$

    wherein A1 is the internal parameter matrix of the first camera, and D1 is the scale factor of the first image.
  7. The method according to claim 1, wherein the step of stitching the mapped pixel points of the second image onto the overlapping area and/or the extended area by means of the depth information and the visible light information of the pixel points in the second image, to obtain a stitched image, comprises:
    when a pixel point in the second image is mapped to the overlapping area, performing a weighting operation on the visible light information of the pixel point in the first image and the visible light information of the pixel point in the second image, and assigning the weighted visible light information to the visible light information of the pixel point in the overlapping area of the stitched image;
    when a pixel point in the second image is mapped to the extended area, assigning the visible light information of the pixel point in the second image to the visible light information of the pixel point in the extended area of the stitched image.
  8. The method according to claim 2, wherein the step of stitching the mapped pixel points of the second image onto the overlapping area and/or the extended area by means of the depth information and the visible light information of the pixel points in the second image, to obtain a stitched image, comprises:
    judging, from the coordinate information of the mapped pixel points of the second image in the first image coordinate system, whether multiple pixel points in the second image are simultaneously mapped to the same pixel point of the overlapping area and/or the extended area;
    when multiple pixel points in the second image are simultaneously mapped to the same pixel point of the overlapping area and/or the extended area, determining, according to multiple pieces of depth information of the multiple pixel points in the second image, the visible light information of the same pixel point of the overlapping area and/or the extended area in the overlapping area and/or the extended area of the stitched image.
  9. The method according to claim 8, wherein the step of determining, according to the multiple pieces of depth information of the multiple pixel points in the second image, the visible light information of the same pixel point of the overlapping area and/or the extended area in the overlapping area and/or the extended area of the stitched image comprises:
    comparing the multiple pieces of depth information of the multiple pixel points in the second image, and assigning the visible light information of the pixel point with the smallest depth information among the multiple pieces of depth information to the visible light information of the same pixel point in the overlapping area and/or the extended area of the stitched image; or
    performing a weighting operation on the visible light information of the multiple pixel points in the second image, and assigning the weighted visible light information to the visible light information of the same pixel point in the overlapping area and/or the extended area of the stitched image.
  10. An image stitching device, comprising:
    an acquisition unit configured to acquire a first image captured by a first camera and a second image captured by a second camera, wherein the first image is an image with visible light information, the second image is an image including depth information and visible light information, the first camera is disposed adjacent to the second camera, and the captured first image and second image have an overlapping area;
    a mapping unit configured to map pixel points in the second image onto the overlapping area and/or an extended area, wherein the extended area is an image area outside the first image to which pixel points in the second image are mapped;
    a splicing unit configured to stitch the mapped pixel points of the second image onto the overlapping area and/or the extended area by means of the depth information and the visible light information of the pixel points in the second image, to obtain a stitched image.
  11. The device according to claim 10, wherein the mapping unit comprises:
    a reading module configured to read coordinate information of the pixel points in the second image in a second image coordinate system;
    a coordinate transformation module configured to map, by coordinate transformation, the coordinate information of the pixel points in the second image from the second image coordinate system to a first image coordinate system, to obtain coordinate information of the pixel points in the second image in the first image coordinate system;
    a determining module configured to determine, from the coordinate information of the pixel points in the second image in the first image coordinate system, the positions of the pixel points in the second image in the overlapping area and/or the extended area.
  12. The device according to claim 11, wherein the coordinate transformation module comprises:
    a first mapping sub-module configured to map the coordinate information of the pixel points in the second image from the second image coordinate system to a second camera coordinate system, to obtain coordinate information of the pixel points in the second image in the second camera coordinate system;
    a second mapping sub-module configured to map the coordinate information of the pixel points in the second image from the second camera coordinate system to a first camera coordinate system, to obtain coordinate information of the pixel points in the second image in the first camera coordinate system;
    a third mapping sub-module configured to map the coordinate information of the pixel points in the second image from the first camera coordinate system to the first image coordinate system, to obtain the coordinate information of the pixel points in the second image in the first image coordinate system.
  13. The device according to claim 12, wherein the first mapping sub-module calculates, by the following first formula, the coordinate information m2(X2, Y2, Z2) of a pixel point in the second image in the second camera coordinate system:

$$\begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \end{bmatrix} = D_2 A_2^{-1} \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix}$$

    wherein A2 is the internal parameter matrix of the second camera, and D2 is the scale factor of the second image.
  14. The device according to claim 13, wherein the second mapping sub-module calculates, by the following second formula, the coordinate information m1(X1, Y1, Z1) of the pixel point in the second image in the first camera coordinate system:

$$\begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ O^{T} & 1 \end{bmatrix} \begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \\ 1 \end{bmatrix}$$

    wherein R is the rotation matrix between the first camera coordinate system and the second camera coordinate system, and t is the relative translation vector between the first camera coordinate system and the second camera coordinate system.
  15. The device according to claim 14, wherein the third mapping sub-module calculates, by the following third formula, the coordinate information m1(u1, v1) of the pixel point in the second image in the first image coordinate system:

$$D_1 \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = A_1 \begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \end{bmatrix}$$

    wherein A1 is the internal parameter matrix of the first camera, and D1 is the scale factor of the first image.
  16. The device according to claim 10, wherein the splicing unit comprises:
    a first splicing module configured to, when a pixel point in the second image is mapped to the overlapping area, perform a weighting operation on the visible light information of the pixel point in the first image and the visible light information of the pixel point in the second image, and assign the weighted visible light information to the visible light information of the pixel point in the overlapping area of the stitched image;
    a second splicing module configured to, when a pixel point in the second image is mapped to the extended area, assign the visible light information of the pixel point in the second image to the visible light information of the pixel point in the extended area of the stitched image.
  17. The device according to claim 11, wherein the splicing unit comprises:
    a judging module configured to judge, from the coordinate information of the mapped pixel points of the second image in the first image coordinate system, whether multiple pixel points in the second image are simultaneously mapped to the same pixel point of the overlapping area and/or the extended area;
    a determining module configured to, when multiple pixel points in the second image are simultaneously mapped to the same pixel point of the overlapping area and/or the extended area, determine, according to multiple pieces of depth information of the multiple pixel points in the second image, the visible light information of the same pixel point of the overlapping area and/or the extended area in the overlapping area and/or the extended area of the stitched image.
  18. The device according to claim 17, wherein the determining module comprises:
    a comparison sub-module configured to compare the multiple pieces of depth information of the multiple pixel points in the second image, and assign the visible light information of the pixel point with the smallest depth information among the multiple pieces of depth information to the visible light information of the same pixel point in the overlapping area and/or the extended area of the stitched image;
    a weighting sub-module configured to perform a weighting operation on the visible light information of the multiple pixel points in the second image, and assign the weighted visible light information to the visible light information of the same pixel point in the overlapping area and/or the extended area of the stitched image.
  19. An electronic device, comprising: a housing, a processor, a memory, a circuit board, and a power circuit, wherein the circuit board is disposed inside the space enclosed by the housing, the processor and the memory are disposed on the circuit board, the power circuit is used to supply power to the respective circuits or devices, the memory is used to store executable program code, and the processor runs the executable program code stored in the memory to perform the image stitching method according to any one of claims 1 to 9.
  20. An application program, wherein the application program is used to perform, at runtime, the image stitching method according to any one of claims 1 to 9.
  21. A storage medium, wherein the storage medium is used to store executable program code, and the executable program code is run to perform the image stitching method according to any one of claims 1 to 9.