WO2018068719A1 - Procédé et appareil de collage d'image - Google Patents

Procédé et appareil de collage d'image

Info

Publication number
WO2018068719A1
Authority
WO
WIPO (PCT)
Prior art keywords
coordinate
image
coordinate system
pixel
optical center
Prior art date
Application number
PCT/CN2017/105657
Other languages
English (en)
Chinese (zh)
Inventor
袁梓瑾
简伟华
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Publication of WO2018068719A1 publication Critical patent/WO2018068719A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2624Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of whole input images, e.g. splitscreen
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • the present application relates to the field of image processing technologies, and in particular, to an image stitching method and apparatus.
  • 360-degree panoramic video has gradually become one of the main contents in the field of virtual reality. Compared to traditional limited-view video, panoramic video provides users with a more realistic and immersive viewing experience. Since single-lens systems capable of capturing panoramic video are still rare, panoramic video is generally composed of video captured by multiple camera devices or multiple lens systems.
  • the present invention provides an image splicing method and apparatus, which can provide a spliced image without parallax and improve resource utilization of the image splicing device.
  • the present invention provides an image stitching method applied to an imaging device including at least two camera devices, the method comprising:
  • for each imaging device, constructing a three-dimensional coordinate system of the imaging device with a preset common optical center of the at least two imaging devices as the origin;
  • All images are stitched according to the third coordinate of each pixel in all images.
  • the present invention also provides an image splicing apparatus comprising a processor and a memory, wherein the memory stores instructions executable by the processor, and when the instructions are executed, the processor is configured to:
  • for each imaging device, construct a three-dimensional coordinate system of the imaging device with a preset common optical center of the at least two imaging devices as the origin;
  • for each pixel in an image captured by each camera device, perform the following process: converting a first coordinate of the pixel in a two-dimensional coordinate system of the image into a second coordinate in the three-dimensional coordinate system; and correcting the second coordinate according to the optical center of the imaging device and the target object point specified in the image to obtain a third coordinate; and
  • All images are stitched according to the third coordinate of each pixel in all images.
  • the invention further provides a computer readable storage medium storing computer readable instructions for causing at least one processor to perform the method described above.
  • the present invention further provides an image pickup apparatus comprising at least two image pickup apparatuses, an image display apparatus, a processor, and a memory, wherein the memory stores instructions executable by the processor, when the instructions are executed,
  • the processor is used to:
  • for each imaging device, construct a three-dimensional coordinate system of the imaging device with a preset common optical center of the at least two imaging devices as the origin;
  • for each pixel in an image captured by each camera device, perform the following process: converting the first coordinate of the pixel in the two-dimensional coordinate system of the image into a second coordinate in the three-dimensional coordinate system; and correcting the second coordinate according to an optical center of the imaging device and a target object point specified in the image to obtain a third coordinate;
  • the stitched image is displayed by the image display device.
  • FIG. 1a is a schematic diagram of an implementation environment according to an embodiment of the invention.
  • FIG. 1b is an exemplary flowchart of an image stitching method according to an embodiment of the invention.
  • FIG. 2 is a schematic diagram of constructing a Cartesian coordinate system in accordance with an embodiment of the present invention
  • FIG. 3 is an exemplary flowchart of a method for compensating an optical center offset according to an embodiment of the invention
  • 4a is a schematic diagram of coordinates for correcting a second coordinate according to an embodiment of the invention.
  • 4b is a schematic diagram of coordinates for determining an offset according to an embodiment of the invention.
  • FIG. 5 is an exemplary flowchart of an image stitching method according to another embodiment of the present invention.
  • 6a is a schematic diagram of a two-dimensional image before splicing according to an embodiment of the invention.
  • 6b is a schematic diagram of a two-dimensional image after splicing according to an embodiment of the invention.
  • FIG. 7 is a schematic structural diagram of an image splicing apparatus according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of an image splicing apparatus according to another embodiment of the present invention.
  • Two-dimensional images captured by two lenses that do not share an optical center always exhibit a certain parallax in their common field of view.
  • For target objects at different depths, the degree of parallax differs, which ultimately leads to visually unacceptable flaws in the stitched image, such as ghosting, double images, and misalignment of continuous lines. The spliced image therefore has a poor effect, which impairs the user's viewing experience and reduces the resource utilization of the imaging device.
  • the image splicing method and apparatus in the embodiments of the present invention are applicable to any imaging device having at least two camera devices, wherein the fields of view of two adjacent camera devices have a common portion, that is, a common-view portion, so that the images taken by the two have overlapping parts.
  • the images captured by each camera device are processed respectively and then stitched across the entire imaging device, so that complete alignment at the specified target object point (or depth surface) can be achieved.
  • FIG. 1a is a schematic diagram of an implementation environment according to an embodiment of the invention.
  • the imaging system 100 includes a target object 200 and an imaging device 300.
  • the imaging device 300 further includes an image splicing device 310, an image display device 320, and imaging devices 331-335. Combined together, the imaging devices 331-335 can capture a 360-degree panorama.
  • the target object 200 is photographed in response to a user operation: each of the imaging apparatuses 331-335 captures an image of the target object 200 and transmits the captured image to the image splicing device 310 for splicing; the image splicing device 310 then transmits the spliced panoramic image to the image display device 320 for display to the user.
  • the imaging device 300 may be a wearable smart terminal, and each camera device is a single camera lens that can take a single image or multiple consecutive images.
  • the image display device 320 is a display screen, and provides a visual interface for the user to display the stitched panoramic image.
  • FIG. 1b is an exemplary flowchart of an image stitching method according to an embodiment of the invention. The method is applied to an image pickup apparatus including at least two image pickup apparatuses, as shown in FIG. 1b, comprising the following steps:
  • Step 101 Acquire images captured by at least two imaging devices.
  • Step 102 For each imaging device, construct a three-dimensional coordinate system of the imaging device with the preset common optical center of the at least two imaging devices as the origin.
  • Each camera device has the optical center of its own lens. In this step, a common optical center is first preset for the entire imaging device; that is, all the camera devices share this ideal optical center, which is then used as the origin to construct a three-dimensional coordinate system for each camera device.
  • the specific method includes: taking the common optical center as the origin, establishing the two-dimensional coordinate system (X, Y) on the plane parallel to the imaging surface of the camera device, and then determining the Z-axis according to the two-dimensional coordinate system (X, Y) and the right-hand rule.
  • the three-dimensional coordinate system is a Cartesian coordinate system.
  • This Cartesian coordinate system is also referred to as a Cartesian world coordinate system with respect to the coordinate system of the camera.
  • FIG. 2 is a schematic diagram of constructing a Cartesian coordinate system in accordance with an embodiment of the present invention.
  • the X-axis, the Y-axis, and the Z-axis together constitute a Cartesian coordinate system of the image pickup device A
  • the common optical center O is the origin of the coordinate system.
  • Incident light enters the lens system of imaging device A at an angle θ and, after being refracted by the lens, is imaged on the imaging surface x'o'y' of imaging device A.
  • The XOY plane and the x'o'y' plane are parallel.
  • The two-dimensional coordinate system (X, Y) is established on the XOY plane, which is parallel to the imaging plane x'o'y', and the Z-axis is then determined according to the two-dimensional coordinate system (X, Y) and the right-hand rule.
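The right-hand-rule step above can be sketched numerically: given direction vectors for the X and Y axes, the Z axis is their cross product. A minimal sketch (not from the patent itself):

```python
import numpy as np

# Determine the Z axis of a right-handed coordinate system from the
# X and Y axes via the cross product (right-hand rule).
def z_axis(x_axis, y_axis):
    z = np.cross(x_axis, y_axis)
    return z / np.linalg.norm(z)  # return a unit vector
```

With X along (1, 0, 0) and Y along (0, 1, 0), the resulting Z axis points along (0, 0, 1), matching the orientation shown in Figure 2.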
  • Step 103 Performing the following processing for each pixel in an image captured by each camera:
  • Step 1031 Convert the first coordinate of the pixel in the two-dimensional coordinate system of the image to the second coordinate in the three-dimensional coordinate system;
  • the captured image is two-dimensional, and each pixel has a two-dimensional coordinate, that is, a first coordinate, in the two-dimensional coordinate system of the image.
  • Step 1032 Correct the second coordinate according to the optical center of the imaging device and the target object point specified in the image to obtain a third coordinate.
  • Converting the first coordinate to the second coordinate specifically includes: determining the angular coordinate of the pixel according to the first coordinate; determining, according to the lens imaging geometric function of the camera device and the first coordinate, the angle between the incident light and the Z-axis of the three-dimensional coordinate system (X, Y, Z); and then calculating the second coordinate from that angle and the angular coordinate.
  • Determining the angular coordinate of the pixel according to the first coordinate comprises determining the following trigonometric values, where atan(·) denotes the arctangent function, pw and ph denote the width and height of a pixel, respectively, and f is the focal length of the lens (as shown in Figure 2).
  • For a pixel p1' in the imaging plane x'o'y' with first coordinate (x1, y1), the angular coordinate is the angle between the line connecting p1' to the origin o' and the x'o' axis. Converted to the Cartesian coordinate system (X, Y, Z), p1' corresponds to the object point P1, whose three-dimensional coordinates are shown in formula (2). The projection of P1 onto the two-dimensional XOY plane is p1, and the angle between the line connecting p1 to the origin O and the XO axis is the same angular coordinate.
  • The common optical center O described above is shared by all the imaging devices, but since each imaging device in practice has its own optical center O', the imaging must be compensated according to the deviation between the two optical centers so that it is consistent with imaging from the origin O.
  • FIG. 3 is an exemplary flowchart of an optical center offset compensation method according to an embodiment of the present invention.
  • As shown in FIG. 3, correcting the second coordinate according to the optical center of the imaging device and the target object point specified in the image to obtain the third coordinate includes the following steps:
  • Step 301 Obtain the distance between the common optical center and the target object point, that is, obtain the depth of the target object point.
  • the target object point may be specified by the user according to the object point of interest in the captured image, or may be specified according to the main target object or the content in the scene.
  • FIG. 4a is a schematic diagram of coordinates for correcting a second coordinate according to an embodiment of the invention.
  • In FIG. 4a, the target object point P1 lies on the incident light ray. The above distance is the length of the projection of P1 onto the XOZ plane, that is, the length between O and P', which is denoted R0 and is also referred to as the depth of the object point P1.
  • Step 302 Obtain an offset of the optical center of the imaging device relative to the common optical center.
  • Specifically, the above offset may be estimated by regression or simulation according to sample data of the overlapping images and their correspondence/matching relationship with the imaging devices.
  • Consider, for example, a panoramic (i.e., 360°) video system in which three cameras are placed in three-dimensional space, each camera capturing an image within a certain range of viewing angles.
  • FIG. 4b is a schematic diagram of coordinates for determining an offset according to an embodiment of the invention.
  • In Fig. 4b, in the ABC coordinate system constructed on the three-dimensional spherical surface 400, cameras 401 and 402 are arranged at different positions, and the images they capture have overlapping portions.
  • the offset between the optical center O' of each camera and the origin O can be determined from the sample data of the superimposed image.
  • In FIG. 4a, the offsets of the optical center O' with respect to the origin O along the X, Y, and Z axes are Tx, Ty, and Tz, respectively.
  • Step 303 Calculate the third coordinate according to the distance, the offset, and the second coordinate.
  • each coordinate value x 3 , y 3 , and z 3 in the third coordinate (x 3 , y 3 , z 3 ) can be calculated according to the following formula:
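The patent's exact correction formula is reproduced only as a figure in this text, so the following is a geometric reconstruction under stated assumptions: the object point sits at depth R0 along the ray from the device's own optical center O', whose offset relative to the common optical center O is (Tx, Ty, Tz), and the result is renormalized to a unit direction. The function name and argument layout are this sketch's own, not the patent's:

```python
import math

# Sketch of step 303: correct the second coordinate (a unit direction as
# seen from the device's own optical center O') into the third coordinate,
# a unit direction as seen from the common optical center O.
# ASSUMED geometry: object point P = O' + R0 * direction, with O' = O + T.
def compensate_offset(second, r0, offset):
    x2, y2, z2 = second
    tx, ty, tz = offset
    # Express the object point relative to the common optical center O.
    px, py, pz = r0 * x2 + tx, r0 * y2 + ty, r0 * z2 + tz
    # Renormalize so the third coordinate is again a unit direction.
    norm = math.sqrt(px * px + py * py + pz * pz)
    return (px / norm, py / norm, pz / norm)
```

With a zero offset the direction is unchanged, as expected: a camera already at the common optical center needs no compensation.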
  • Step 104 Splice all images according to the third coordinate of each pixel in all images.
  • In summary, a three-dimensional coordinate system of each imaging device is constructed with the preset common optical center of the at least two imaging devices as the origin; for each pixel in an image captured by each camera device, the first coordinate of the pixel in the two-dimensional coordinate system of the image is converted into a second coordinate in the three-dimensional coordinate system, and the second coordinate is corrected according to the optical center of the device and the target object point specified in the image to obtain a third coordinate; all the images are then spliced according to the third coordinate of each pixel in all images, thereby providing parallax-free splicing.
  • The depth surface technique can adaptively select the depth position of the main content in the scene as the non-parallax stitching depth surface, so that the main content in the scene presents a non-parallax stitching effect.
  • The coordinate conversion and optical center offset compensation in the above method are independent of the geometric characteristics and shape of the specific target object point, making the method well suited to video applications whose content constantly changes over time.
  • The above method does not need to perform feature detection or feature matching on the scene content, so the user can quickly and flexibly specify the target object point or the non-parallax stitching depth surface.
  • The above method is also independent of the specific imaging geometric formula of the imaging device and of the projection type of the final stitching; it is therefore versatile and improves resource utilization of the image splicing device.
  • FIG. 5 is an exemplary flowchart of an image stitching method according to another embodiment of the present invention. As shown in FIG. 5, the method is applied to an image pickup apparatus including at least two image pickup apparatuses, and includes the following steps:
  • Step 501 Acquire an image captured by each of the at least two imaging devices.
  • Step 502 For each camera device, construct a Cartesian coordinate system of the camera device with the common optical center of the preset at least two camera devices as an origin.
  • Step 503 Perform the following processing for each pixel in an image captured by each camera:
  • Step 5031 Perform coordinate conversion:
  • Step 5032 Perform optical center offset compensation:
  • the second coordinate is corrected based on the optical center of the imaging device and the target object point specified in the image to obtain a third coordinate.
  • In this embodiment, the modulus of the second coordinate is 1; that is, the established Cartesian coordinate system is a normalized Cartesian coordinate system. Since a normalized Cartesian coordinate system contains no depth information, two object points at different depths on the same incident ray have the same normalized Cartesian coordinate values. As shown in Figure 2, the point corresponding to p1' after conversion to the normalized Cartesian coordinate system (X, Y, Z) is not necessarily the object point P1; besides P1, it may also be another object point along the incident ray, such as P2 in Figure 2.
  • The depths of the object points P1 and P2 are different, that is, their projections onto the XOZ plane lie at different distances from the optical center O, but the two have the same normalized Cartesian coordinate values (x2, y2, z2) and both correspond to p1' on the imaging plane x'o'y'.
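The depth ambiguity described above can be checked numerically. In this small sketch (an illustration, not patent text), two hypothetical object points on the same incident ray at depths 2 and 5 normalize to identical coordinates:

```python
import math

# Normalize a 3-D point to a unit direction (modulus 1).
def normalize(p):
    n = math.sqrt(sum(c * c for c in p))
    return tuple(c / n for c in p)

# P1 and P2 lie on the same incident ray through the optical center O,
# at depths 2 and 5; after normalization their coordinates coincide,
# so the normalized system alone cannot distinguish their depths.
direction = normalize((1.0, 2.0, 2.0))
p1 = tuple(2.0 * c for c in direction)
p2 = tuple(5.0 * c for c in direction)
```

This is exactly why step 301 must obtain the depth R0 separately before the offset compensation can be applied.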
  • Step 504 Project the third coordinate into the unit panoramic sphere according to a preset projection type according to the position of each camera in the imaging device.
  • In this way, the third coordinates are projected onto the unit panoramic sphere.
  • Preset projection types include, but are not limited to, rectilinear, fisheye, equirectangular, orthographic, stereographic, and the like.
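As one concrete instance of the projection types listed above (a common choice for panoramic video, not necessarily the patent's implementation), an equirectangular projection maps a unit-sphere direction to output image coordinates:

```python
import math

# Equirectangular projection: map a unit direction (x, y, z) on the
# panoramic sphere to fractional pixel coordinates (u, v) in a
# width x height output image. Longitude spans [-pi, pi] -> u,
# latitude spans [-pi/2, pi/2] -> v.
def to_equirectangular(direction, width, height):
    x, y, z = direction
    lon = math.atan2(x, z)                    # longitude about the vertical axis
    lat = math.asin(max(-1.0, min(1.0, y)))   # latitude, clamped for safety
    u = (lon / math.pi + 1.0) / 2.0 * (width - 1)
    v = (lat / (math.pi / 2) + 1.0) / 2.0 * (height - 1)
    return u, v
```

The forward direction (0, 0, 1) lands at the center of a 361x181 canvas, i.e. (180.0, 90.0).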
  • Step 505 Splice all the images on the unit panoramic sphere to obtain a panoramic image.
  • In this way, the parallax-free splicing depth surface is reached at the specified target object point position, adjacent images are completely aligned, and a seam-free splicing effect is obtained.
  • the three-dimensional panoramic image can be reconverted into a two-dimensional image.
  • FIG. 6a is a schematic diagram of a two-dimensional image before splicing according to an embodiment of the invention.
  • the target point is the first flagpole closest to the lens (as indicated by arrow 601), corresponding to P1-P' shown in Figure 4a.
  • In Figure 6a, image misalignment up, down, left, and right due to parallax occurs at the flagpole: an extra point 611' appears at the lower left of the flagpole's top end 611, and the flag, originally the image shown at 612, finally appears as 612' due to the parallax (as indicated by the dotted line).
  • FIG. 6b is a schematic diagram of a two-dimensional image after splicing according to an embodiment of the invention.
  • The left picture 620 is the image after coordinate transformation and optical center offset compensation; the upper and lower images are perfectly aligned at the flagpole.
  • The misaligned images beyond the top end 611 and the flag 612 disappear, showing a clear flagpole. It can be seen that perfect alignment of the main scene content (i.e., the flagpole) is achieved, and the position of the flagpole becomes the non-parallax stitching depth surface.
  • In another embodiment, reverse processing may also be adopted: inverse processing is performed pixel by pixel on a blank panoramic canvas (i.e., the optical center offset compensation described in step 5032 and the coordinate conversion described in step 5031 are performed in sequence, in reverse) to find the corresponding pixel position in the image captured by the camera device, and interpolation is then used to obtain the actual value of the pixel on the panoramic canvas.
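The reverse pass above can be sketched as follows. The inverse coordinate chain (steps 5032 then 5031, run in reverse) is abstracted as a hypothetical callable `inverse_map`, and bilinear interpolation supplies the pixel value, as is conventional for such backward warping:

```python
# Backward stitching sketch: for each pixel of a blank panoramic canvas,
# an inverse mapping yields a fractional position in a source camera
# image, which is then sampled by bilinear interpolation.
def bilinear_sample(img, x, y):
    # img is a 2-D list of intensities; (x, y) may be fractional.
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, len(img[0]) - 1), min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def fill_canvas(canvas_w, canvas_h, inverse_map, source_img):
    # inverse_map(u, v) -> (x, y) in the source image; it stands in for
    # the reversed steps 5032 and 5031 and is a hypothetical interface.
    return [[bilinear_sample(source_img, *inverse_map(u, v))
             for u in range(canvas_w)] for v in range(canvas_h)]
```

Backward warping like this guarantees every canvas pixel receives a value, avoiding the holes that forward projection of source pixels can leave.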
  • FIG. 7 is a schematic structural diagram of an image splicing apparatus according to an embodiment of the present invention.
  • the image splicing device 700 includes an obtaining module 710, a coordinate system building module 720, a coordinate processing module 730, and a splicing module 740, where
  • the acquiring module 710 is configured to acquire an image captured by each of the at least two camera devices;
  • a coordinate system construction module 720 configured to construct, for each camera device, a three-dimensional coordinate system of the camera device with a common optical center of at least two preset imaging devices as an origin;
  • the coordinate processing module 730 is configured to, for each pixel in an image captured by each camera device, perform the following process: converting the first coordinate of the pixel in the two-dimensional coordinate system of the image into a second coordinate in the three-dimensional coordinate system; and correcting the second coordinate according to the optical center of the imaging device and the target object point specified in the image to obtain a third coordinate; and
  • the splicing module 740 is configured to splicing all the images according to the third coordinate of each pixel in all the images.
  • the coordinate processing module 730 includes a conversion unit 731 for determining the angular coordinate of the pixel according to the first coordinate; determining, according to the lens imaging geometric function of the imaging device and the first coordinate, the angle between the incident light and the Z-axis of the three-dimensional coordinate system (X, Y, Z); and calculating the second coordinate from the angular coordinate and the included angle.
  • the converting unit 731 is configured to determine:
  • the three-dimensional coordinate system is a Cartesian coordinate system. If the second coordinate is represented by (x2, y2, z2) and the angle is represented by θ, the conversion unit 731 is configured to calculate x2, y2, and z2 according to the following formula:
  • the coordinate processing module 730 includes a correction unit 732 for acquiring a distance between the common optical center and the target object point; acquiring an offset of the optical center of the imaging device with respect to the common optical center; The offset and the second coordinate calculate the third coordinate.
  • the correcting unit 732 is configured to calculate x 3 , y 3 and z 3 according to the following formula:
  • the splicing module 740 is configured to project the third coordinates onto the unit panoramic sphere according to a preset projection type and the position of each camera device in the imaging device, and to splice all the images on the unit panoramic sphere to obtain a panoramic image.
  • FIG. 8 is a schematic structural diagram of an image splicing apparatus according to another embodiment of the present invention.
  • the image splicing apparatus 800 can include a processor 810, a memory 820, a port 830, and a bus 840.
  • The processor 810 and the memory 820 are interconnected by the bus 840.
  • The processor 810 can receive and transmit data through the port 830. Among them:
  • the processor 810 is configured to execute a machine readable instruction module stored by the memory 820.
  • the memory 820 stores machine readable instruction modules executable by the processor 810.
  • the instruction modules executable by the processor 810 include an acquisition module 821, a coordinate system construction module 822, a coordinate processing module 823, and a splicing module 824. Among them:
  • the acquiring module 821 may be executed by the processor 810 to: acquire an image captured by each of the at least two camera devices;
  • the coordinate system construction module 822 may be configured by the processor 810 to: for each camera device, construct a three-dimensional coordinate system of the camera device with the common optical center of the preset at least two camera devices as an origin;
  • the coordinate processing module 823 may be executed by the processor 810 to: for each pixel in an image captured by each camera device, perform the following process: converting the pixel to the first coordinate in the two-dimensional coordinate system of the image a second coordinate in the three-dimensional coordinate system; correcting the second coordinate according to the optical center of the imaging device and the target object point specified in the image to obtain a third coordinate;
  • the splicing module 824 when executed by the processor 810, can splicing all of the images based on the third coordinate of each pixel in all of the images.
  • an image pickup apparatus includes at least two image pickup apparatuses, an image display apparatus, a processor, and a memory, and the memory stores instructions executable by the processor, and when executing the instructions, the processor is configured to:
  • for each imaging device, a three-dimensional coordinate system of the imaging device is constructed with the preset common optical center of the at least two imaging devices as the origin;
  • for each pixel in an image captured by each camera device, the following process is performed: converting a first coordinate of the pixel in a two-dimensional coordinate system of the image into a second coordinate in the three-dimensional coordinate system; and correcting the second coordinate according to the optical center of the imaging device and the target object point specified in the image to obtain a third coordinate; and
  • the stitched image is displayed by the image display device.
  • each functional module in each embodiment of the present invention may be integrated into one processing unit, or each module may exist physically separately, or two or more modules may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • each of the embodiments of the present invention can be implemented by a data processing program executed by a data processing device such as a computer.
  • the data processing program constitutes the present invention.
  • A data processing program is usually stored in a storage medium and is executed by reading the program directly from the storage medium or by installing or copying the program to a storage device (such as a hard disk and/or memory) of the data processing device. Therefore, such a storage medium also constitutes the present invention.
  • the storage medium can use any type of recording method, such as paper storage medium (such as paper tape, etc.), magnetic storage medium (such as floppy disk, hard disk, flash memory, etc.), optical storage medium (such as CD-ROM, etc.), magneto-optical storage medium (such as MO, etc.).
  • the present invention also discloses a storage medium in which is stored a data processing program for performing any of the above-described embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an image stitching method and apparatus. The method is applied to an image capture device comprising at least two image capture apparatuses, and consists of: acquiring images respectively captured by the at least two image capture apparatuses; for each image capture apparatus, constructing a three-dimensional coordinate system of the image capture apparatus by taking a preset common optical center of the two or more image capture apparatuses as the origin; for each pixel in an image captured by each image capture apparatus, performing the following processing: converting a first coordinate of the pixel in a two-dimensional coordinate system of the image into a second coordinate in the three-dimensional coordinate system, and correcting the second coordinate according to the optical center of the image capture apparatus and a specified target object point in the image so as to obtain a third coordinate; and stitching all the images according to the third coordinate of each pixel in all the images.
PCT/CN2017/105657 2016-10-12 2017-10-11 Image stitching method and apparatus WO2018068719A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610890008.9 2016-10-12
CN201610890008.9A CN106331527B (zh) 2016-10-12 2016-10-12 Image stitching method and apparatus

Publications (1)

Publication Number Publication Date
WO2018068719A1 true WO2018068719A1 (fr) 2018-04-19

Family

ID=57820319

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/105657 WO2018068719A1 (fr) 2016-10-12 2017-10-11 Image stitching method and apparatus

Country Status (2)

Country Link
CN (1) CN106331527B (fr)
WO (1) WO2018068719A1 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111142825A (zh) * 2019-12-27 2020-05-12 杭州拓叭吧科技有限公司 Multi-screen view display method, system and electronic device
CN113873220A (zh) * 2020-12-03 2021-12-31 上海飞机制造有限公司 Deviation analysis method, apparatus, system, device and storage medium
CN114554176A (zh) * 2022-01-24 2022-05-27 北京有竹居网络技术有限公司 Depth camera
CN115781665A (zh) * 2022-11-01 2023-03-14 深圳史河机器人科技有限公司 Monocular-camera-based robotic arm control method, apparatus and storage medium
CN116643393A (zh) * 2023-07-27 2023-08-25 南京木木西里科技有限公司 Processing method and system based on microscopic image deflection
CN118118645A (zh) * 2024-04-23 2024-05-31 北京工业大学 VR-based panoramic farm implementation method and apparatus

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106331527B (zh) * 2016-10-12 2019-05-17 腾讯科技(北京)有限公司 Image stitching method and apparatus
TWI660328B (zh) 2017-02-23 2019-05-21 鈺立微電子股份有限公司 Image device for generating a depth map using non-planar projection images, and related method
CN110519774B (zh) * 2018-05-21 2023-04-18 中国移动通信集团广东有限公司 VR-based base station survey method, system and device
EP3606032B1 (fr) * 2018-07-30 2020-10-21 Axis AB Method and camera system combining views from a plurality of cameras
CN109889736B (zh) * 2019-01-10 2020-06-19 深圳市沃特沃德股份有限公司 Image acquisition method, apparatus and device based on dual and multiple cameras
CN110072158B (zh) * 2019-05-06 2021-06-04 复旦大学 Double-C panoramic video projection method for the spherical equatorial region
CN112449100B (zh) * 2019-09-03 2023-11-17 中国科学院长春光学精密机械与物理研究所 Stitching method, apparatus, terminal and storage medium for oblique aerial camera images
CN111432119B (zh) * 2020-03-27 2021-03-23 北京房江湖科技有限公司 Image capturing method and apparatus, computer-readable storage medium and electronic device
US11645780B2 (en) 2020-03-16 2023-05-09 Realsee (Beijing) Technology Co., Ltd. Method and device for collecting images of a scene for generating virtual reality data
CN112771842A (zh) * 2020-06-02 2021-05-07 深圳市大疆创新科技有限公司 Imaging method, imaging apparatus, and computer-readable storage medium
CN112669199B (zh) * 2020-12-16 2022-06-21 影石创新科技股份有限公司 Image stitching method, computer-readable storage medium and computer device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030184778A1 (en) * 2002-03-28 2003-10-02 Sanyo Electric Co., Ltd. Image processing method, image processing apparatus, computer program product and computer memory product
CN101710932A (zh) * 2009-12-21 2010-05-19 深圳华为通信技术有限公司 Image stitching method and apparatus
CN102798350A (zh) * 2012-07-10 2012-11-28 中联重科股份有限公司 Boom deflection measurement method, apparatus and system
CN103379267A (zh) * 2012-04-16 2013-10-30 鸿富锦精密工业(深圳)有限公司 System and method for acquiring three-dimensional spatial images
CN106331527A (zh) * 2016-10-12 2017-01-11 腾讯科技(北京)有限公司 Image stitching method and apparatus

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE60140320D1 (de) * 2000-02-29 2009-12-10 Panasonic Corp Image pickup system and vehicle-mounted sensor system
KR20020025301A (ko) * 2000-09-28 2002-04-04 오길록 Apparatus and method for providing augmented reality images using panoramic images supporting multiple users
CN101521745B (zh) * 2009-04-14 2011-04-13 王广生 Multi-lens coincident-optical-center omnidirectional camera apparatus and method for panoramic shooting and broadcasting
CN101783883B (zh) * 2009-12-26 2012-08-29 华为终端有限公司 Adjustment method for common-optical-center imaging and common-optical-center imaging system
US10666860B2 (en) * 2012-09-11 2020-05-26 Ricoh Company, Ltd. Image processor, image processing method and program, and imaging system
CN104506764A (zh) * 2014-11-17 2015-04-08 南京泓众电子科技有限公司 Vehicle driving recording system based on stitched video images
CN105812640A (zh) * 2016-05-27 2016-07-27 北京伟开赛德科技发展有限公司 Spherical panoramic camera apparatus and video image transmission method thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030184778A1 (en) * 2002-03-28 2003-10-02 Sanyo Electric Co., Ltd. Image processing method, image processing apparatus, computer program product and computer memory product
CN101710932A (zh) * 2009-12-21 2010-05-19 深圳华为通信技术有限公司 Image stitching method and apparatus
CN103379267A (zh) * 2012-04-16 2013-10-30 鸿富锦精密工业(深圳)有限公司 System and method for acquiring three-dimensional spatial images
CN102798350A (zh) * 2012-07-10 2012-11-28 中联重科股份有限公司 Boom deflection measurement method, apparatus and system
CN106331527A (zh) * 2016-10-12 2017-01-11 腾讯科技(北京)有限公司 Image stitching method and apparatus

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111142825A (zh) * 2019-12-27 2020-05-12 杭州拓叭吧科技有限公司 Multi-screen view display method, system and electronic device
CN111142825B (zh) * 2019-12-27 2024-04-16 杭州拓叭吧科技有限公司 Multi-screen view display method, system and electronic device
CN113873220A (zh) * 2020-12-03 2021-12-31 上海飞机制造有限公司 Deviation analysis method, apparatus, system, device and storage medium
CN114554176A (zh) * 2022-01-24 2022-05-27 北京有竹居网络技术有限公司 Depth camera
CN115781665A (zh) * 2022-11-01 2023-03-14 深圳史河机器人科技有限公司 Monocular-camera-based robotic arm control method, apparatus and storage medium
CN115781665B (zh) * 2022-11-01 2023-08-08 深圳史河机器人科技有限公司 Monocular-camera-based robotic arm control method, apparatus and storage medium
CN116643393A (zh) * 2023-07-27 2023-08-25 南京木木西里科技有限公司 Processing method and system based on microscopic image deflection
CN116643393B (zh) * 2023-07-27 2023-10-27 南京木木西里科技有限公司 Processing method and system based on microscopic image deflection
CN118118645A (zh) * 2024-04-23 2024-05-31 北京工业大学 VR-based panoramic farm implementation method and apparatus

Also Published As

Publication number Publication date
CN106331527A (zh) 2017-01-11
CN106331527B (zh) 2019-05-17

Similar Documents

Publication Publication Date Title
WO2018068719A1 (fr) Image stitching method and apparatus
US10609282B2 (en) Wide-area image acquiring method and apparatus
CN109087244B (zh) 一种全景图像拼接方法、智能终端及存储介质
CN110809786B (zh) 校准装置、校准图表、图表图案生成装置和校准方法
EP2328125B1 (fr) Image splicing method and device
JP2017112602A (ja) Image calibration, stitching and depth reconstruction method for panoramic fisheye cameras, and system therefor
US20110249117A1 (en) Imaging device, distance measuring method, and non-transitory computer-readable recording medium storing a program
JP2017108387A (ja) Image calibration, stitching and depth reconstruction method for panoramic fisheye cameras, and system therefor
CN106709865B (zh) 一种深度图像合成方法及装置
US8155387B2 (en) Method and system for position determination using image deformation
US10063792B1 (en) Formatting stitched panoramic frames for transmission
JP2007192832A (ja) Fisheye camera calibration method
WO2016155110A1 (fr) Image perspective distortion correction method and system
KR102200866B1 (ko) 3D modeling method using 2D images
WO2020151268A1 (fr) Generation method for 3D asteroid dynamic map, and portable terminal
CN108282650B (zh) Naked-eye stereoscopic display method, apparatus, system and storage medium
CN109785225B (zh) Method and apparatus for image rectification
TWI615808B (zh) Panoramic real-time image processing method
JPWO2018167918A1 (ja) Projector, mapping data creation method, program, and projection mapping system
TW201342303A (zh) System and method for acquiring three-dimensional spatial images
WO2021093804A1 (fr) Omnidirectional stereo vision camera configuration system and camera configuration method
WO2018006669A1 (fr) Parallax fusion method and apparatus
TW201439664A (zh) Control method and electronic device
JP2018032991A (ja) Image display apparatus, image display method, and computer program for image display
JP6071142B2 (ja) Image conversion apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17860624

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17860624

Country of ref document: EP

Kind code of ref document: A1