WO2023236508A1 - Image stitching method and system based on a gigapixel array camera - Google Patents

Image stitching method and system based on a gigapixel array camera

Info

Publication number
WO2023236508A1
WO2023236508A1 · PCT/CN2022/141925 · CN2022141925W
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature point
spliced
moving speed
gigapixel
Prior art date
Application number
PCT/CN2022/141925
Other languages
English (en)
Chinese (zh)
Inventor
袁潮
邓迪旻
温建伟
Original Assignee
北京拙河科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京拙河科技有限公司
Publication of WO2023236508A1 publication Critical patent/WO2023236508A1/fr


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/32Indexing scheme for image data processing or generation, in general involving image mosaicing

Definitions

  • the present application relates to the field of image processing, and more specifically to an image splicing method and system based on a gigapixel array camera.
  • the array camera replaces the shooting effect of a single large lens with multiple small lenses; its principle is to control multiple cameras simultaneously for shooting. Compared with a traditional camera, the gigapixel array camera has a wider field of view, is more convenient for shooting, produces larger photos, and is smaller in physical size.
  • Image registration is the key to image splicing. Image registration aims to find the same area in two images to calculate the coordinate changes between images. The accuracy of image registration directly determines the quality of image splicing.
  • image registration is usually achieved by performing grayscale processing, angle transformation, edge processing, and the like on the image itself, ignoring the image deviation caused by movement of the shooting target and the shooting device, which results in low image stitching accuracy.
  • the purpose of the embodiments of the present invention is to provide an image splicing method and system based on a gigapixel array camera, which determines an offset rate from the movement of the gigapixel array camera and the target object and corrects the feature points through the offset rate, avoiding the image deviation caused by movement and improving the efficiency and accuracy of image stitching.
  • an image splicing method based on a gigapixel array camera, including: acquiring image data captured by a gigapixel array camera, the image data being a first image to be spliced and a second image to be spliced of a target object; performing feature point extraction on the first image to be spliced and the second image to be spliced respectively to determine the first feature points and the second feature points; obtaining the first moving speed of the gigapixel array camera; obtaining the second moving speed of the target object; calculating an offset rate based on the first moving speed and the second moving speed; adjusting the first feature points and the second feature points based on the offset rate to obtain first corrected feature points and second corrected feature points; matching the first corrected feature points and the second corrected feature points to obtain multiple optimal feature point pairs; and screening the multiple optimal feature point pairs to splice the first image to be spliced and the second image to be spliced into a total image.
  • the first image to be spliced and the second image to be spliced are two consecutive frame images.
  • performing feature point extraction on the first image to be spliced and the second image to be spliced respectively, and determining the first feature points and the second feature points, includes: for each pixel on the first image to be spliced or the second image to be spliced, taking the pixel as the center, calculating the grayscale difference of its n adjacent pixels; if more than n/2 of the adjacent pixels have a grayscale difference that meets the preset condition, the pixel is determined to be a first feature point or a second feature point.
  • calculating the offset rate based on the first moving speed and the second moving speed includes: calculating a first offset s1 and a second offset s2 based on the first moving speed and the second moving speed respectively; if the gigapixel array camera and the target object move in the same direction, the offset rate R is calculated by the following formula:
  • otherwise, the offset rate R is calculated by the following formula:
  • t2 represents the shooting time of the second image to be spliced
  • t1 represents the shooting time of the first image to be spliced
  • vx1 represents the moving speed of the gigapixel array camera at time t1
  • vd1 represents the moving speed of the target object at time t1.
  • calculating the first offset s1 and the second offset s2 based on the first moving speed and the second moving speed respectively includes: calculating the first offset s1 and the second offset s2 through the following formulas:
  • vx1 represents the moving speed of the gigapixel array camera at time t1
  • vx2 represents the moving speed of the gigapixel array camera at time t2
  • vd1 represents the moving speed of the target object at time t1
  • vd2 represents the moving speed of the target object at time t2
  • Δt represents the shooting time difference between the first image to be spliced and the second image to be spliced.
  • adjusting the first feature point based on the offset rate to obtain a first corrected feature point includes: constructing a three-dimensional coordinate system; obtaining the coordinate value (x1, y1, z1) of any one of the plurality of first feature points; and performing coordinate transformation on the feature point through the following formula:
  • fx and fy represent the focal length in the x-axis and y-axis directions, respectively, in millimeters
  • cx and cy respectively represent the optical center, in pixels
  • Q represents the rotation matrix
  • T represents the translation matrix
  • [Q T] represents the external parameter matrix of the gigapixel array camera
  • R represents the offset rate, and the transformation yields the corrected coordinate value of the feature point; all of the first feature points are traversed and the above steps repeated; the first corrected feature points are then mapped based on the corrected coordinate values of all the first feature points.
  • adjusting the second feature point based on the offset rate to obtain a second modified feature point includes: obtaining the coordinate value (x2, y2, z2) of any one of the plurality of second feature points; and performing coordinate transformation on the feature point through the following formula:
  • fx and fy represent the focal length in the x-axis and y-axis directions, respectively, in millimeters
  • cx and cy respectively represent the optical center of the camera, in pixels
  • Q represents the rotation matrix
  • T represents the translation matrix
  • [Q T] represents the external parameter matrix of the gigapixel array camera
  • R represents the offset rate, and the transformation yields the corrected coordinate value of the feature point; all of the second feature points are traversed and the above steps repeated; the second corrected feature points are then mapped based on the corrected coordinate values of all the second feature points.
  • matching the first modified feature points and the second modified feature points to obtain multiple optimal feature point pairs includes: using the Hamming distance to match each feature point among the first modified feature points and the second modified feature points, obtaining the multiple optimal feature point pairs with the shortest Hamming distances.
  • the splicing of the first image to be spliced and the second image to be spliced to obtain a total image includes: using a weighted fusion algorithm to obtain a spliced total image.
  • an image splicing system based on a gigapixel array camera, including:
  • an image data acquisition module, used to acquire image data captured by the gigapixel array camera; the image data is a first image to be spliced and a second image to be spliced of a target object;
  • a feature point determination module, used to extract feature points from the first image to be spliced and the second image to be spliced respectively, and determine the first feature points and the second feature points;
  • a speed determination module, used to obtain the first moving speed of the gigapixel array camera and the second moving speed of the target object;
  • an offset rate calculation module, used to calculate the offset rate based on the first moving speed and the second moving speed;
  • a feature point correction module, used to adjust the first feature points and the second feature points based on the offset rate to obtain first corrected feature points and second corrected feature points.
  • an optimal feature point pair acquisition module used to match the first corrected feature point and the second corrected feature point to obtain multiple optimal feature point pairs
  • an image splicing module, used to screen the multiple optimal feature point pairs and splice the first image to be spliced and the second image to be spliced to obtain a total image.
  • the first image to be spliced and the second image to be spliced are two consecutive frame images.
  • the feature point determination module is further configured to: for each pixel on the first image to be spliced or the second image to be spliced, with the pixel as the center, calculate the grayscale difference of its n adjacent pixels; if there are more than n/2 adjacent pixels whose grayscale difference meets the preset condition, the pixel is determined to be a first feature point or a second feature point.
  • the offset rate calculation module is further configured to: calculate the first offset s1 and the second offset s2 based on the first moving speed and the second moving speed respectively; if the gigapixel array camera and the target object move in the same direction, the offset rate R is calculated by the following formula:
  • otherwise, the offset rate R is calculated by the following formula:
  • t2 represents the shooting time of the second image to be spliced
  • t1 represents the shooting time of the first image to be spliced
  • vx1 represents the moving speed of the gigapixel array camera at time t1
  • vd1 represents the moving speed of the target object at time t1.
  • calculating the first offset s1 and the second offset s2 based on the first moving speed and the second moving speed respectively includes: calculating the first offset s1 and the second offset s2 through the following formulas:
  • vx1 represents the moving speed of the gigapixel array camera at time t1
  • vx2 represents the moving speed of the gigapixel array camera at time t2
  • vd1 represents the moving speed of the target object at time t1
  • vd2 represents the moving speed of the target object at time t2
  • Δt represents the shooting time difference between the first image to be spliced and the second image to be spliced.
  • the feature point correction module is further used to: construct a three-dimensional coordinate system; obtain the coordinate value (x1, y1, z1) of any one of the plurality of first feature points; and perform coordinate transformation on the feature point through the following formula:
  • fx and fy represent the focal length in the x-axis and y-axis directions, respectively, in millimeters
  • cx and cy respectively represent the optical center, in pixels
  • Q represents the rotation matrix
  • T represents the translation matrix
  • [Q T] represents the external parameter matrix of the gigapixel array camera
  • R represents the offset rate, and the transformation yields the corrected coordinate value of the feature point; all of the first feature points are traversed and the above steps repeated; the first corrected feature points are then mapped based on the corrected coordinate values of all the first feature points.
  • the feature point correction module is further configured to: obtain the coordinate value (x2, y2, z2) of any one of the plurality of second feature points; and perform coordinate transformation on the feature point through the following formula:
  • fx and fy represent the focal length in the x-axis and y-axis directions, respectively, in millimeters
  • cx and cy respectively represent the optical center of the camera, in pixels
  • Q represents the rotation matrix
  • T represents the translation matrix
  • [Q T] represents the external parameter matrix of the gigapixel array camera
  • R represents the offset rate, and the transformation yields the corrected coordinate value of the feature point; all of the second feature points are traversed and the above steps repeated; the second corrected feature points are then mapped based on the corrected coordinate values of all the second feature points.
  • the optimal feature point pair acquisition module is further configured to: use the Hamming distance to match each feature point among the first modified feature points and the second modified feature points, obtaining the multiple optimal feature point pairs with the shortest Hamming distances.
  • the image splicing module is further used to: use a weighted fusion algorithm to obtain the spliced total image.
  • the present invention uses a gigapixel array camera to capture a target object and obtain a first image to be spliced and a second image to be spliced; feature point extraction is performed on the first image to be spliced and the second image to be spliced respectively to determine the first feature points and the second feature points; the first moving speed of the gigapixel array camera is obtained; the second moving speed of the target object is obtained; the offset rate is calculated based on the first moving speed and the second moving speed; the first feature points and the second feature points are adjusted based on the offset rate to obtain first modified feature points and second modified feature points; the first modified feature points and the second modified feature points are matched to obtain multiple optimal feature point pairs; and the multiple optimal feature point pairs are screened to splice the first image to be spliced and the second image to be spliced into a total image.
  • the calculation of the offset rate takes into account the errors that the moving direction and speed of the camera and the subject may introduce into image stitching.
  • a three-dimensional coordinate system is introduced to combine the camera parameters with the offset rate and correct the feature points of the two images; the image data processed in this way is of higher quality, which can improve the efficiency and accuracy of image stitching.
  • Figure 1 is a schematic flow chart of an image stitching method based on a gigapixel array camera provided by an embodiment of the present application
  • Figure 2 is a schematic structural diagram of an image stitching system based on a gigapixel array camera provided by an embodiment of the present application.
  • Embodiments of the present application provide an image splicing method and system based on a gigapixel array camera, which include photographing a target object with a gigapixel array camera to acquire a first image to be spliced and a second image to be spliced; the first image to be spliced and the second image to be spliced are then processed separately.
  • the present invention can improve the efficiency and accuracy of image stitching.
  • the image splicing method and system based on a gigapixel array camera can be integrated into electronic equipment, and the electronic equipment can be terminals, servers, and other equipment.
  • the terminal can be a light field camera, a vehicle-mounted camera, a mobile phone, a tablet, a smart Bluetooth device, a laptop, or a personal computer (PC);
  • the server can be a single server or a server cluster composed of multiple servers.
  • the above examples should not be construed as limitations of this application.
  • Figure 1 shows a schematic flowchart of an image stitching method based on a gigapixel array camera provided by an embodiment of the present application. Please refer to Figure 1, which specifically includes the following steps:
  • the gigapixel array camera is a cross-scale imaging camera that combines a main lens and an array of N micro lenses.
  • the micro lenses can form different focal lengths according to different optical path designs; when multiple lenses work in parallel, they can capture different images from near and far.
  • the first image to be spliced and the second image to be spliced may be two consecutive frame images.
  • the first image to be spliced and the second image to be spliced may be two frames of images acquired within a preset time interval; for example, if the starting time is 17:00:00 and the preset time interval is 5 seconds, the image taken at 17:00:00 is used as the first image to be spliced and the image taken at 17:00:05 is used as the second image to be spliced.
  • the computer device receives the image data collected by the gigapixel array camera; the image data can be transmitted through fifth-generation mobile communication technology (5G) or through a Wi-Fi network.
  • the photographed content can be portraits, large animals, small animals, vehicles, plants, and so on.
  • S120 Extract feature points from the first image to be spliced and the second image to be spliced respectively, and determine the first feature point and the second feature point.
  • for each pixel, the grayscale difference of the n adjacent pixels on a circle with a radius of d can be calculated; if there are more than n/2 adjacent pixels whose grayscale difference meets the preset condition, the pixel is determined to be a first feature point or a second feature point.
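The text does not name the detector, but the described test (n pixels on a circle of radius d, more than n/2 exceeding a grayscale threshold) matches a FAST-style corner test. A minimal sketch, assuming n = 16 and d = 3 (the patent leaves both unspecified):

```python
import numpy as np

# Offsets of the 16 pixels on a Bresenham circle of radius 3, as in the
# FAST detector; n = 16 and d = 3 are assumptions, not the patent's values.
CIRCLE_16 = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
             (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def detect_feature_points(gray, threshold=20):
    """Mark a pixel as a feature point when more than n/2 of the n = 16
    circle neighbours differ from it by more than `threshold` gray levels."""
    h, w = gray.shape
    n = len(CIRCLE_16)
    points = []
    for y in range(3, h - 3):
        for x in range(3, w - 3):
            center = int(gray[y, x])
            diffs = sum(
                1 for dy, dx in CIRCLE_16
                if abs(int(gray[y + dy, x + dx]) - center) > threshold
            )
            if diffs > n // 2:  # the "more than n/2" preset condition
                points.append((x, y))
    return points
```

In practice this test would be followed by non-maximum suppression; the sketch keeps only the thresholding step the text describes.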
  • step S150 may specifically include the following steps:
  • vx1 represents the moving speed of the gigapixel array camera at time t1
  • vx2 represents the moving speed of the gigapixel array camera at time t2
  • vd1 represents the moving speed of the target object at time t1
  • vd2 represents the moving speed of the target object at time t2
  • Δt represents the shooting time difference between the first image to be spliced and the second image to be spliced.
  • if the gigapixel array camera and the target object move in the same direction, the offset rate R is calculated by the following formula:
  • otherwise, the offset rate R is calculated by the following formula:
  • t2 represents the shooting time of the second image to be spliced
  • t1 represents the shooting time of the first image to be spliced
  • vx1 represents the moving speed of the gigapixel array camera at time t1
  • vd1 represents the moving speed of the target object at time t1.
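The formula images are not reproduced in the published text. Purely as an illustration of the stated variable definitions, the sketch below assumes displacement = mean speed × Δt and a normalized relative-displacement rate; both forms are hypothetical reconstructions, not the patent's actual formulas:

```python
def offsets_and_rate(vx1, vx2, vd1, vd2, dt):
    """Hypothetical reconstruction: the patent's own formulas are images
    that did not survive extraction. Assumes displacement = mean speed * dt
    and a rate normalized by total displacement."""
    s1 = 0.5 * (vx1 + vx2) * dt  # camera offset over the interval (assumed form)
    s2 = 0.5 * (vd1 + vd2) * dt  # target offset over the interval (assumed form)
    # assumed form: relative displacement normalized by total displacement
    r = abs(s1 - s2) / (abs(s1) + abs(s2)) if (s1 or s2) else 0.0
    return s1, s2, r
```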
  • step S160 may specifically include the following steps:
  • fx and fy represent the focal length in the x-axis and y-axis directions, respectively, in millimeters
  • cx and cy respectively represent the optical center, in pixels
  • Q represents the rotation matrix
  • T represents the translation matrix
  • [Q T] represents the external parameter matrix of the gigapixel array camera
  • R represents the offset rate, and the transformation yields the corrected coordinate value of the feature point.
  • fx and fy represent the focal length in the x-axis and y-axis directions, respectively, in millimeters
  • cx and cy respectively represent the optical center of the camera, in pixels
  • Q represents the rotation matrix
  • T represents the translation matrix
  • [Q T] represents the external parameter matrix of the gigapixel array camera
  • R represents the offset rate, and the transformation yields the corrected coordinate value of the feature point.
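The transformation formula itself is an image not reproduced here. A sketch of a standard pinhole projection built from the named quantities (intrinsic matrix from fx, fy, cx, cy; extrinsic matrix [Q T]), with the offset-rate scaling applied in an assumed position:

```python
import numpy as np

def correct_feature_point(p, fx, fy, cx, cy, Q, T, R):
    """Project a 3-D feature point (x, y, z) through the intrinsic matrix K
    and the extrinsic matrix [Q T], then scale by the offset rate R.
    Where R enters the formula is an assumption; the patent's exact
    equation is not reproduced in the published text."""
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])
    QT = np.hstack([Q, np.asarray(T, float).reshape(3, 1)])  # 3x4 [Q T]
    ph = np.append(np.asarray(p, float), 1.0)  # homogeneous (x, y, z, 1)
    uvw = K @ QT @ ph
    uv = uvw[:2] / uvw[2]                      # normalise to pixel coordinates
    return R * uv                              # offset-rate correction (assumed form)
```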
  • the calculation of the offset rate takes into account the errors that may be caused by the moving direction and speed of the camera and the subject in image stitching.
  • This implementation method introduces a three-dimensional coordinate system and innovatively combines the camera's internal parameter matrix and external parameter matrix with the offset rate caused by movement to correct the feature points of the two images and reduce the occurrence of splicing errors.
  • Hamming distance is used to match each feature point in the first modified feature point and the second modified feature point to obtain a plurality of optimal feature point pairs with the shortest Hamming distance.
  • the required number of optimal feature point pairs can be preset, or the Hamming distance threshold can be preset, which is not specifically limited here.
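Brute-force matching by Hamming distance, with a preset distance threshold as the text suggests, can be sketched as follows; representing each descriptor as a binary integer (ORB-style) is an assumption:

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors encoded as ints."""
    return bin(a ^ b).count("1")

def match_features(desc1, desc2, max_dist=30):
    """For each first-image descriptor, keep the second-image descriptor
    with the shortest Hamming distance, subject to a preset threshold
    (both the pair count and the threshold are configurable, per the text)."""
    pairs = []
    for i, d1 in enumerate(desc1):
        j, dist = min(((j, hamming(d1, d2)) for j, d2 in enumerate(desc2)),
                      key=lambda t: t[1])
        if dist <= max_dist:
            pairs.append((i, j, dist))
    return pairs
```

A production matcher would also apply cross-checking or a ratio test before accepting pairs; the sketch shows only the shortest-distance criterion named in the text.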
  • a weighted fusion algorithm can be used to obtain the total image after stitching.
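The text names a weighted fusion algorithm without detail; linear feathering across the overlap region is a common form, sketched here under that assumption:

```python
import numpy as np

def weighted_fuse(img1, img2):
    """Weighted fusion over an aligned overlap region: weights ramp
    linearly from 1 to 0 across the width, so img1 dominates on the left
    and img2 on the right (one common form of weighted fusion)."""
    assert img1.shape == img2.shape
    h, w = img1.shape[:2]
    alpha = np.linspace(1.0, 0.0, w).reshape(1, w)  # column-wise weight ramp
    if img1.ndim == 3:
        alpha = alpha[..., None]                    # broadcast over channels
    fused = alpha * img1.astype(float) + (1 - alpha) * img2.astype(float)
    return fused.astype(img1.dtype)
```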
  • this embodiment also provides an image stitching system based on a gigapixel array camera. As shown in Figure 2, the system includes:
  • the image data acquisition module 210 is used to acquire image data captured by a gigapixel array camera; the image data is the first image to be spliced and the second image to be spliced of the target object.
  • the feature point determination module 220 is configured to extract feature points from the first image to be spliced and the second image to be spliced respectively, and determine the first feature point and the second feature point.
  • the speed determination module 230 is configured to obtain the first moving speed and the second moving speed of the gigapixel array camera and the target object respectively.
  • the deviation rate calculation module 240 is configured to calculate a deviation rate based on the first moving speed and the second moving speed.
  • the feature point correction module 250 is configured to adjust the first feature point and the second feature point based on the offset rate to obtain first corrected feature points and second corrected feature points.
  • the optimal feature point pair acquisition module 260 is used to match the first modified feature point and the second modified feature point to obtain multiple optimal feature point pairs.
  • the image splicing module 270 is used to screen the multiple optimal feature point pairs, splice the first image to be spliced and the second image to be spliced, and obtain a total image.
  • the first image to be spliced and the second image to be spliced are two consecutive frame images.
  • the feature point determination module 220 is further configured to: for each pixel on the first image to be spliced or the second image to be spliced, with the pixel as the center, calculate the grayscale difference of its n adjacent pixels; if there are more than n/2 adjacent pixels whose grayscale difference meets the preset condition, the pixel is determined to be a first feature point or a second feature point.
  • the offset rate calculation module 240 is further configured to: calculate the first offset s1 and the second offset s2 based on the first moving speed and the second moving speed respectively; if the gigapixel array camera and the target object move in the same direction, the offset rate R is calculated by the following formula:
  • otherwise, the offset rate R is calculated by the following formula:
  • t2 represents the shooting time of the second image to be spliced
  • t1 represents the shooting time of the first image to be spliced
  • vx1 represents the moving speed of the gigapixel array camera at time t1
  • vd1 represents the moving speed of the target object at time t1.
  • calculating the first offset s1 and the second offset s2 based on the first moving speed and the second moving speed respectively includes: calculating the first offset s1 and the second offset s2 through the following formulas:
  • vx1 represents the moving speed of the gigapixel array camera at time t1
  • vx2 represents the moving speed of the gigapixel array camera at time t2
  • vd1 represents the moving speed of the target object at time t1
  • vd2 represents the moving speed of the target object at time t2
  • Δt represents the shooting time difference between the first image to be spliced and the second image to be spliced.
  • the feature point correction module 250 is further used to: construct a three-dimensional coordinate system; obtain the coordinate value (x1, y1, z1) of any one of the plurality of first feature points; and perform coordinate transformation on the feature point through the following formula:
  • fx and fy represent the focal length in the x-axis and y-axis directions, respectively, in millimeters
  • cx and cy respectively represent the optical center, in pixels
  • Q represents the rotation matrix
  • T represents the translation matrix
  • [Q T] represents the external parameter matrix of the gigapixel array camera
  • R represents the offset rate, and the transformation yields the corrected coordinate value of the feature point; all of the first feature points are traversed and the above steps repeated; the first corrected feature points are then mapped based on the corrected coordinate values of all the first feature points.
  • the feature point correction module 250 is further configured to: obtain the coordinate value (x2, y2, z2) of any one of the plurality of second feature points; and perform coordinate transformation on the feature point through the following formula:
  • fx and fy represent the focal length in the x-axis and y-axis directions, respectively, in millimeters
  • cx and cy respectively represent the optical center of the camera, in pixels
  • Q represents the rotation matrix
  • T represents the translation matrix
  • [Q T] represents the external parameter matrix of the gigapixel array camera
  • R represents the offset rate, and the transformation yields the corrected coordinate value of the feature point; all of the second feature points are traversed and the above steps repeated; the second corrected feature points are then mapped based on the corrected coordinate values of all the second feature points.
  • the optimal feature point pair acquisition module 260 is further configured to use the Hamming distance to match each feature point among the first modified feature points and the second modified feature points, obtaining the multiple optimal feature point pairs with the shortest Hamming distances.
  • the image splicing module 270 is further configured to use a weighted fusion algorithm to obtain a spliced total image.
  • this system takes into account the errors that the moving direction and speed of the camera and the subject may introduce into image stitching; it also introduces a three-dimensional coordinate system, combines the camera parameters with the offset rate, and corrects the feature points of the two images, which can improve the efficiency and accuracy of image stitching.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or can be integrated into another system, or some features can be ignored, or not implemented.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some communication interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or they may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in the embodiment provided by this application can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit.
  • if the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the prior art, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of this application.
  • the aforementioned storage media include: USB flash drives, mobile hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Mathematics (AREA)
  • Computing Systems (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention is specifically applied to the field of image processing. The present invention relates to an image stitching method and system based on a gigapixel array camera. The method comprises the following steps: photographing a target object with a gigapixel array camera to acquire a first image to be stitched and a second image to be stitched; performing feature point extraction on the first image and the second image respectively to determine first feature points and second feature points; acquiring a first moving speed of the gigapixel array camera; acquiring a second moving speed of the target object; calculating an offset rate based on the first moving speed and the second moving speed; adjusting the first feature points and the second feature points based on the offset rate to obtain first corrected feature points and second corrected feature points; matching the first corrected feature points with the second corrected feature points to obtain a plurality of optimal feature point pairs; and screening the plurality of optimal feature point pairs and stitching the first image and the second image to obtain a complete image. In this way, the present invention can improve the efficiency and accuracy of image stitching.
PCT/CN2022/141925 2022-06-07 2022-12-26 Image stitching method and system based on a gigapixel array camera WO2023236508A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210639481.5 2022-06-07
CN202210639481.5A CN114841862B (zh) 2022-06-07 2022-06-07 Image stitching method and system based on a gigapixel array camera

Publications (1)

Publication Number Publication Date
WO2023236508A1 (fr) 2023-12-14

Family

ID=82573495

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/141925 WO2023236508A1 (fr) 2022-06-07 2022-12-26 Image stitching method and system based on a gigapixel array camera

Country Status (2)

Country Link
CN (1) CN114841862B (fr)
WO (1) WO2023236508A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114841862B (zh) * 2022-06-07 2023-02-03 Beijing Zhuohe Technology Co., Ltd. Image stitching method and system based on a gigapixel array camera
CN115829843B (zh) * 2023-01-09 2023-05-12 Shenzhen SmartMore Information Technology Co., Ltd. Image stitching method, apparatus, computer device, and storage medium
CN118014828B (zh) * 2023-12-19 2024-08-20 Suzhou Yiji Intelligent Technology Co., Ltd. Image stitching method, apparatus, and system for array cameras

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170006219A1 (en) * 2015-06-30 2017-01-05 Gopro, Inc. Image stitching in a multi-camera array
CN107945113A (zh) * 2017-11-17 2018-04-20 Beijing Tianrui Kongjian Technology Co., Ltd. Correction method for local image stitching misalignment
CN113891111A (zh) * 2021-09-29 2022-01-04 Beijing Zhuohe Technology Co., Ltd. Live streaming method, apparatus, medium, and device for gigapixel video
CN114418839A (zh) * 2021-12-09 2022-04-29 Zhejiang Dahua Technology Co., Ltd. Image stitching method, electronic device, and computer-readable storage medium
CN114841862A (zh) * 2022-06-07 2022-08-02 Beijing Zhuohe Technology Co., Ltd. Image stitching method and system based on a gigapixel array camera

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10547825B2 (en) * 2014-09-22 2020-01-28 Samsung Electronics Company, Ltd. Transmission of three-dimensional video
EP3323109B1 (fr) * 2015-07-16 2022-03-23 Google LLC Camera pose estimation for mobile devices
JP6741533B2 (ja) * 2016-09-26 2020-08-19 Canon Inc. Imaging control apparatus and control method therefor
JP2019164136A (ja) * 2018-03-19 2019-09-26 Ricoh Co., Ltd. Information processing device, imaging device, mobile body, image processing system, and information processing method
CN108566513A (zh) * 2018-03-28 2018-09-21 Shenzhen Zhendi Information Technology Co., Ltd. Method for photographing a moving target with an unmanned aerial vehicle
CN110706257B (zh) * 2019-09-30 2022-07-22 Beijing Megvii Technology Co., Ltd. Method for identifying valid feature point pairs, and method and apparatus for determining camera state
CN112866542B (zh) * 2019-11-12 2022-08-12 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Focus tracking method and apparatus, electronic device, and computer-readable storage medium
CN111260542A (zh) * 2020-01-17 2020-06-09 The 14th Research Institute of China Electronics Technology Group Corporation SAR image stitching method based on sub-block registration

Also Published As

Publication number Publication date
CN114841862B (zh) 2023-02-03
CN114841862A (zh) 2022-08-02

Similar Documents

Publication Publication Date Title
WO2023236508A1 (fr) Image stitching method and system based on a gigapixel array camera
CN111147741B (zh) Anti-shake method and apparatus based on focusing processing, electronic device, and storage medium
US11019330B2 (en) Multiple camera system with auto recalibration
WO2020259474A1 (fr) Focus tracking method and apparatus, terminal device, and computer-readable storage medium
KR101657039B1 (ko) Image processing apparatus, image processing method, and imaging system
TWI808987B (zh) Five-dimensional video stabilization apparatus and method fusing camera and gyroscope
CN109003311B (zh) Calibration method for a fisheye lens
US10915998B2 (en) Image processing method and device
WO2020088133A1 (fr) Image processing method and apparatus, electronic device, and computer-readable storage medium
US10733705B2 (en) Information processing device, learning processing method, learning device, and object recognition device
WO2021139176A1 (fr) Pedestrian trajectory tracking method and apparatus based on binocular camera calibration, computer device, and storage medium
JP2019510234A (ja) Depth information acquisition method and apparatus, and image acquisition device
WO2017020150A1 (fr) Image processing method, device, and camera apparatus
JP2017112602A (ja) Image calibration, stitching, and depth reconstruction method for a panoramic fisheye camera, and system therefor
JP2017108387A (ja) Image calibration, stitching, and depth reconstruction method for a panoramic fisheye camera, and system therefor
JP6577703B2 (ja) Image processing apparatus, image processing method, program, and storage medium
CN112005548B (zh) Method for generating depth information and electronic device supporting the method
JPWO2018235163A1 (ja) Calibration apparatus, calibration chart, chart pattern generation apparatus, and calibration method
TWI761684B (zh) Calibration method of an image device, and related image device and computing device
WO2019232793A1 (fr) Dual-camera calibration method, electronic device, and computer-readable storage medium
JP2010041419A (ja) Image processing apparatus, image processing program, image processing method, and electronic device
JP2017017689A (ja) Spherical video imaging system and program
CN113875219B (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
WO2017128750A1 (fr) Image acquisition method and image acquisition device
JP7312026B2 (ja) Image processing apparatus, image processing method, and program

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22945636

Country of ref document: EP

Kind code of ref document: A1