WO2022262273A1 - Optical center alignment detection method and device, storage medium, and electronic device - Google Patents

Optical center alignment detection method and device, storage medium, and electronic device

Info

Publication number
WO2022262273A1
WO2022262273A1 (application PCT/CN2022/072968, CN2022072968W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
matrix
feature point
feature
pairs
Prior art date
Application number
PCT/CN2022/072968
Other languages
English (en)
French (fr)
Inventor
饶童
胡洋
周杰
李伟
Original Assignee
贝壳技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 贝壳技术有限公司 filed Critical 贝壳技术有限公司
Publication of WO2022262273A1

Links

Images

Classifications

    • G06T3/14
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/32Indexing scheme for image data processing or generation, in general involving image mosaicing

Definitions

  • The present disclosure relates to the technical field of image stitching, and in particular to an optical center alignment detection method and device, a storage medium, and an electronic device.
  • The disclosure provides an optical center alignment detection method and device, a storage medium, and an electronic device.
  • A method for detecting optical center alignment is provided, including:
  • Determining an intrinsic matrix and a rotation matrix of the imaging device based on the first mapping matrix includes:
  • The first mapping matrix is processed with a global optimization algorithm to determine the intrinsic matrix and the rotation matrix of the imaging device, where the rotation matrix represents the difference between the pose of the imaging device when capturing the first image and its pose when capturing the second image.
  • Determining, based on the intrinsic matrix and the rotation matrix, whether the optical centers of the imaging device are aligned when capturing the first image and the second image includes:
  • Determining, based on the intrinsic matrix and the rotation matrix, whether multiple feature point pairs in the first image and the second image meet the set condition includes:
  • Determining the first mapping matrix between the first image and the second image acquired by the imaging device at different poses includes:
  • A feature point pair set is determined, where the feature point pair set includes multiple feature point pairs, and each feature point pair includes a first feature point in the first image and a second feature point in the second image that correspond to each other;
  • A first mapping matrix between the first image and the second image is determined based on the feature point pair set.
  • Determining the feature point pair set based on the first image and the second image collected by the imaging device at different poses includes:
  • The feature point pair set is obtained based on the multiple feature point pairs.
  • Determining multiple feature point pairs in the first image and the second image includes:
  • An optical center alignment detection device is provided, including:
  • a mapping matrix determination module configured to determine a first mapping matrix between the first image and the second image collected by the imaging device at different poses;
  • a matrix estimation module configured to determine an intrinsic matrix and a rotation matrix of the imaging device based on the first mapping matrix;
  • an alignment verification module configured to determine, based on the intrinsic matrix and the rotation matrix, whether the optical centers of the imaging device are aligned when capturing the first image and the second image.
  • The matrix estimation module is specifically configured to process the first mapping matrix with a global optimization algorithm and determine the intrinsic matrix and the rotation matrix of the imaging device, where the rotation matrix represents the difference between the pose of the imaging device when capturing the first image and its pose when capturing the second image.
  • The alignment verification module is specifically configured to determine, based on the intrinsic matrix and the rotation matrix, whether multiple feature point pairs in the first image and the second image meet the set condition, and in response to the multiple feature point pairs meeting the set condition, determine that the optical centers of the imaging device are aligned when capturing the first image and the second image.
  • The alignment verification module is further configured to stitch the optical-center-aligned first image and second image based on the multiple feature point pairs.
  • When the alignment verification module determines, based on the intrinsic matrix and the rotation matrix, whether multiple feature point pairs in the first image and the second image meet the set condition, it is configured to solve for a second mapping matrix based on the intrinsic matrix and the rotation matrix, and determine, based on the correspondence between the second mapping matrix and the first mapping matrix, whether the multiple feature point pairs meet the set condition.
  • The alignment verification module is further configured to, in response to the multiple feature point pairs not meeting the set condition, determine that the optical centers of the imaging device are not aligned when capturing the first image and the second image.
  • The mapping matrix determination module includes:
  • a set determination unit configured to determine a feature point pair set based on the first image and the second image collected by the imaging device at different poses, where the feature point pair set includes multiple feature point pairs, and each feature point pair includes a first feature point in the first image and a second feature point in the second image that correspond to each other;
  • a feature point mapping unit configured to determine a first mapping matrix between the first image and the second image based on the feature point pair set.
  • The set determination unit is specifically configured to determine, from multiple images collected by the imaging device at different poses, the first image and the second image that were collected consecutively; determine multiple feature point pairs in the first image and the second image; and obtain the feature point pair set based on the multiple feature point pairs.
  • When the set determination unit determines multiple feature point pairs in the first image and the second image, it is configured to extract feature points from the first image and the second image respectively, to obtain multiple first feature points corresponding to the first image and multiple second feature points corresponding to the second image; determine correspondences between the first feature points and the second feature points based on the feature descriptor corresponding to each of the first feature points and the feature descriptor corresponding to each of the second feature points; and determine one feature point pair from each first feature point and second feature point that correspond to each other, to obtain the multiple feature point pairs.
  • A computer-readable storage medium stores a computer program for executing the optical center alignment detection method of any one of the above embodiments.
  • An electronic device includes:
  • the processor is configured to read the executable instructions from the memory and execute the instructions to implement any of the optical center alignment detection methods described above.
  • A first mapping matrix between the first image and the second image collected by the imaging device at different poses is determined; an intrinsic matrix and a rotation matrix of the imaging device are determined based on the first mapping matrix; and based on the intrinsic matrix and the rotation matrix, it is determined whether the imaging device was displaced when capturing the first image and the second image. Through the intrinsic matrix and the rotation matrix, the disclosure implements a verification method for optical center alignment that relies only on the scene.
  • Fig. 1 is a schematic flowchart of a method for detecting alignment of optical centers provided by an exemplary embodiment of the present disclosure.
  • FIG. 2 is a schematic flowchart of step 106 in the embodiment shown in FIG. 1 of the present disclosure.
  • FIG. 3 is a schematic flowchart of step 102 in the embodiment shown in FIG. 1 of the present disclosure.
  • Fig. 4 is a schematic structural diagram of an optical center alignment detection device provided by an exemplary embodiment of the present disclosure.
  • Fig. 5 is a structural diagram of an electronic device provided by an exemplary embodiment of the present disclosure.
  • "Plural" may refer to two or more, and "at least one" may refer to one, two, or more.
  • The term "and/or" in the present disclosure merely describes an association between associated objects and indicates three possible relationships; for example, A and/or B may mean: A alone, both A and B, or B alone.
  • The character "/" in the present disclosure generally indicates an "or" relationship between the surrounding objects.
  • Embodiments of the present disclosure may be applied to electronic devices such as terminal devices, computer systems, servers, etc., which may operate with numerous other general purpose or special purpose computing system environments or configurations.
  • Examples of well known terminal devices, computing systems, environments and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick client computers, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, and distributed cloud computing technology environments including any of the foregoing, among others.
  • Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by the computer system.
  • program modules may include routines, programs, objects, components, logic, data structures, etc., that perform particular tasks or implement particular abstract data types.
  • the computer system/server can be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • program modules may be located in both local and remote computing system storage media including storage devices.
  • Fig. 1 is a schematic flowchart of a method for detecting alignment of optical centers provided by an exemplary embodiment of the present disclosure. This embodiment can be applied to an electronic device and, as shown in Figure 1, includes the following steps:
  • Step 102: determine a first mapping matrix between the first image and the second image acquired by the imaging device at different poses.
  • Optionally, the first image and the second image in this embodiment may be two consecutive images among multiple images obtained as the imaging device rotates continuously on a pan/tilt head, or two images that are not consecutive in the shooting sequence but share an overlapping region. To stitch a panorama from the multiple images collected by the imaging device, it is necessary to determine whether the first image and the second image can be stitched. The first mapping matrix is usually a 3×3 matrix that describes the mapping relationship between the first image and the second image.
  • Step 104: determine an intrinsic matrix and a rotation matrix of the imaging device based on the first mapping matrix.
  • Optionally, the intrinsic matrix may include the optical center of the imaging device, the digital focal length, camera distortion parameters, and so on; together, these parameters describe how light propagates after entering the imaging device and how it is imaged.
  • In the embodiments of the present disclosure, the rotation matrix specifically refers to the pose change of the imaging device between capturing the first image and capturing the second image, i.e., the rotation angles on three degrees of freedom (pitch, roll, and yaw); in a spatial coordinate system, the rotation about each axis is independent of the others, each being a single-degree-of-freedom variable.
  • Step 106: based on the intrinsic matrix and the rotation matrix, determine whether the optical centers of the imaging device are aligned when capturing the first image and the second image.
  • Optionally, whether the optical centers are aligned is determined by whether the imaging device was displaced between the captures of the first image and the second image.
  • When the optical centers are aligned, the first image and the second image can achieve a better stitching effect.
  • This embodiment proposes determining whether the optical centers of the imaging device are aligned when collecting different images by verifying with the intrinsic matrix and the rotation matrix, which realizes real-time verification of optical center alignment and improves the efficiency and effect of image stitching.
  • In the optical center alignment detection method provided by the above embodiment, the first mapping matrix between the first image and the second image acquired by the imaging device at different poses is determined; an intrinsic matrix and a rotation matrix of the imaging device are determined based on the first mapping matrix; and based on the intrinsic matrix and the rotation matrix, it is determined whether the optical centers of the imaging device are aligned when capturing the first image and the second image. Through the intrinsic matrix and the rotation matrix, this embodiment implements a verification method for optical center alignment that relies only on the scene; when the optical centers of the imaging device are aligned, better global image stitching can be achieved.
  • In some optional embodiments, step 104 in the above embodiment includes:
  • processing the first mapping matrix with a global optimization algorithm to determine the intrinsic matrix and the rotation matrix of the imaging device.
  • Optionally, the rotation matrix represents the difference between the pose of the imaging device when capturing the first image and its pose when capturing the second image; the intrinsic matrix and the rotation matrix of the imaging device are determined through iterative solving. Optionally, the process of determining the intrinsic matrix and the rotation matrix of the imaging device based on the first mapping matrix can be realized with the following formula (1): H ≈ K·R·K⁻¹, where H is the first mapping matrix, K is the intrinsic matrix, R is the rotation matrix, K⁻¹ is the inverse of the intrinsic matrix, and · denotes the matrix product; applying a global optimization algorithm to formula (1) yields the intrinsic matrix and the rotation matrix.
  • Here, for problems with sampled data but an unknown distribution, a global optimization algorithm can solve for the optimal values of specific unknown variables; iteratively solving the above formula (1) with a global optimization algorithm to obtain the intrinsic matrix and the rotation matrix amounts to maximum likelihood estimation.
  • Step 106 may include the following steps:
  • Step 1061: determine, based on the intrinsic matrix and the rotation matrix, whether the multiple feature point pairs in the first image and the second image meet the set condition; if yes, perform step 1062; otherwise, perform step 1063.
  • Optionally, a second mapping matrix is obtained by solving based on the intrinsic matrix and the rotation matrix; based on the correspondence between the second mapping matrix and the first mapping matrix, it is determined whether the multiple feature point pairs meet the set condition.
  • With the first mapping matrix determined, the intrinsic matrix and the rotation matrix are obtained from the above formula (1). If they are substituted back into formula (1), computing K·R·K⁻¹ yields the second mapping matrix. If the imaging device was displaced while capturing the first image and the second image (optical centers misaligned), the computation carries a large error, so the second mapping matrix differs substantially from the first mapping matrix. Therefore, this embodiment determines whether the optical centers of the imaging device are aligned by checking whether the difference between the computed second mapping matrix and the first mapping matrix is within a set range.
  • Step 1062: determine that the optical centers of the imaging device are aligned when capturing the first image and the second image.
  • Step 1063: determine that the optical centers of the imaging device are not aligned when capturing the first image and the second image.
  • To understand how optical center alignment relates to the verification through formula (1), consider the mapping formula (2) for a pair of feature points in the first image and the second image: p₂ = K·(d·R₂₁ + t₂₁·nᵀ)·K⁻¹·p₁, where p₁ and p₂ are a pair of corresponding feature points in the first image and the second image respectively (the projections of the same real-world point in the two images), K is the intrinsic matrix, K⁻¹ is its inverse, R₂₁ is the pose change (rotation matrix) between the pose when the imaging device captured the first image and its pose when capturing the second image, t₂₁ is the displacement of the imaging device between capturing the first image and capturing the second image, d is the distance between the real point corresponding to the feature point pair and the optical center, n is the normal vector of the plane containing the real point, and T denotes the transpose. Therefore, when t₂₁ in formula (2) is approximately zero, the t₂₁·nᵀ term can be dropped, yielding the above formula (1).
  • Optionally, the method may further include:
  • stitching the optical-center-aligned first image and second image based on the multiple feature point pairs.
  • When it is determined that the optical centers of the imaging device are aligned when capturing the first image and the second image, the two images can achieve a better stitching effect.
  • Accordingly, the first image and the second image whose optical centers are aligned are stitched, while a first image and a second image whose optical centers are not aligned are not stitched, ensuring that the stitched panoramic image has good quality.
  • Step 102 may include the following steps:
  • Step 1021: determine a feature point pair set based on the first image and the second image collected by the imaging device at different poses.
  • Here, the feature point pair set includes multiple feature point pairs, and each feature point pair includes a first feature point in the first image and a second feature point in the second image that correspond to each other.
  • Optionally, based on multiple images collected by the imaging device at different poses, the first image and the second image that were collected consecutively are determined from the multiple images;
  • the feature point pair set is obtained based on the multiple feature point pairs.
  • Optionally, the imaging device can be mounted on a pan/tilt head, which is rotated through multiple angles to capture an image collection.
  • The image collection includes multiple images.
  • To stitch a panorama, two consecutively captured images can be taken as the first image and the second image, and all feature point pairs with correspondences between the first image and the second image are collected to form the feature point pair set; the first mapping matrix determined from this set is closer to the true value.
  • Step 1022: determine a first mapping matrix between the first image and the second image based on the feature point pair set.
  • Here, the first feature point and the second feature point are the points at which the same real point in space, captured by the imaging device, appears in the first image and the second image respectively.
  • Through the connections among all feature point pairs, the connection relationship and the overlapping region between the first image and the second image can be determined, and thus the first mapping matrix between them. Determining the first mapping matrix from the feature point pair set combines the connections among multiple feature point pairs, which improves the accuracy of the resulting first mapping matrix.
  • Determining the multiple feature point pairs in the first image and the second image includes:
  • determining one feature point pair from each corresponding first feature point and second feature point, to obtain multiple feature point pairs.
  • Optionally, a feature point may be a salient point in the image, such as a corner; feature points in the first image and the second image can be extracted with the ORB algorithm. A feature point is uniquely represented by its position (coordinates) and its description (descriptor). This embodiment determines which feature points correspond to the same real point through feature descriptors, where a feature descriptor is computed from the pixels around a feature point to describe its characteristics; a descriptor combined with a position uniquely identifies a feature point.
  • By combining feature descriptors to determine multiple feature point pairs for the first image and the second image, this embodiment improves the accuracy of the determined feature point pairs and thus the accuracy of the optical center alignment detection result.
  • Any optical center alignment detection method provided in the embodiments of the present disclosure may be executed by any appropriate device with data processing capability, including but not limited to terminal devices, servers, and the like.
  • Any optical center alignment detection method provided in the embodiments of the present disclosure may be executed by a processor; for example, the processor executes any optical center alignment detection method mentioned in the embodiments of the present disclosure by invoking corresponding instructions stored in a memory. Details are not repeated below.
  • Fig. 4 is a schematic structural diagram of an optical center alignment detection device provided by an exemplary embodiment of the present disclosure. As shown in Figure 4, the device provided in this embodiment includes:
  • a mapping matrix determination module 41, configured to determine a first mapping matrix between the first image and the second image collected by the imaging device at different poses;
  • a matrix estimation module 42, configured to determine an intrinsic matrix and a rotation matrix of the imaging device based on the first mapping matrix;
  • an alignment verification module 43, configured to determine, based on the intrinsic matrix and the rotation matrix, whether the optical centers of the imaging device are aligned when capturing the first image and the second image.
  • The optical center alignment detection device determines the first mapping matrix between the first image and the second image collected by the imaging device at different poses; determines an intrinsic matrix and a rotation matrix of the imaging device based on the first mapping matrix; and determines, based on the intrinsic matrix and the rotation matrix, whether the optical centers of the imaging device are aligned when capturing the first image and the second image. Through the intrinsic matrix and the rotation matrix, this embodiment implements a verification method for optical center alignment that relies only on the scene; when the optical centers of the imaging device are aligned, better global image stitching can be achieved.
  • The matrix estimation module 42 is specifically configured to process the first mapping matrix with a global optimization algorithm to determine the intrinsic matrix and the rotation matrix of the imaging device.
  • The rotation matrix represents the difference between the pose of the imaging device when capturing the first image and its pose when capturing the second image.
  • The alignment verification module 43 is specifically configured to determine, based on the intrinsic matrix and the rotation matrix, whether multiple feature point pairs in the first image and the second image meet the set condition, and in response to the multiple feature point pairs meeting the set condition, determine that the optical centers of the imaging device are aligned when capturing the first image and the second image.
  • The alignment verification module 43 is further configured to stitch the optical-center-aligned first image and second image based on the multiple feature point pairs.
  • When the alignment verification module 43 determines, based on the intrinsic matrix and the rotation matrix, whether multiple feature point pairs in the first image and the second image meet the set condition, it is configured to solve for a second mapping matrix based on the intrinsic matrix and the rotation matrix, and determine, based on the correspondence between the second mapping matrix and the first mapping matrix, whether the multiple feature point pairs meet the set condition.
  • The alignment verification module 43 is further configured to, in response to the multiple feature point pairs not meeting the set condition, determine that the optical centers of the imaging device are not aligned when capturing the first image and the second image.
  • The mapping matrix determination module 41 includes:
  • a set determination unit, configured to determine a feature point pair set based on the first image and the second image collected by the imaging device at different poses, where the feature point pair set includes multiple feature point pairs, and each feature point pair includes a first feature point in the first image and a second feature point in the second image that correspond to each other;
  • a feature point mapping unit, configured to determine a first mapping matrix between the first image and the second image based on the feature point pair set.
  • The set determination unit is specifically configured to determine, from multiple images collected by the imaging device at different poses, the first image and the second image that were collected consecutively; determine multiple feature point pairs in the first image and the second image; and obtain the feature point pair set based on the multiple feature point pairs.
  • When the set determination unit determines multiple feature point pairs in the first image and the second image, it is configured to extract feature points from the first image and the second image respectively, to obtain multiple first feature points corresponding to the first image and multiple second feature points corresponding to the second image; determine correspondences between the first feature points and the second feature points based on the feature descriptor corresponding to each of the first feature points and the feature descriptor corresponding to each of the second feature points; and determine one feature point pair from each first feature point and second feature point that correspond to each other, to obtain multiple feature point pairs.
  • The optical center alignment detection device may include: a processor; a memory for storing instructions executable by the processor; the processor being configured to read the executable instructions from the memory and execute the instructions to implement the optical center alignment detection method provided by the exemplary embodiments of the present disclosure.
  • The electronic device may be either or both of the first device 100 and the second device 200, or a stand-alone device independent of them; the stand-alone device may communicate with the first device and the second device to receive the collected input signals from them.
  • FIG. 5 illustrates a block diagram of an electronic device according to an embodiment of the disclosure.
  • an electronic device 50 includes one or more processors 51 and a memory 52 .
  • The processor 51 may be a central processing unit (CPU) or another form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 50 to perform desired functions.
  • Memory 52 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • the volatile memory may include, for example, random access memory (RAM) and/or cache memory (cache).
  • the non-volatile memory may include, for example, a read-only memory (ROM), a hard disk, a flash memory, and the like.
  • One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 51 may run the program instructions to implement the optical center alignment detection methods of the various embodiments of the present disclosure described above and/or other desired functionality.
  • Various contents such as input signal, signal component, noise component, etc. may also be stored in the computer-readable storage medium.
  • the electronic device 50 may further include: an input device 53 and an output device 54, and these components are interconnected through a bus system and/or other forms of connection mechanisms (not shown).
  • the input device 53 may be the above-mentioned microphone or microphone array for capturing the input signal of the sound source.
  • the input device 53 may be a communication network connector for receiving collected input signals from the first device 100 and the second device 200 .
  • the input device 53 may also include, for example, a keyboard, a mouse, and the like.
  • the output device 54 can output various information to the outside, including determined distance information, direction information, and the like.
  • the output device 54 may include, for example, a display, a speaker, a printer, a communication network and a remote output device connected thereto, and the like.
  • the electronic device 50 may also include any other suitable components.
  • Embodiments of the present disclosure may also be computer program products, which include computer program instructions that, when executed by a processor, cause the processor to perform the steps in the optical center alignment detection method according to the various embodiments of the present disclosure described in the "Exemplary Method" section of this specification.
  • The computer program product may be written in any combination of one or more programming languages to execute the program code for performing the operations of the embodiments of the present disclosure; the programming languages include object-oriented languages, such as Java and C++, as well as conventional procedural languages, such as the "C" language or similar programming languages.
  • The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
  • Embodiments of the present disclosure may also be a computer-readable storage medium on which computer program instructions are stored; when executed by a processor, the computer program instructions cause the processor to perform the steps in the optical center alignment detection method according to the various embodiments of the present disclosure described in the "Exemplary Method" section of this specification.
  • the computer readable storage medium may employ any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • The readable storage medium may include, but is not limited to, electrical, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any combination thereof. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more conductors, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • the methods and apparatus of the present disclosure may be implemented in many ways.
  • the methods and apparatuses of the present disclosure may be implemented by software, hardware, firmware or any combination of software, hardware, and firmware.
  • the above sequence of steps for the method is for illustration only, and the steps of the method of the present disclosure are not limited to the sequence specifically described above unless specifically stated otherwise.
  • the present disclosure can also be implemented as programs recorded in recording media, the programs including machine-readable instructions for realizing the method according to the present disclosure.
  • the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
  • In the devices, equipment, and methods of the present disclosure, each component or step can be decomposed and/or recombined; these decompositions and/or recombinations should be regarded as equivalent solutions of the present disclosure.

Abstract

Disclosed are an optical center alignment detection method and device, a storage medium, and an electronic device. The method includes: determining a mapping matrix between a first image and a second image captured by an imaging device at different poses; determining an intrinsic matrix and a rotation matrix of the imaging device based on the mapping matrix; and determining, based on the intrinsic matrix and the rotation matrix, whether the imaging device was displaced while capturing the first image and the second image. Through the intrinsic matrix and the rotation matrix, the present disclosure implements a verification method for optical center alignment that relies only on the scene; when the optical centers of the imaging device are aligned, better global image stitching can be achieved.

Description

Optical center alignment detection method and device, storage medium, and electronic device
CROSS-REFERENCE TO RELATED APPLICATIONS
This disclosure claims the benefit of Chinese patent application 202110663882.X, filed on June 16, 2021, the content of which is incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of image stitching, and in particular to an optical center alignment detection method and device, a storage medium, and an electronic device.
Background
In purely image-based panorama stitching algorithms, whether the imaging devices that capture the images share a common optical center severely affects the stitching quality of the resulting panorama. In practice, a user operating a pan/tilt head cannot verify with precision instruments whether the mounted imaging device shares an optical center within a given tolerance, and the environment the user is in is also unknown; the prior art offers no way to verify a common optical center during panorama capture.
Summary
To solve the above technical problem, the present disclosure is proposed. The present disclosure provides an optical center alignment detection method and device, a storage medium, and an electronic device.
According to one aspect of the present disclosure, an optical center alignment detection method is provided, including:
determining a first mapping matrix between a first image and a second image captured by an imaging device at different poses;
determining an intrinsic matrix and a rotation matrix of the imaging device based on the first mapping matrix; and
determining, based on the intrinsic matrix and the rotation matrix, whether the imaging device was displaced while capturing the first image and the second image.
Optionally, determining the intrinsic matrix and the rotation matrix of the imaging device based on the first mapping matrix includes:
processing the first mapping matrix with a global optimization algorithm to determine the intrinsic matrix and the rotation matrix of the imaging device, where the rotation matrix represents the difference between the pose of the imaging device when capturing the first image and its pose when capturing the second image.
Optionally, determining, based on the intrinsic matrix and the rotation matrix, whether the optical centers of the imaging device are aligned when capturing the first image and the second image includes:
determining, based on the intrinsic matrix and the rotation matrix, whether multiple feature point pairs in the first image and the second image meet a set condition; and
in response to the multiple feature point pairs meeting the set condition, determining that the optical centers of the imaging device are aligned when capturing the first image and the second image.
Optionally, the method further includes:
stitching the optical-center-aligned first image and second image based on the multiple feature point pairs.
Optionally, determining, based on the intrinsic matrix and the rotation matrix, whether multiple feature point pairs in the first image and the second image meet the set condition includes:
solving for a second mapping matrix based on the intrinsic matrix and the rotation matrix; and
determining, based on the correspondence between the second mapping matrix and the first mapping matrix, whether the multiple feature point pairs meet the set condition.
Optionally, the method further includes:
in response to the multiple feature point pairs not meeting the set condition, determining that the optical centers of the imaging device are not aligned when capturing the first image and the second image.
Optionally, determining the first mapping matrix between the first image and the second image captured by the imaging device at different poses includes:
determining a feature point pair set based on the first image and the second image captured by the imaging device at different poses, where the feature point pair set includes multiple feature point pairs, and each feature point pair includes a first feature point in the first image and a second feature point in the second image that correspond to each other; and
determining the first mapping matrix between the first image and the second image based on the feature point pair set.
Optionally, determining the feature point pair set based on the first image and the second image captured by the imaging device at different poses includes:
determining, from multiple images captured by the imaging device at different poses, a first image and a second image that were captured consecutively;
determining multiple feature point pairs in the first image and the second image; and
obtaining the feature point pair set based on the multiple feature point pairs.
Optionally, determining the multiple feature point pairs in the first image and the second image includes:
extracting feature points from the first image and the second image respectively, to obtain multiple first feature points corresponding to the first image and multiple second feature points corresponding to the second image;
determining correspondences between the first feature points and the second feature points based on the feature descriptor corresponding to each of the first feature points and the feature descriptor corresponding to each of the second feature points; and
determining one feature point pair from each first feature point and second feature point that correspond to each other, to obtain the multiple feature point pairs.
According to another aspect of the present disclosure, an optical center alignment detection device is provided, including:
a mapping matrix determination module, configured to determine a first mapping matrix between a first image and a second image captured by an imaging device at different poses;
a matrix estimation module, configured to determine an intrinsic matrix and a rotation matrix of the imaging device based on the first mapping matrix; and
an alignment verification module, configured to determine, based on the intrinsic matrix and the rotation matrix, whether the optical centers of the imaging device are aligned when capturing the first image and the second image.
Optionally, the matrix estimation module is specifically configured to process the first mapping matrix with a global optimization algorithm to determine the intrinsic matrix and the rotation matrix of the imaging device, where the rotation matrix represents the difference between the pose of the imaging device when capturing the first image and its pose when capturing the second image.
Optionally, the alignment verification module is specifically configured to determine, based on the intrinsic matrix and the rotation matrix, whether multiple feature point pairs in the first image and the second image meet a set condition, and in response to the multiple feature point pairs meeting the set condition, determine that the optical centers of the imaging device are aligned when capturing the first image and the second image.
Optionally, the alignment verification module is further configured to stitch the optical-center-aligned first image and second image based on the multiple feature point pairs.
Optionally, when determining, based on the intrinsic matrix and the rotation matrix, whether multiple feature point pairs in the first image and the second image meet the set condition, the alignment verification module is configured to solve for a second mapping matrix based on the intrinsic matrix and the rotation matrix, and determine, based on the correspondence between the second mapping matrix and the first mapping matrix, whether the multiple feature point pairs meet the set condition.
Optionally, the alignment verification module is further configured to, in response to the multiple feature point pairs not meeting the set condition, determine that the optical centers of the imaging device are not aligned when capturing the first image and the second image.
Optionally, the mapping matrix determination module includes:
a set determination unit, configured to determine a feature point pair set based on the first image and the second image captured by the imaging device at different poses, where the feature point pair set includes multiple feature point pairs, and each feature point pair includes a first feature point in the first image and a second feature point in the second image that correspond to each other; and
a feature point mapping unit, configured to determine the first mapping matrix between the first image and the second image based on the feature point pair set.
Optionally, the set determination unit is specifically configured to determine, from multiple images captured by the imaging device at different poses, a first image and a second image that were captured consecutively; determine multiple feature point pairs in the first image and the second image; and obtain the feature point pair set based on the multiple feature point pairs.
Optionally, when determining the multiple feature point pairs in the first image and the second image, the set determination unit is configured to extract feature points from the first image and the second image respectively, to obtain multiple first feature points corresponding to the first image and multiple second feature points corresponding to the second image; determine correspondences between the first feature points and the second feature points based on the feature descriptor corresponding to each of the first feature points and the feature descriptor corresponding to each of the second feature points; and determine one feature point pair from each first feature point and second feature point that correspond to each other, to obtain the multiple feature point pairs.
According to yet another aspect of the present disclosure, a computer-readable storage medium is provided, which stores a computer program for executing the optical center alignment detection method of any of the above embodiments.
According to still another aspect of the present disclosure, an electronic device is provided, including:
a processor; and
a memory for storing instructions executable by the processor;
the processor being configured to read the executable instructions from the memory and execute the instructions to implement any of the optical center alignment detection methods described above.
Based on the optical center alignment detection method and device, storage medium, and electronic device provided by the present disclosure, a first mapping matrix between a first image and a second image captured by an imaging device at different poses is determined; an intrinsic matrix and a rotation matrix of the imaging device are determined based on the first mapping matrix; and based on the intrinsic matrix and the rotation matrix, it is determined whether the imaging device was displaced while capturing the first image and the second image. Through the intrinsic matrix and the rotation matrix, the present disclosure implements a verification method for optical center alignment that relies only on the scene; when the optical centers of the imaging device are aligned, better global image stitching can be achieved.
The technical solution of the present disclosure is described in further detail below through the drawings and embodiments.
Brief Description of the Drawings
The above and other objects, features, and advantages of the present disclosure will become more apparent from the following more detailed description of embodiments of the present disclosure in conjunction with the drawings. The drawings provide further understanding of the embodiments of the present disclosure, form part of the specification, and together with the embodiments serve to explain the present disclosure without limiting it. In the drawings, the same reference numerals generally denote the same components or steps.
Fig. 1 is a schematic flowchart of an optical center alignment detection method provided by an exemplary embodiment of the present disclosure.
Fig. 2 is a schematic flowchart of step 106 in the embodiment shown in Fig. 1 of the present disclosure.
Fig. 3 is a schematic flowchart of step 102 in the embodiment shown in Fig. 1 of the present disclosure.
Fig. 4 is a schematic structural diagram of an optical center alignment detection device provided by an exemplary embodiment of the present disclosure.
Fig. 5 is a structural diagram of an electronic device provided by an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments according to the present disclosure will now be described in detail with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure, and it should be understood that the present disclosure is not limited by the example embodiments described here.
It should be noted that, unless specifically stated otherwise, the relative arrangement of components and steps, numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure.
Those skilled in the art will appreciate that terms such as "first" and "second" in the embodiments of the present disclosure are only used to distinguish different steps, devices, or modules; they carry no specific technical meaning and imply no necessary logical order between them.
It should also be understood that in the embodiments of the present disclosure, "multiple" may refer to two or more, and "at least one" may refer to one, two, or more.
It should also be understood that any component, data, or structure mentioned in the embodiments of the present disclosure can generally be understood as one or more, unless explicitly limited or the context indicates otherwise.
In addition, the term "and/or" in the present disclosure merely describes an association between associated objects and indicates three possible relationships; for example, A and/or B may mean: A alone, both A and B, or B alone. The character "/" in the present disclosure generally indicates an "or" relationship between the surrounding objects.
It should also be understood that the description of the embodiments in the present disclosure emphasizes the differences between them; for their common or similar aspects, the embodiments may be referred to one another, and for brevity these are not repeated one by one.
Meanwhile, it should be understood that, for ease of description, the dimensions of the various parts shown in the drawings are not drawn to actual scale.
The following description of at least one exemplary embodiment is merely illustrative and in no way limits the present disclosure or its application or use.
Technologies, methods, and equipment known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate they should be regarded as part of the specification.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; once an item is defined in one drawing, it need not be further discussed in subsequent drawings.
Embodiments of the present disclosure may be applied to electronic devices such as terminal devices, computer systems, and servers, which can operate with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, and distributed cloud computing environments including any of the above, and the like.
Electronic devices such as terminal devices, computer systems, and servers may be described in the general context of computer-system-executable instructions (such as program modules) executed by a computer system. Generally, program modules may include routines, programs, object programs, components, logic, data structures, and so on, which perform specific tasks or implement specific abstract data types. The computer system/server may be implemented in distributed cloud computing environments in which tasks are performed by remote processing devices linked through a communications network. In a distributed cloud computing environment, program modules may be located on local or remote computing system storage media including storage devices.
Exemplary Method
Fig. 1 is a schematic flowchart of an optical center alignment detection method provided by an exemplary embodiment of the present disclosure. This embodiment can be applied to an electronic device and, as shown in Fig. 1, includes the following steps:
Step 102: determine a first mapping matrix between a first image and a second image captured by an imaging device at different poses.
Optionally, the first image and the second image in this embodiment may be two consecutive images among multiple images obtained as the imaging device rotates continuously on a pan/tilt head, or two images that are not consecutive in the shooting sequence but share an overlapping region. To stitch a panorama from the multiple images collected by the imaging device, it is necessary to determine whether the first image and the second image can be stitched. The first mapping matrix is usually a 3×3 matrix that describes the mapping relationship between the first image and the second image.
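As an illustration outside the patent text, the sketch below shows what such a 3×3 first mapping matrix (a homography) does: it carries a pixel of the first image into the second image through homogeneous coordinates. The matrix values here are made up for the example.

```python
import numpy as np

# Hypothetical 3x3 first mapping matrix (homography); values are illustrative only.
H = np.array([[1.02, 0.01, -35.0],
              [0.00, 1.01,   2.0],
              [0.00, 0.00,   1.0]])

def map_point(H, p):
    """Map a pixel of the first image into the second image.

    The point is lifted to homogeneous coordinates, multiplied by H,
    and projected back by dividing by the third coordinate.
    """
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

print(map_point(H, (640.0, 360.0)))  # where pixel (640, 360) of image 1 lands in image 2
```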
Step 104: determine an intrinsic matrix and a rotation matrix of the imaging device based on the first mapping matrix.
Optionally, the intrinsic matrix may include the optical center of the imaging device, the digital focal length, camera distortion parameters, and so on; together, these parameters describe how light propagates after entering the imaging device and how it is imaged. In the embodiments of the present disclosure, the rotation matrix specifically refers to the pose change of the imaging device between capturing the first image and capturing the second image, i.e., a three-degree-of-freedom rotation (pitch, roll, and yaw); in a spatial coordinate system, the rotation about each axis is independent of the others, each being a single-degree-of-freedom variable.
Step 106: based on the intrinsic matrix and the rotation matrix, determine whether the optical centers of the imaging device are aligned when capturing the first image and the second image.
Optionally, whether the optical centers are aligned is determined by whether the imaging device was displaced between the captures of the first image and the second image; when the optical centers are aligned, the first image and the second image can achieve a better stitching effect.
This embodiment proposes determining whether the optical centers of the imaging device are aligned when capturing different images by verifying with the intrinsic matrix and the rotation matrix, which realizes real-time verification of optical center alignment and improves the efficiency and effect of image stitching.
In the optical center alignment detection method provided by the above embodiment of the present disclosure, a first mapping matrix between the first image and the second image captured by the imaging device at different poses is determined; an intrinsic matrix and a rotation matrix of the imaging device are determined based on the first mapping matrix; and based on the intrinsic matrix and the rotation matrix, it is determined whether the optical centers of the imaging device are aligned when capturing the first image and the second image. Through the intrinsic matrix and the rotation matrix, this embodiment implements a verification method for optical center alignment that relies only on the scene; when the optical centers of the imaging device are aligned, better global image stitching can be achieved.
In some optional embodiments, step 104 of the above embodiment includes:
processing the first mapping matrix with a global optimization algorithm to determine the intrinsic matrix and the rotation matrix of the imaging device.
Optionally, the rotation matrix represents the difference between the pose of the imaging device when capturing the first image and its pose when capturing the second image; the intrinsic matrix and the rotation matrix of the imaging device in this embodiment are determined through iterative solving. Optionally, the process of determining the intrinsic matrix and the rotation matrix of the imaging device based on the first mapping matrix can be realized with the following formula (1):

H ≈ K·R·K⁻¹    (1)

where H is the first mapping matrix, K is the intrinsic matrix, R is the rotation matrix, K⁻¹ is the inverse of the intrinsic matrix, and · denotes the matrix product; applying a global optimization algorithm to formula (1) yields the intrinsic matrix and the rotation matrix.
Here, for problems with sampled data but an unknown distribution, a global optimization algorithm can solve for the optimal values of specific unknown variables; iteratively solving the above formula (1) with a global optimization algorithm to obtain the intrinsic matrix and the rotation matrix amounts to maximum likelihood estimation.
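The patent does not name a concrete global optimization algorithm. As one hedged reading, the sketch below fits K and R to a measured first mapping matrix H by nonlinear least squares, assuming a simplified pinhole intrinsic model (a single focal length f and principal point (cx, cy), no distortion) and a rotation-vector parametrization of R; the function names and the initial guess are assumptions for illustration, not the patent's method.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def _residual(x, H):
    """Residual of formula (1), H ≈ K·R·K⁻¹, with H compared up to scale."""
    f, cx, cy = x[:3]                                   # assumed pinhole intrinsics
    K = np.array([[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]])
    R = Rotation.from_rotvec(x[3:]).as_matrix()         # 3-DoF rotation (pitch/roll/yaw)
    H_model = K @ R @ np.linalg.inv(K)
    return (H_model / H_model[2, 2] - H / H[2, 2]).ravel()

def estimate_K_R(H, width, height):
    """Iteratively solve formula (1) for the intrinsic and rotation matrices."""
    # Assumed starting point: focal length ~0.8*width, principal point at image center.
    x0 = np.array([0.8 * width, width / 2.0, height / 2.0, 0.0, 0.0, 0.0])
    sol = least_squares(_residual, x0, args=(H,))
    f, cx, cy = sol.x[:3]
    K = np.array([[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]])
    return K, Rotation.from_rotvec(sol.x[3:]).as_matrix()
```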
As shown in Fig. 2, on the basis of the embodiment shown in Fig. 1, step 106 may include the following steps:
Step 1061: determine, based on the intrinsic matrix and the rotation matrix, whether the multiple feature point pairs in the first image and the second image meet a set condition; if yes, perform step 1062; otherwise, perform step 1063.
Optionally, a second mapping matrix is obtained by solving based on the intrinsic matrix and the rotation matrix; based on the correspondence between the second mapping matrix and the first mapping matrix, it is determined whether the multiple feature point pairs meet the set condition.
With the first mapping matrix determined, the intrinsic matrix and the rotation matrix are obtained from the above formula (1). If they are substituted back into formula (1), computing K·R·K⁻¹ yields the second mapping matrix. If the imaging device was displaced while capturing the first image and the second image (optical centers misaligned), the computation carries a large error, so the second mapping matrix differs substantially from the first mapping matrix. Therefore, this embodiment determines whether the optical centers of the imaging device are aligned by checking whether the difference between the computed second mapping matrix and the first mapping matrix is within a set range.
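A minimal sketch of that check, continuing the previous snippet: recompute the second mapping matrix K·R·K⁻¹ and compare it with the measured first mapping matrix under a Frobenius norm. The tolerance is an assumed, scene-dependent threshold; the patent only says the difference must be within a set range.

```python
def optical_centers_aligned(H, K, R, tol=0.05):
    """Return True if K·R·K⁻¹ reproduces the measured H closely enough.

    Homographies are defined only up to scale (and sign), so both
    matrices are normalized before comparison.
    """
    H2 = K @ R @ np.linalg.inv(K)
    H2 /= np.linalg.norm(H2)
    Hn = H / np.linalg.norm(H)
    return min(np.linalg.norm(Hn - H2), np.linalg.norm(Hn + H2)) < tol
```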
Step 1062: determine that the optical centers of the imaging device are aligned when capturing the first image and the second image.
Step 1063: determine that the optical centers of the imaging device are not aligned when capturing the first image and the second image.
To understand how optical center alignment relates to the verification through the above formula (1), consider the mapping formula (2) for a pair of feature points in the first image and the second image:

p₂ = K·(d·R₂₁ + t₂₁·nᵀ)·K⁻¹·p₁    (2)

where p₁ and p₂ are a pair of corresponding feature points in the first image and the second image respectively (the projections of the same real-world point in the two images), K is the intrinsic matrix, K⁻¹ is its inverse, R₂₁ is the pose change (rotation matrix) between the pose of the imaging device when capturing the first image and its pose when capturing the second image, t₂₁ is the displacement of the imaging device between capturing the first image and capturing the second image, d is the distance between the real point corresponding to the feature point pair and the optical center, n is the normal vector of the plane containing the real point, and T denotes the transpose. Therefore, when t₂₁ in formula (2) is approximately zero, the t₂₁·nᵀ term can be dropped, yielding the above formula (1). Hence, as long as the difference between the second mapping matrix, obtained by substituting the estimated intrinsic matrix and rotation matrix into formula (1), and the first mapping matrix is sufficiently small, it can be concluded that the imaging device did not move between capturing the first image and the second image, i.e., the optical centers are aligned.
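Restating that reduction in LaTeX (the scale factor d is absorbed because a homography is defined only up to scale), formula (2) collapses to formula (1) when the displacement vanishes:

```latex
p_2 = K\,\bigl(d\,R_{21} + t_{21}\,n^{\mathsf{T}}\bigr)\,K^{-1}\,p_1
\;\xrightarrow{\;t_{21}\,\approx\,0\;}\;
p_2 \simeq d\,K\,R_{21}\,K^{-1}\,p_1
\;\Longrightarrow\;
H \simeq K\,R\,K^{-1}.
```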
Optionally, after it is determined that the optical centers of the imaging device are aligned when capturing the first image and the second image, the method may further include:
stitching the optical-center-aligned first image and second image based on the multiple feature point pairs.
In this embodiment, when it is determined that the optical centers of the imaging device are aligned when capturing the first image and the second image, the two images can achieve a better stitching effect; therefore, this embodiment stitches the first image and the second image whose optical centers are aligned, and does not stitch a first image and a second image whose optical centers are not aligned, ensuring that the stitched panoramic image has good quality.
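The patent does not prescribe a stitching procedure; the sketch below is a deliberately naive composite (no seam blending, an arbitrary canvas width) just to show where the first mapping matrix enters once the alignment check has passed. It assumes OpenCV and an H oriented from image 1 to image 2, as in the earlier snippets.

```python
import cv2

def stitch_pair(img1, img2, H):
    """Warp the first image into the second image's frame, then paste the second on top."""
    h, w = img2.shape[:2]
    canvas = cv2.warpPerspective(img1, H, (2 * w, h))  # extra width for the warped overlap
    canvas[:h, :w] = img2
    return canvas
```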
As shown in Fig. 3, on the basis of the embodiment shown in Fig. 1, step 102 may include the following steps:
Step 1021: determine a feature point pair set based on the first image and the second image captured by the imaging device at different poses.
Here, the feature point pair set includes multiple feature point pairs, and each feature point pair includes a first feature point in the first image and a second feature point in the second image that correspond to each other.
Optionally, based on multiple images captured by the imaging device at different poses, a first image and a second image that were captured consecutively are determined from the multiple images;
multiple feature point pairs in the first image and the second image are determined; and
the feature point pair set is obtained based on the multiple feature point pairs.
Optionally, the imaging device can be mounted on a pan/tilt head, which is rotated through multiple angles to capture an image collection containing multiple images. In this embodiment, to stitch a panorama, two consecutively captured images can be taken as the first image and the second image, and all feature point pairs with correspondences between the first image and the second image are collected to form the feature point pair set; the first mapping matrix determined from this set is closer to the true value.
Step 1022: determine a first mapping matrix between the first image and the second image based on the feature point pair set.
In this embodiment, the first feature point and the second feature point are the points at which the same real point in space, captured by the imaging device, appears in the first image and the second image respectively. Through the connections among all feature point pairs, the connection relationship and the overlapping region between the first image and the second image can be determined, and thus the first mapping matrix between them. Determining the first mapping matrix from the feature point pair set combines the connections among multiple feature point pairs, which improves the accuracy of the resulting first mapping matrix.
In some optional embodiments, determining the multiple feature point pairs in the first image and the second image includes:
extracting feature points from the first image and the second image respectively, to obtain multiple first feature points corresponding to the first image and multiple second feature points corresponding to the second image;
determining correspondences between the first feature points and the second feature points based on the feature descriptor corresponding to each of the first feature points and the feature descriptor corresponding to each of the second feature points; and
determining one feature point pair from each first feature point and second feature point that correspond to each other, to obtain multiple feature point pairs.
In this embodiment, optionally, a feature point may be a salient point in the image, such as a corner; feature points in the first image and the second image can be extracted with the ORB algorithm. A feature point is uniquely represented by its position (coordinates) and its description (descriptor). This embodiment determines which feature points correspond to the same real point through feature descriptors, where a feature descriptor is computed from the pixels around a feature point to describe its characteristics; a descriptor combined with a position uniquely identifies a feature point. By combining feature descriptors to determine multiple feature point pairs for the first image and the second image, this embodiment improves the accuracy of the determined feature point pairs and thus the accuracy of the optical center alignment detection result. A sketch of this pipeline follows.
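A minimal sketch of the extraction-and-matching step with OpenCV: ORB keypoints and descriptors, brute-force Hamming matching to form the feature point pair set, then a RANSAC-fitted homography as the first mapping matrix. The patent names ORB; the matcher choice, the pair cap, and the RANSAC threshold are assumptions, and the file names are placeholders.

```python
import cv2
import numpy as np

def first_mapping_matrix(img1, img2, max_pairs=500):
    """Build the feature point pair set and fit the first mapping matrix."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)        # positions + descriptors
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    p1 = np.float32([kp1[m.queryIdx].pt for m in matches[:max_pairs]])
    p2 = np.float32([kp2[m.trainIdx].pt for m in matches[:max_pairs]])
    H, inliers = cv2.findHomography(p1, p2, cv2.RANSAC, 3.0)  # RANSAC rejects outliers
    return H, p1, p2

# Usage (grayscale reads):
# H, p1, p2 = first_mapping_matrix(cv2.imread("a.jpg", 0), cv2.imread("b.jpg", 0))
```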
Any optical center alignment detection method provided by the embodiments of the present disclosure may be executed by any appropriate device with data processing capability, including but not limited to terminal devices and servers. Alternatively, any optical center alignment detection method provided by the embodiments of the present disclosure may be executed by a processor; for example, the processor executes any of the optical center alignment detection methods mentioned in the embodiments of the present disclosure by invoking corresponding instructions stored in a memory. Details are not repeated below.
Exemplary Device
Fig. 4 is a schematic structural diagram of an optical center alignment detection device provided by an exemplary embodiment of the present disclosure. As shown in Fig. 4, the device provided by this embodiment includes:
a mapping matrix determination module 41, configured to determine a first mapping matrix between a first image and a second image captured by an imaging device at different poses;
a matrix estimation module 42, configured to determine an intrinsic matrix and a rotation matrix of the imaging device based on the first mapping matrix; and
an alignment verification module 43, configured to determine, based on the intrinsic matrix and the rotation matrix, whether the optical centers of the imaging device are aligned when capturing the first image and the second image.
In the optical center alignment detection device provided by the above embodiment of the present disclosure, a first mapping matrix between a first image and a second image captured by the imaging device at different poses is determined; an intrinsic matrix and a rotation matrix of the imaging device are determined based on the first mapping matrix; and based on the intrinsic matrix and the rotation matrix, it is determined whether the optical centers of the imaging device are aligned when capturing the first image and the second image. Through the intrinsic matrix and the rotation matrix, this embodiment implements a verification method for optical center alignment that relies only on the scene; when the optical centers of the imaging device are aligned, better global image stitching can be achieved.
Optionally, the matrix estimation module 42 is specifically configured to process the first mapping matrix with a global optimization algorithm to determine the intrinsic matrix and the rotation matrix of the imaging device.
Here, the rotation matrix represents the difference between the pose of the imaging device when capturing the first image and its pose when capturing the second image.
Optionally, the alignment verification module 43 is specifically configured to determine, based on the intrinsic matrix and the rotation matrix, whether multiple feature point pairs in the first image and the second image meet a set condition, and in response to the multiple feature point pairs meeting the set condition, determine that the optical centers of the imaging device are aligned when capturing the first image and the second image.
Optionally, the alignment verification module 43 is further configured to stitch the optical-center-aligned first image and second image based on the multiple feature point pairs.
Optionally, when determining, based on the intrinsic matrix and the rotation matrix, whether multiple feature point pairs in the first image and the second image meet the set condition, the alignment verification module 43 is configured to solve for a second mapping matrix based on the intrinsic matrix and the rotation matrix, and determine, based on the correspondence between the second mapping matrix and the first mapping matrix, whether the multiple feature point pairs meet the set condition.
Optionally, the alignment verification module 43 is further configured to, in response to the multiple feature point pairs not meeting the set condition, determine that the optical centers of the imaging device are not aligned when capturing the first image and the second image.
Optionally, the mapping matrix determination module 41 includes:
a set determination unit, configured to determine a feature point pair set based on the first image and the second image captured by the imaging device at different poses, where the feature point pair set includes multiple feature point pairs, and each feature point pair includes a first feature point in the first image and a second feature point in the second image that correspond to each other; and
a feature point mapping unit, configured to determine the first mapping matrix between the first image and the second image based on the feature point pair set.
Optionally, the set determination unit is specifically configured to determine, from multiple images captured by the imaging device at different poses, a first image and a second image that were captured consecutively; determine multiple feature point pairs in the first image and the second image; and obtain the feature point pair set based on the multiple feature point pairs.
Optionally, when determining the multiple feature point pairs in the first image and the second image, the set determination unit is configured to extract feature points from the first image and the second image respectively, to obtain multiple first feature points corresponding to the first image and multiple second feature points corresponding to the second image; determine correspondences between the first feature points and the second feature points based on the feature descriptor corresponding to each of the first feature points and the feature descriptor corresponding to each of the second feature points; and determine one feature point pair from each first feature point and second feature point that correspond to each other, to obtain multiple feature point pairs.
The optical center alignment detection device provided by the exemplary embodiments may include: a processor; and a memory for storing instructions executable by the processor; the processor being configured to read the executable instructions from the memory and execute the instructions to implement the optical center alignment detection method provided by the exemplary embodiments of the present disclosure.
Exemplary Electronic Device
Next, an electronic device according to an embodiment of the present disclosure is described with reference to Fig. 5. The electronic device may be either or both of the first device 100 and the second device 200, or a stand-alone device independent of them; the stand-alone device may communicate with the first device and the second device to receive the collected input signals from them.
Fig. 5 illustrates a block diagram of an electronic device according to an embodiment of the present disclosure.
As shown in Fig. 5, the electronic device 50 includes one or more processors 51 and a memory 52.
The processor 51 may be a central processing unit (CPU) or another form of processing unit with data processing and/or instruction execution capabilities, and may control other components in the electronic device 50 to perform desired functions.
The memory 52 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), hard disks, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 51 may run the program instructions to implement the optical center alignment detection methods of the various embodiments of the present disclosure described above and/or other desired functionality. Various contents such as input signals, signal components, and noise components may also be stored on the computer-readable storage medium.
In one example, the electronic device 50 may further include an input device 53 and an output device 54, interconnected through a bus system and/or other forms of connection mechanism (not shown).
For example, when the electronic device is the first device 100 or the second device 200, the input device 53 may be the above-mentioned microphone or microphone array for capturing the input signal of a sound source. When the electronic device is a stand-alone device, the input device 53 may be a communication network connector for receiving the collected input signals from the first device 100 and the second device 200.
In addition, the input device 53 may also include, for example, a keyboard, a mouse, and the like.
The output device 54 can output various information to the outside, including determined distance information, direction information, and the like. The output device 54 may include, for example, a display, a speaker, a printer, a communication network and the remote output devices connected to it, and the like.
Of course, for simplicity, Fig. 5 shows only some of the components of the electronic device 50 relevant to the present disclosure, omitting components such as buses and input/output interfaces. Besides these, the electronic device 50 may include any other appropriate components depending on the specific application.
Exemplary Computer Program Product and Computer-Readable Storage Medium
In addition to the above methods and devices, embodiments of the present disclosure may also be a computer program product, which includes computer program instructions that, when run by a processor, cause the processor to perform the steps in the optical center alignment detection method according to the various embodiments of the present disclosure described in the "Exemplary Method" section of this specification.
The computer program product may be written in any combination of one or more programming languages to execute the program code for performing the operations of the embodiments of the present disclosure; the programming languages include object-oriented languages, such as Java and C++, as well as conventional procedural languages, such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium on which computer program instructions are stored; when run by a processor, the computer program instructions cause the processor to perform the steps in the optical center alignment detection method according to the various embodiments of the present disclosure described in the "Exemplary Method" section of this specification.
The computer-readable storage medium may adopt any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, but is not limited to, electrical, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more conductors, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
The basic principles of the present disclosure have been described above in conjunction with specific embodiments. However, it should be noted that the merits, advantages, and effects mentioned in the present disclosure are merely examples and not limitations; they should not be regarded as essential to every embodiment of the present disclosure. In addition, the specific details disclosed above serve only as examples and to aid understanding, not as limitations; the above details do not require that the present disclosure be implemented using those specific details.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts of the embodiments may be referred to one another. As the system embodiments basically correspond to the method embodiments, their description is relatively brief; for relevant parts, refer to the description of the method embodiments.
The block diagrams of components, apparatuses, devices, and systems involved in the present disclosure are merely illustrative examples and are not intended to require or imply that they must be connected, arranged, or configured in the manner shown in the block diagrams. As those skilled in the art will recognize, these components, apparatuses, devices, and systems may be connected, arranged, or configured in any manner. Words such as "include", "comprise", and "have" are open-ended terms meaning "including but not limited to" and may be used interchangeably with it. The words "or" and "and" as used here refer to "and/or" and may be used interchangeably with it, unless the context clearly indicates otherwise. The word "such as" used here refers to "such as but not limited to" and may be used interchangeably with it.
The methods and devices of the present disclosure may be implemented in many ways, for example by software, hardware, firmware, or any combination of software, hardware, and firmware. The above order of the steps of the methods is for illustration only; the steps of the methods of the present disclosure are not limited to the order specifically described above unless otherwise specifically stated. Furthermore, in some embodiments, the present disclosure may also be implemented as programs recorded on a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing programs for executing the methods according to the present disclosure.
It should also be noted that in the devices, equipment, and methods of the present disclosure, each component or step can be decomposed and/or recombined. Such decompositions and/or recombinations should be regarded as equivalent solutions of the present disclosure.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined here may be applied to other aspects without departing from the scope of the present disclosure. Therefore, the present disclosure is not intended to be limited to the aspects shown here, but accords with the widest scope consistent with the principles and novel features disclosed here.
The above description has been given for the purposes of illustration and description. Moreover, this description is not intended to limit the embodiments of the present disclosure to the forms disclosed here. Although multiple example aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions, and sub-combinations thereof.

Claims (14)

  1. An optical center alignment detection method, characterized by comprising:
    determining a first mapping matrix between a first image and a second image captured by an imaging device at different poses;
    determining an intrinsic matrix and a rotation matrix of the imaging device based on the first mapping matrix; and
    determining, based on the intrinsic matrix and the rotation matrix, whether the optical centers of the imaging device are aligned when capturing the first image and the second image;
    wherein determining the intrinsic matrix and the rotation matrix of the imaging device based on the first mapping matrix comprises:
    processing the first mapping matrix with a global optimization algorithm to determine the intrinsic matrix and the rotation matrix of the imaging device, wherein the rotation matrix represents the difference between the pose of the imaging device when capturing the first image and its pose when capturing the second image.
  2. The method according to claim 1, wherein determining, based on the intrinsic matrix and the rotation matrix, whether the optical centers of the imaging device are aligned when capturing the first image and the second image comprises:
    determining, based on the intrinsic matrix and the rotation matrix, whether multiple feature point pairs in the first image and the second image meet a set condition; and
    in response to the multiple feature point pairs meeting the set condition, determining that the optical centers of the imaging device are aligned when capturing the first image and the second image.
  3. The method according to claim 2, wherein determining, based on the intrinsic matrix and the rotation matrix, whether multiple feature point pairs in the first image and the second image meet the set condition comprises:
    solving for a second mapping matrix based on the intrinsic matrix and the rotation matrix; and
    determining, based on the correspondence between the second mapping matrix and the first mapping matrix, whether the multiple feature point pairs meet the set condition.
  4. The method according to any one of claims 1-3, wherein determining the first mapping matrix between the first image and the second image captured by the imaging device at different poses comprises:
    determining a feature point pair set based on the first image and the second image captured by the imaging device at different poses, wherein the feature point pair set comprises multiple feature point pairs, and each feature point pair comprises a first feature point in the first image and a second feature point in the second image that correspond to each other; and
    determining the first mapping matrix between the first image and the second image based on the feature point pair set.
  5. The method according to claim 4, wherein determining the feature point pair set based on the first image and the second image captured by the imaging device at different poses comprises:
    determining, from multiple images captured by the imaging device at different poses, a first image and a second image that were captured consecutively;
    determining multiple feature point pairs in the first image and the second image; and
    obtaining the feature point pair set based on the multiple feature point pairs.
  6. The method according to claim 5, wherein determining the multiple feature point pairs in the first image and the second image comprises:
    extracting feature points from the first image and the second image respectively, to obtain multiple first feature points corresponding to the first image and multiple second feature points corresponding to the second image;
    determining correspondences between the first feature points and the second feature points based on the feature descriptor corresponding to each of the first feature points and the feature descriptor corresponding to each of the second feature points; and
    determining one feature point pair from each first feature point and second feature point that correspond to each other, to obtain the multiple feature point pairs.
  7. An optical center alignment detection device, characterized by comprising:
    a mapping matrix determination module, configured to determine a first mapping matrix between a first image and a second image captured by an imaging device at different poses;
    a matrix estimation module, configured to determine an intrinsic matrix and a rotation matrix of the imaging device based on the first mapping matrix; and
    an alignment verification module, configured to determine, based on the intrinsic matrix and the rotation matrix, whether the optical centers of the imaging device are aligned when capturing the first image and the second image;
    the matrix estimation module being specifically configured to process the first mapping matrix with a global optimization algorithm to determine the intrinsic matrix and the rotation matrix of the imaging device, wherein the rotation matrix represents the difference between the pose of the imaging device when capturing the first image and its pose when capturing the second image.
  8. The device according to claim 7, wherein the alignment verification module is specifically configured to:
    determine, based on the intrinsic matrix and the rotation matrix, whether multiple feature point pairs in the first image and the second image meet a set condition; and
    in response to the multiple feature point pairs meeting the set condition, determine that the optical centers of the imaging device are aligned when capturing the first image and the second image.
  9. The device according to claim 8, wherein when determining, based on the intrinsic matrix and the rotation matrix, whether multiple feature point pairs in the first image and the second image meet the set condition, the alignment verification module is configured to:
    solve for a second mapping matrix based on the intrinsic matrix and the rotation matrix; and
    determine, based on the correspondence between the second mapping matrix and the first mapping matrix, whether the multiple feature point pairs meet the set condition.
  10. The device according to any one of claims 7-9, wherein the mapping matrix determination module comprises:
    a set determination unit, configured to determine a feature point pair set based on the first image and the second image captured by the imaging device at different poses, wherein the feature point pair set comprises multiple feature point pairs, and each feature point pair comprises a first feature point in the first image and a second feature point in the second image that correspond to each other; and
    a feature point mapping unit, configured to determine the first mapping matrix between the first image and the second image based on the feature point pair set.
  11. The device according to claim 10, wherein the set determination unit is specifically configured to:
    determine, from multiple images captured by the imaging device at different poses, a first image and a second image that were captured consecutively;
    determine multiple feature point pairs in the first image and the second image; and
    obtain the feature point pair set based on the multiple feature point pairs.
  12. The device according to claim 11, wherein when determining the multiple feature point pairs in the first image and the second image, the set determination unit is configured to:
    extract feature points from the first image and the second image respectively, to obtain multiple first feature points corresponding to the first image and multiple second feature points corresponding to the second image;
    determine correspondences between the first feature points and the second feature points based on the feature descriptor corresponding to each of the first feature points and the feature descriptor corresponding to each of the second feature points; and
    determine one feature point pair from each first feature point and second feature point that correspond to each other, to obtain the multiple feature point pairs.
  13. A computer-readable storage medium, characterized in that the storage medium stores a computer program for executing the optical center alignment detection method according to any one of claims 1-6.
  14. An electronic device, characterized by comprising:
    a processor; and
    a memory for storing instructions executable by the processor;
    the processor being configured to read the executable instructions from the memory and execute the instructions to implement the optical center alignment detection method according to any one of claims 1-6.
PCT/CN2022/072968 2021-06-16 2022-01-20 Optical center alignment detection method and device, storage medium, and electronic device WO2022262273A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110663882.X 2021-06-16
CN202110663882.XA CN113129211B (zh) 2021-06-16 2021-06-16 Optical center alignment detection method and device, storage medium, and electronic device

Publications (1)

Publication Number Publication Date
WO2022262273A1 true WO2022262273A1 (zh) 2022-12-22

Family

ID=76783259

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/072968 WO2022262273A1 (zh) 2021-06-16 2022-01-20 光心对齐检测方法和装置、存储介质、电子设备

Country Status (3)

Country Link
US (1) US20220405968A1 (zh)
CN (1) CN113129211B (zh)
WO (1) WO2022262273A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113129211B (zh) 2021-06-16 2021-08-17 贝壳技术有限公司 Optical center alignment detection method and device, storage medium, and electronic device
CN113572978A (zh) * 2021-07-30 2021-10-29 北京房江湖科技有限公司 Panoramic video generation method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106530358A (zh) * 2016-12-15 2017-03-22 北京航空航天大学 Method for calibrating a PTZ camera using only two scene images
CN111325792A (zh) * 2020-01-23 2020-06-23 北京字节跳动网络技术有限公司 Method, device, equipment, and medium for determining camera pose
CN111429353A (zh) * 2020-03-27 2020-07-17 贝壳技术有限公司 Image stitching and panorama stitching method and device, storage medium, and electronic device
CN111445537A (zh) * 2020-06-18 2020-07-24 浙江中控技术股份有限公司 Camera calibration method and system
US20210027493A1 (en) * 2019-07-25 2021-01-28 Second Spectrum, Inc. Data processing systems for real-time camera parameter estimation
CN112819904A (zh) * 2021-03-15 2021-05-18 亮风台(上海)信息科技有限公司 Method and device for calibrating a PTZ camera
CN113129211A (zh) * 2021-06-16 2021-07-16 贝壳技术有限公司 Optical center alignment detection method and device, storage medium, and electronic device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8131113B1 (en) * 2007-11-29 2012-03-06 Adobe Systems Incorporated Method and apparatus for estimating rotation, focal lengths and radial distortion in panoramic image stitching
CN109003226A (zh) * 2017-06-06 2018-12-14 中林信达(北京)科技信息有限责任公司 Real-time panoramic image stitching method and device
CN110009567A (zh) * 2019-04-09 2019-07-12 三星电子(中国)研发中心 Image stitching method and device for fisheye lenses
CN112102419B (zh) * 2020-09-24 2024-01-26 烟台艾睿光电科技有限公司 Dual-light imaging device calibration method and system, and image registration method
CN112927306B (zh) * 2021-02-24 2024-01-16 深圳市优必选科技股份有限公司 Calibration method and device for a photographing apparatus, and terminal equipment

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106530358A (zh) * 2016-12-15 2017-03-22 北京航空航天大学 Method for calibrating a PTZ camera using only two scene images
US20210027493A1 (en) * 2019-07-25 2021-01-28 Second Spectrum, Inc. Data processing systems for real-time camera parameter estimation
CN111325792A (zh) * 2020-01-23 2020-06-23 北京字节跳动网络技术有限公司 Method, device, equipment, and medium for determining camera pose
CN111429353A (zh) * 2020-03-27 2020-07-17 贝壳技术有限公司 Image stitching and panorama stitching method and device, storage medium, and electronic device
CN111445537A (zh) * 2020-06-18 2020-07-24 浙江中控技术股份有限公司 Camera calibration method and system
CN112819904A (zh) * 2021-03-15 2021-05-18 亮风台(上海)信息科技有限公司 Method and device for calibrating a PTZ camera
CN113129211A (zh) * 2021-06-16 2021-07-16 贝壳技术有限公司 Optical center alignment detection method and device, storage medium, and electronic device

Also Published As

Publication number Publication date
CN113129211A (zh) 2021-07-16
CN113129211B (zh) 2021-08-17
US20220405968A1 (en) 2022-12-22

Similar Documents

Publication Publication Date Title
WO2022262273A1 (zh) Optical center alignment detection method and device, storage medium, and electronic device
WO2017181899A1 (zh) Facial liveness verification method and device
WO2019164379A1 (en) Method and system for facial recognition
WO2021196548A1 (zh) Distance determination method, device, and system
WO2022095543A1 (zh) Image frame stitching method and device, readable storage medium, and electronic device
CN111429354B (zh) Image stitching and panorama stitching method and device, storage medium, and electronic device
CN111612842B (zh) Method and device for generating a pose estimation model
CN112037279B (zh) Article position recognition method and device, storage medium, and electronic device
WO2023005170A1 (zh) Panoramic video generation method and device
WO2021189804A1 (zh) Image rectification method and device, and electronic system
WO2023169281A1 (zh) Image registration method and device, storage medium, and electronic device
WO2023125224A1 (zh) Vertical correction method and device for panoramas, electronic device, and storage medium
CN111432119A (zh) Image capture method and device, computer-readable storage medium, and electronic device
CN113724135A (zh) Image stitching method, device, equipment, and storage medium
WO2023082822A1 (zh) Image data processing method and device
TW202301273A (zh) Image registration method, visual positioning method, electronic device, and computer-readable storage medium
WO2016208404A1 (ja) Information processing device and method, and program
CN111402136A (zh) Panorama generation method and device, computer-readable storage medium, and electronic device
CN113793370B (zh) Three-dimensional point cloud registration method and device, electronic device, and readable medium
CN111429353A (zh) Image stitching and panorama stitching method and device, storage medium, and electronic device
CN113592706A (zh) Method and device for adjusting homography matrix parameters
WO2023231435A1 (zh) Visual perception method and device, storage medium, and electronic device
CN113744339B (zh) Method and device for generating panoramic images, electronic device, and storage medium
JP2015032256A (ja) Image processing device and database construction device therefor
CN111310818B (zh) Feature descriptor determination method and device, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22823768

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE