WO2022206728A1 - Live view method, panoramic camera, and computer-readable storage medium - Google Patents

Live view method, panoramic camera, and computer-readable storage medium

Info

Publication number
WO2022206728A1
Authority
WO
WIPO (PCT)
Prior art keywords
longitude
latitude
panoramic camera
real
screen
Prior art date
Application number
PCT/CN2022/083566
Other languages
English (en)
French (fr)
Inventor
何红烨
郭奕滨
Original Assignee
影石创新科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 影石创新科技股份有限公司
Publication of WO2022206728A1 publication Critical patent/WO2022206728A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1407 General aspects irrespective of display type, e.g. determination of decimal point position, display with fixed or driving decimal point, suppression of non-significant zeros
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration

Definitions

  • The present application relates to the technical field of image processing, and in particular to a live view method for a panoramic camera, a panoramic camera, and a computer-readable storage medium.
  • The on-camera screen of existing panoramic cameras can only frame the scene directly in front of the lens. To achieve 360-degree framing, the video preview must first be streamed to a client such as a mobile phone, the client must stitch it into a panoramic image, and framing at an arbitrary angle is then performed on the screen of the mobile client.
  • The purpose of the present invention is to provide a live view method for a panoramic camera, a panoramic camera, and a computer-readable storage medium that address the above defects.
  • In a first aspect, the present invention provides a live view method for a panoramic camera, the method comprising: generating a latitude-longitude map mapping table according to the calibration parameters of each lens of the panoramic camera; calculating first three-dimensional coordinates of the image displayed on the screen of the camera's touch display; calculating second three-dimensional coordinates according to the detected touch direction and angle of the touch display and the first three-dimensional coordinates; mapping the second three-dimensional coordinates to latitude and longitude; and generating a real-time preview on the touch display according to the latitude and longitude, the latitude-longitude map mapping table, and the acquired real-time image data of each lens.
  • In a second aspect, the present invention provides a panoramic camera, including a camera body, at least two lenses, a touch display screen disposed on the camera body, and: a latitude-longitude map mapping table module for generating a latitude-longitude map mapping table according to the calibration parameters of each lens of the panoramic camera; a first calculation module for calculating first three-dimensional coordinates of the image displayed on the screen of the camera's touch display; a second calculation module for calculating second three-dimensional coordinates according to the detected touch direction and angle of the touch display and the first three-dimensional coordinates; a mapping module for mapping the second three-dimensional coordinates to latitude and longitude; and a preview module for generating a real-time preview on the touch display according to the latitude and longitude, the latitude-longitude map mapping table, and the acquired real-time image data of each lens. The latitude-longitude map mapping table contains, for any point on the sphere centered on the panoramic camera, the abscissa, ordinate, and weight value of each corresponding lens.
  • In a third aspect, the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above live view method for a panoramic camera.
  • Compared with the prior art, the present invention does not need to stitch the images of multiple lenses globally; it only needs to stitch the image to be shown on the display, so live view at any angle is achieved directly on the screen of the camera's touch display and the preview direction is controlled by sliding a finger. This better assists the user when shooting photos or video, improves the user experience, and offers good real-time performance, convenient operation, and low implementation cost.
  • FIG. 1 is a flowchart of the live view method of the panoramic camera in Embodiment 1 of the present invention.
  • FIG. 2 is a structural block diagram of a panoramic camera in Embodiment 2 of the present invention.
  • As shown in FIG. 1, a preferred embodiment of the live view method of the panoramic camera in this embodiment includes the following steps.
  • The panoramic camera in this embodiment includes at least two lenses (such as fisheye or wide-angle lenses), and the fields of view of adjacent lenses partially overlap to form a 360° panoramic field of view.
  • The latitude-longitude map mapping table in this embodiment is generated as follows: first, parameter calibration is performed on the real-time image data (video) captured by the panoramic camera to obtain calibration parameters, which include the horizontal angle, pitch angle, and spin angle of each lens; then a corresponding lens latitude-longitude table is established for each lens and merged with the calibration parameters into the latitude-longitude map mapping table, so that every point in space can find its coordinates and weights in the real-time image data captured by at least one lens.
  • The latitude-longitude map mapping table includes an abscissa, an ordinate, and a weight value.
  • The abscissa represents the longitude on the sphere centered on the panoramic camera, and the ordinate represents the latitude on that sphere.
  • Here, "centered on the panoramic camera" means centered on the geometric center of the camera's overall shape or on its center of gravity; in this embodiment, the geometric center of the focal points of the lenses can also be used as the center. For example, with two lenses the center is the midpoint of the line connecting the two focal points; with three lenses it is the centroid of the triangle formed by the three focal points; with four lenses it is the intersection of the diagonals of the quadrilateral formed by the four focal points.
  • For a point located in the images generated by two adjacent lenses, the weights are set according to the point's distance from the center of each lens's image: the closer the point is to an image's center, the larger that lens's weight, and the two weights sum to 1, which effectively eliminates the stitching seam between the two lens images.
  • For example, each lens's weight at a point can be obtained by a weighted average: if the distance from the point to the first lens is d1 and the distance to the second lens is d2, the weight of the first lens is d2/(d1+d2) and the weight of the second lens is d1/(d1+d2); for a point located in the image generated by only one lens, that lens's weight is set to 1 to preserve the sharpness of the full-space image.
  • For a point located in the images generated by three or more lenses, the two lenses whose image centers are closest to the point are selected, and different weights are then set according to the point's distance from the centers of the images generated by those two lenses.
  • In this way, the first three-dimensional coordinates of all points on the touch screen of the panoramic camera can be obtained.
  • Next, the sliding state of a finger or stylus on the screen of the touch display is detected.
  • Sliding left or right on the screen swings the viewing angle about the yaw axis, and sliding up or down swings it about the pitch axis; there is no swing about the roll axis, i.e. the roll-axis angle remains 0. With θ denoting the detected radians of left-right sliding and φ the detected radians of up-down sliding, the 3*3 rotation matrix rot can be calculated.
  • For any first three-dimensional coordinate (X1, Y1, Z1), the second three-dimensional coordinates (X2, Y2, Z2) of the displayed image are calculated as:
  • X2 = X1*rot(0,0) + Y1*rot(0,1) + Z1*rot(0,2);
  • Y2 = X1*rot(1,0) + Y1*rot(1,1) + Z1*rot(1,2);
  • Z2 = X1*rot(2,0) + Y1*rot(2,1) + Z1*rot(2,2);
  • In step S5, the coordinates and weight values in the real-time image data (video) are looked up in the latitude-longitude map mapping table according to the latitude and longitude obtained in step S4; the corresponding color value at those coordinates, or the color value after weight calculation, is then copied into the target image to obtain the real-time preview, which is displayed on the screen of the touch display.
  • The embodiment of the present invention only needs to stitch the content to be displayed on the camera's touch screen and does not need to stitch the full images of all lenses, so the processing speed is fast and the method is well suited to live view on a panoramic camera.
  • The panoramic camera in this embodiment includes two fisheye lenses (other numbers are also possible) and a touch display screen.
  • The two fisheye lenses are mounted on opposite sides of the panoramic camera so that their fields of view partially overlap, forming a 360° panoramic field of view; the roughly rectangular touch screen displays the camera's preview image.
  • The panoramic camera further includes: a latitude-longitude map mapping table module for generating the latitude-longitude map mapping table according to the calibration parameters of each lens; a first calculation module for calculating the first three-dimensional coordinates of the image displayed on the screen of the camera's touch display; a second calculation module for calculating the second three-dimensional coordinates according to the detected touch direction and angle of the touch display and the first three-dimensional coordinates; a mapping module for mapping the second three-dimensional coordinates to latitude and longitude; and a preview module for generating the real-time preview on the touch display according to the latitude and longitude, the latitude-longitude map mapping table, and the acquired real-time image data of each lens.
  • The latitude-longitude map mapping table contains, for any point on the sphere centered on the panoramic camera, the abscissa, ordinate, and weight value of each corresponding lens.
  • This embodiment discloses a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the live view method of the panoramic camera in Embodiment 1 is implemented.
  • The storage medium can be a computer-readable storage medium, for example, a ferroelectric random access memory (FRAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM); it can also be any device including one of, or any combination of, the above memories.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

The present invention discloses a live view method for a panoramic camera. The method includes: generating a latitude-longitude map mapping table according to the calibration parameters of each lens of the panoramic camera; calculating first three-dimensional coordinates of the image displayed on the screen of the camera's touch display; calculating second three-dimensional coordinates according to the detected touch direction and angle of the touch display and the first three-dimensional coordinates; mapping the second three-dimensional coordinates to latitude and longitude; and generating a real-time preview on the touch display according to the latitude and longitude and the latitude-longitude map mapping table. The latitude-longitude map mapping table contains, for any point on the sphere centered on the panoramic camera, the abscissa, ordinate, and weight value of each corresponding lens. Compared with the prior art, the present invention does not need to stitch the images of multiple lenses globally, but only the image to be shown on the display, and therefore offers good real-time performance, convenient operation, and low implementation cost.

Description

Live view method, panoramic camera, and computer-readable storage medium
Technical Field
The present application relates to the technical field of image processing, and in particular to a live view method for a panoramic camera, a panoramic camera, and a computer-readable storage medium.
Background Art
The on-camera screen of existing panoramic cameras can only frame the scene directly in front of the lens. To achieve 360-degree framing, the video preview stream must first be transmitted to a client such as a mobile phone, the client then stitches it into a panoramic image, and framing at an arbitrary angle is finally performed on the screen of the mobile client.
Technical Problem
However, the above framing approach has two main defects:
1. Real-time preview cannot be achieved.
Transmitting the video preview stream to the client takes time and stitching the images takes a large amount of time, so extra latency is inevitable.
2. Operation is inconvenient and the implementation cost is high.
In addition, because an extra client is required to realize this function, operation is inevitably inconvenient, and the implementation cost is relatively high.
It is therefore necessary to improve the existing live view method of panoramic cameras.
Technical Solution
The purpose of the present invention is to provide a live view method for a panoramic camera, a panoramic camera, and a computer-readable storage medium that address the above defects.
In a first aspect, the present invention provides a live view method for a panoramic camera, the method comprising:
generating a latitude-longitude map mapping table according to the calibration parameters of each lens of the panoramic camera; calculating first three-dimensional coordinates of the image displayed on the screen of the camera's touch display; calculating second three-dimensional coordinates according to the detected touch direction and angle of the touch display and the first three-dimensional coordinates; mapping the second three-dimensional coordinates to latitude and longitude; and generating a real-time preview on the touch display according to the latitude and longitude, the latitude-longitude map mapping table, and the acquired real-time image data of each lens; wherein the latitude-longitude map mapping table contains, for any point on the sphere centered on the panoramic camera, the abscissa, ordinate, and weight value of each corresponding lens.
In a second aspect, the present invention provides a panoramic camera, comprising a camera body, at least two lenses, a touch display screen disposed on the camera body, and: a latitude-longitude map mapping table module for generating a latitude-longitude map mapping table according to the calibration parameters of each lens of the panoramic camera; a first calculation module for calculating first three-dimensional coordinates of the image displayed on the screen of the camera's touch display; a second calculation module for calculating second three-dimensional coordinates according to the detected touch direction and angle of the touch display and the first three-dimensional coordinates; a mapping module for mapping the second three-dimensional coordinates to latitude and longitude; and a preview module for generating a real-time preview on the touch display according to the latitude and longitude, the latitude-longitude map mapping table, and the acquired real-time image data of each lens; wherein the latitude-longitude map mapping table contains, for any point on the sphere centered on the panoramic camera, the abscissa, ordinate, and weight value of each corresponding lens.
In a third aspect, the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above live view method for a panoramic camera.
Technical Effect
Compared with the prior art, the present invention does not need to stitch the images of multiple lenses globally; it only needs to stitch the image to be shown on the display. Live view at any angle is thus achieved directly on the screen of the camera's touch display, and the preview direction is controlled by sliding a finger, which better assists the user when shooting photos or video, improves the user experience, and offers good real-time performance, convenient operation, and low implementation cost.
Description of the Drawings
FIG. 1 is a flowchart of the live view method of the panoramic camera in Embodiment 1 of the present invention.
FIG. 2 is a structural block diagram of the panoramic camera in Embodiment 2 of the present invention.
Embodiments of the Invention
To make the purpose, technical solution, and beneficial effects of the present invention clearer, the present invention is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it.
The technical solution of the present invention is illustrated by the following specific embodiments.
Embodiment 1
As shown in FIG. 1, a preferred embodiment of the live view method of the panoramic camera in this embodiment includes the following steps.
S1. Generate a latitude-longitude map mapping table according to the calibration parameters of each lens of the panoramic camera.
The panoramic camera in this embodiment includes at least two lenses (such as fisheye or wide-angle lenses), and the fields of view of adjacent lenses partially overlap to form a 360° panoramic field of view. The latitude-longitude map mapping table in this embodiment is generated as follows: first, parameter calibration is performed on the real-time image data (video) captured by the panoramic camera to obtain calibration parameters, which include the horizontal angle, pitch angle, and spin angle of each lens; then a corresponding lens latitude-longitude table is established for each lens and merged with the calibration parameters into the latitude-longitude map mapping table, so that every point in space can find its coordinates and weights in the real-time image data captured by at least one lens. The mapping table includes an abscissa, an ordinate, and a weight value; the abscissa represents the longitude on the sphere centered on the panoramic camera, and the ordinate represents the latitude on that sphere. Here, "centered on the panoramic camera" means centered on the geometric center of the camera's overall shape or on its center of gravity; in this embodiment, the geometric center of the focal points of the lenses can also be used as the center. For example, with two lenses the center is the midpoint of the line connecting the two focal points; with three lenses it is the centroid of the triangle formed by the three focal points; with four lenses it is the intersection of the diagonals of the quadrilateral formed by the four focal points.
When the lens latitude-longitude tables are merged into the latitude-longitude map mapping table, a point located in the images generated by two adjacent lenses is given weights according to its distance from the center of each lens's image: the closer the point is to an image's center, the larger that lens's weight, and the two weights sum to 1, which effectively eliminates the stitching seam between the two lens images. For example, each lens's weight at a point can be obtained by a weighted average: if the distance from the point to the first lens is d1 and the distance to the second lens is d2, the weight of the first lens is d2/(d1+d2) and the weight of the second lens is d1/(d1+d2). For a point located in the image generated by only one lens, that lens's weight is set to 1 to preserve the sharpness of the full-space image. For a point located in the images generated by three or more lenses, the two lenses whose image centers are closest to the point are selected, and different weights are then set according to the point's distance from the centers of the images generated by those two lenses.
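The following C++ sketch (illustrative only, not code from the patent; the LensSample structure and the assignWeights name are ours) shows one way to assign the blending weights described above, assuming the distance from the point to each candidate lens's image center is already known:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical record: one lens that sees the point, with the distance from
// the point to the center of that lens's image.
struct LensSample {
    int   lensIndex;
    float distToImageCenter;
    float weight = 0.0f;
};

// Assign blending weights as described in the text:
//  - one lens          -> weight 1
//  - two lenses        -> w1 = d2/(d1+d2), w2 = d1/(d1+d2) (closer center => larger weight)
//  - three or more     -> keep only the two closest image centers, then blend as above
void assignWeights(std::vector<LensSample>& samples) {
    if (samples.empty()) return;
    if (samples.size() == 1) { samples[0].weight = 1.0f; return; }

    // Keep the two lenses whose image centers are closest to the point.
    std::sort(samples.begin(), samples.end(),
              [](const LensSample& a, const LensSample& b) {
                  return a.distToImageCenter < b.distToImageCenter;
              });
    samples.resize(2);

    const float d1 = samples[0].distToImageCenter;
    const float d2 = samples[1].distToImageCenter;
    const float sum = d1 + d2;
    samples[0].weight = (sum > 0.0f) ? d2 / sum : 0.5f; // closer lens gets the larger weight
    samples[1].weight = (sum > 0.0f) ? d1 / sum : 0.5f; // the two weights sum to 1
}
```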
S2. Calculate the first three-dimensional coordinates of the image displayed on the screen of the camera's touch display.
In rectilinear projection mode, assuming the view is directly in front of the panoramic camera, the coordinates of the image displayed on the touch screen are calculated as follows.
For any point on the screen with screen coordinates (i, j), the first three-dimensional coordinates (X1, Y1, Z1) of the displayed image are calculated as: X1=tan(PI*0.5-fov*PI/180*0.5)*rayW*0.5; Y1=(j-rayW*0.5); Z1=(i-rayH*0.5); where rayW is the screen width, rayH is the screen height, fov is the field of view of the displayed image, and PI is the circular constant π.
In this way, the first three-dimensional coordinates of all points on the touch screen of the panoramic camera can be obtained.
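A minimal C++ sketch of this step, directly transcribing the X1/Y1/Z1 formulas above; the function and type names are ours, not the patent's:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// First 3-D coordinate of the screen pixel (i, j) under rectilinear projection:
// X1 = tan(PI*0.5 - fov*PI/180*0.5) * rayW * 0.5, Y1 = j - rayW*0.5, Z1 = i - rayH*0.5.
Vec3 screenToFirstCoord(float i, float j, float rayW, float rayH, float fovDeg) {
    const float PI = 3.14159265358979f;
    Vec3 p;
    p.x = std::tan(PI * 0.5f - fovDeg * PI / 180.0f * 0.5f) * rayW * 0.5f;
    p.y = j - rayW * 0.5f;
    p.z = i - rayH * 0.5f;
    return p;
}
```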
S3. Calculate the second three-dimensional coordinates according to the detected touch direction and angle of the touch display and the first three-dimensional coordinates.
In this step, the sliding state of a finger or stylus on the screen of the touch display is detected. In this embodiment, sliding left or right on the screen swings the viewing angle about the yaw axis, and sliding up or down swings it about the pitch axis; since sliding a finger or stylus on the screen does not flip the screen, there is no swing about the roll axis, i.e. the roll-axis angle remains 0. Let θ be the detected radians of left-right sliding on the screen and φ the detected radians of up-down sliding (this symbol is given as a formula image in the original); the 3*3 rotation matrix rot can then be calculated:
rot(0,1)=-sinf(θ);
rot(1,1)=cosf(θ);
rot(2,1)=0;
(The remaining entries of the 3*3 matrix rot are given as formula images in the original publication.)
For any first three-dimensional coordinate (X1, Y1, Z1), the second three-dimensional coordinates (X2, Y2, Z2) of the displayed image are calculated as:
X2=X1*rot(0,0)+Y1*rot(0,1)+Z1*rot(0,2);
Y2=X1*rot(1,0)+Y1*rot(1,1)+Z1*rot(1,2);
Z2=X1*rot(2,0)+Y1*rot(2,1)+Z1*rot(2,2);
Substituting the entries of the 3*3 rotation matrix rot into the above gives expanded formulas for the second three-dimensional coordinates (X2, Y2, Z2); these expanded formulas are given as formula images in the original publication.
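The following C++ sketch shows one rotation matrix that is consistent with the entries preserved in the text (a yaw rotation by θ composed with a pitch rotation by φ, roll fixed at 0) together with its application to a first coordinate; since most entries of rot appear only as formula images in the original, the full matrix here is our assumption:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Build a 3x3 rotation for yaw theta (left-right swipe) and pitch phi (up-down swipe),
// roll = 0. This Rz(theta)*Ry(phi) form reproduces the entries preserved in the text:
// rot(0,1) = -sin(theta), rot(1,1) = cos(theta), rot(2,1) = 0.
void buildRot(float theta, float phi, float rot[3][3]) {
    rot[0][0] = std::cos(theta) * std::cos(phi);
    rot[0][1] = -std::sin(theta);
    rot[0][2] = std::cos(theta) * std::sin(phi);
    rot[1][0] = std::sin(theta) * std::cos(phi);
    rot[1][1] = std::cos(theta);
    rot[1][2] = std::sin(theta) * std::sin(phi);
    rot[2][0] = -std::sin(phi);
    rot[2][1] = 0.0f;
    rot[2][2] = std::cos(phi);
}

// Apply rot to a first coordinate, exactly as in the X2/Y2/Z2 formulas above.
Vec3 firstToSecondCoord(const Vec3& p1, const float rot[3][3]) {
    Vec3 p2;
    p2.x = p1.x * rot[0][0] + p1.y * rot[0][1] + p1.z * rot[0][2];
    p2.y = p1.x * rot[1][0] + p1.y * rot[1][1] + p1.z * rot[1][2];
    p2.z = p1.x * rot[2][0] + p1.y * rot[2][1] + p1.z * rot[2][2];
    return p2;
}
```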
S4. Map the second three-dimensional coordinates to latitude and longitude.
For any second three-dimensional coordinate (X2, Y2, Z2), the latitude and longitude (fi, theta) are calculated as:
fi=atan2f(Y2,X2);
theta=PI*0.5-atan2f(Z2,sqrt(X2*X2+Y2*Y2));
When the computed value of fi is less than 0, fi is adjusted, the adjusted value being 2*PI-fi, so that fi stays within the range 0 to 2*PI. Here PI is the circular constant π, and atan2f(a, b) is the arctangent of b/a expressed in radians.
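A small C++ sketch of step S4, transcribing the fi/theta formulas above; the handling of negative fi is our reading of the adjustment described in the text:

```cpp
#include <cmath>

// Map a second 3-D coordinate to (fi, theta) as in step S4.
void coordToLatLon(float X2, float Y2, float Z2, float& fi, float& theta) {
    const float PI = 3.14159265358979f;
    fi = std::atan2(Y2, X2);
    theta = PI * 0.5f - std::atan2(Z2, std::sqrt(X2 * X2 + Y2 * Y2));
    if (fi < 0.0f) {
        // The text states the adjusted value as 2*PI-fi; for negative fi this is read
        // here as the usual wrap into [0, 2*PI), i.e. fi + 2*PI.
        fi += 2.0f * PI;
    }
}
```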
S5. Generate the real-time preview on the touch display according to the latitude and longitude, the latitude-longitude map mapping table, and the acquired real-time image data of each lens.
The coordinates and weight values in the real-time image data (video) are looked up in the latitude-longitude map mapping table according to the latitude and longitude of the second three-dimensional coordinates obtained in step S4; the corresponding color value at those coordinates, or the color value after weight calculation, is then copied into the target image to obtain the real-time preview, which is displayed on the screen of the touch display.
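As a rough illustration of step S5 (the table layout, structure names, and grayscale frames are our assumptions, not data structures from the patent), one per-pixel lookup might look like this:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical mapping-table entry: for one lens, the source pixel (u, v) in that
// lens's real-time image and the blending weight at this (longitude, latitude) bin.
struct MapEntry {
    int   lensIndex;
    int   u, v;
    float weight;   // 1.0 for a single-lens point; otherwise the two weights sum to 1
};

// One bin of the latitude-longitude map mapping table: at most two contributing lenses.
struct MapBin {
    MapEntry entries[2];
    int      count;
};

// Blend the color for one preview pixel from the per-lens real-time frames.
// frames[lens] is assumed to be a row-major 8-bit grayscale image of width frameWidth.
uint8_t samplePreviewPixel(const MapBin& bin,
                           const std::vector<std::vector<uint8_t>>& frames,
                           int frameWidth) {
    float acc = 0.0f;
    for (int k = 0; k < bin.count; ++k) {
        const MapEntry& e = bin.entries[k];
        const uint8_t src = frames[e.lensIndex][e.v * frameWidth + e.u];
        acc += e.weight * static_cast<float>(src); // copy or weight-blend the color value
    }
    return static_cast<uint8_t>(acc + 0.5f);
}
```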
This embodiment shows that the present invention only needs to stitch the content to be displayed on the camera's touch screen and does not need to stitch the full images of all lenses, so processing is fast and well suited to live view on a panoramic camera.
Embodiment 2
FIG. 2 shows the structural block diagram of the panoramic camera in this embodiment. The panoramic camera includes two fisheye lenses (other numbers are also possible) and a touch display screen. The two fisheye lenses are mounted on opposite sides of the camera so that their fields of view partially overlap, forming a 360° panoramic field of view; the roughly rectangular touch screen displays the camera's preview image. The panoramic camera further includes: a latitude-longitude map mapping table module for generating the latitude-longitude map mapping table according to the calibration parameters of each lens; a first calculation module for calculating the first three-dimensional coordinates of the image displayed on the screen of the touch display; a second calculation module for calculating the second three-dimensional coordinates according to the detected touch direction and angle of the touch display and the first three-dimensional coordinates; a mapping module for mapping the second three-dimensional coordinates to latitude and longitude; and a preview module for generating the real-time preview on the touch display according to the latitude and longitude, the latitude-longitude map mapping table, and the acquired real-time image data of each lens. The latitude-longitude map mapping table contains, for any point on the sphere centered on the panoramic camera, the abscissa, ordinate, and weight value of each corresponding lens.
Embodiment 3
This embodiment discloses a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the live view method of the panoramic camera in Embodiment 1 is implemented.
Those of ordinary skill in the art will understand that all or part of the steps in the methods of the above embodiments can be completed by a program instructing the relevant hardware. The storage medium can be a computer-readable storage medium, for example, a ferroelectric random access memory (FRAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM); it can also be any device including one of, or any combination of, the above memories.
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

  1. A live view method for a panoramic camera, characterized by comprising:
    generating a latitude-longitude map mapping table according to the calibration parameters of each lens of the panoramic camera;
    calculating first three-dimensional coordinates of the image displayed on the screen of the touch display of the panoramic camera;
    calculating second three-dimensional coordinates according to the detected touch direction and angle of the touch display and the first three-dimensional coordinates;
    mapping the second three-dimensional coordinates to latitude and longitude;
    generating a real-time preview on the touch display according to the latitude and longitude, the latitude-longitude map mapping table, and the acquired real-time image data of each lens;
    wherein the latitude-longitude map mapping table contains, for any point on the sphere centered on the panoramic camera, the abscissa, ordinate, and weight value of each corresponding lens.
  2. The live view method for a panoramic camera according to claim 1, characterized in that generating the latitude-longitude map mapping table according to the calibration parameters of each lens of the panoramic camera comprises: performing parameter calibration on the real-time image data captured by the panoramic camera to obtain calibration parameters, establishing a corresponding lens latitude-longitude table for each lens, and combining the lens calibration parameters and the lens latitude-longitude tables to generate the latitude-longitude map mapping table.
  3. The live view method for a panoramic camera according to claim 1, characterized in that combining the lens calibration parameters and the lens latitude-longitude tables to generate the latitude-longitude map mapping table is specifically: for a point located in the images generated by two adjacent lenses at the same time, setting its weights according to its distance from the center position of each of the two generated images.
  4. The live view method for a panoramic camera according to claim 1, characterized in that calculating the first three-dimensional coordinates of the image displayed on the screen of the touch display of the panoramic camera is: for any point on the screen with screen coordinates (i, j), the first three-dimensional coordinates (X1, Y1, Z1) of the displayed image are calculated as:
    X1=tan(PI*0.5-fov*PI/180*0.5)*rayW*0.5;
    Y1=(j-rayW*0.5);
    Z1=(i-rayH*0.5);
    wherein rayW is the screen width, rayH is the screen height, fov is the field of view of the displayed image, and PI is the circular constant π.
  5. The live view method for a panoramic camera according to claim 2, characterized in that calculating the second three-dimensional coordinates according to the detected touch direction and angle of the touch display and the first three-dimensional coordinates is: for any first three-dimensional coordinate (X1, Y1, Z1), the second three-dimensional coordinates (X2, Y2, Z2) of the displayed image are calculated as:
    (The formulas for X2, Y2, and Z2 are given as formula images in the original publication.)
    wherein θ is the detected radians of left-right sliding of the screen, and φ (given as a formula image in the original) is the detected radians of up-down sliding of the screen.
  6. The live view method for a panoramic camera according to claim 1, characterized in that mapping the second three-dimensional coordinates to latitude and longitude is specifically: for any second three-dimensional coordinate (X2, Y2, Z2), the latitude and longitude (fi, theta) are calculated as:
    fi=atan2f(Y2,X2);
    theta=PI*0.5-atan2f(Z2,sqrt(X2*X2+Y2*Y2));
    wherein PI is the circular constant π, and atan2f(a, b) is the arctangent of b/a expressed in radians.
  7. The live view method for a panoramic camera according to claim 1, characterized in that generating the real-time preview on the touch display according to the latitude and longitude, the latitude-longitude map mapping table, and the acquired real-time image data of each lens comprises:
    looking up, according to the latitude and longitude of the obtained second three-dimensional coordinates, the coordinates in the real-time image data corresponding to each lens in the latitude-longitude map mapping table;
    copying the corresponding color value at each coordinate, or the color value after weight calculation, into the target image to obtain the real-time preview.
  8. A panoramic camera, comprising at least two lenses and a touch display screen, characterized by further comprising:
    a latitude-longitude map mapping table module for generating a latitude-longitude map mapping table according to the calibration parameters of each lens of the panoramic camera;
    a first calculation module for calculating first three-dimensional coordinates of the image displayed on the screen of the touch display of the panoramic camera;
    a second calculation module for calculating second three-dimensional coordinates according to the detected touch direction and angle of the touch display and the first three-dimensional coordinates;
    a mapping module for mapping the second three-dimensional coordinates to latitude and longitude;
    a preview module for generating a real-time preview on the touch display according to the latitude and longitude, the latitude-longitude map mapping table, and the acquired real-time image data of each lens;
    wherein the latitude-longitude map mapping table includes an abscissa, an ordinate, and a weight value.
  9. The panoramic camera according to claim 8, characterized by further comprising:
    a weighted calculation module for, when the abscissa and the ordinate correspond to specific coordinates in the real-time image data of multiple lenses, performing a weighted-average calculation according to the corresponding specific coordinates of each piece of real-time image data and the distances from those specific coordinates to the respective real-time image centers, so as to obtain the coordinate value.
  10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the live view method for a panoramic camera according to any one of claims 1 to 7 is implemented.
PCT/CN2022/083566 2021-03-31 2022-03-29 实时取景方法、全景相机及计算机可读存储介质 WO2022206728A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110350854.2 2021-03-31
CN202110350854.2A CN115147268A (zh) 2021-03-31 2021-03-31 实时取景方法、全景相机及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2022206728A1 true WO2022206728A1 (zh) 2022-10-06

Family

ID=83404463

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/083566 WO2022206728A1 (zh) 2021-03-31 2022-03-29 实时取景方法、全景相机及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN115147268A (zh)
WO (1) WO2022206728A1 (zh)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007192832A (ja) * 2007-03-06 2007-08-02 Iwate Univ 魚眼カメラの校正方法。
KR101383997B1 (ko) * 2013-03-08 2014-04-10 홍익대학교 산학협력단 실시간 동영상 합병 방법 및 시스템, 실시간 동영상 합병을 이용한 영상 감시 시스템 및 가상 영상 투어 시스템
US20170301065A1 (en) * 2016-04-15 2017-10-19 Gopro, Inc. Systems and methods for combined pipeline processing of panoramic images
CN107071268A (zh) * 2017-01-20 2017-08-18 深圳市圆周率软件科技有限责任公司 一种多目全景相机全景拼接方法及系统
CN107948662A (zh) * 2017-12-04 2018-04-20 深圳岚锋创视网络科技有限公司 一种拍摄期间实时预览全景画面的方法、装置及全景相机

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116503492A (zh) * 2023-06-27 2023-07-28 北京鉴智机器人科技有限公司 自动驾驶系统中双目相机模组标定方法及标定装置

Also Published As

Publication number Publication date
CN115147268A (zh) 2022-10-04

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22778910

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22778910

Country of ref document: EP

Kind code of ref document: A1