TWI837563B - Image processing system and method thereof for generating projection images based on a multiple-lens camera - Google Patents


Info

Publication number
TWI837563B
TWI837563B
Authority
TW
Taiwan
Prior art keywords
image
lens
vertex
row
projection
Prior art date
Application number
TW110149260A
Other languages
Chinese (zh)
Other versions
TW202327349A (en)
Inventor
呂忠晏
Original Assignee
信驊科技股份有限公司
Filing date
Publication date
Application filed by 信驊科技股份有限公司
Priority to TW110149260A
Publication of TW202327349A
Application granted
Publication of TWI837563B


Abstract

An image processing system is disclosed, comprising an M-lens camera, a compensation device and a correspondence generator. The M-lens camera generates M lens images. The compensation device generates a projection image according to a first vertex list and the M lens images. The correspondence generator is configured to: conduct calibration for vertices to define vertex mappings; horizontally and vertically scan each lens image to determine the texture coordinates of its image center; determine the texture coordinates of control points according to the vertex mappings and P1 control points in each overlap region of the projection image; and determine two adjacent control points and a coefficient blending weight for each vertex in each lens image according to the texture coordinates of the control points and of the image center in each lens image, so as to generate the first vertex list.

Description

Image processing system and method for generating projection images using a multi-lens camera

The present invention relates to wide-angle images and, more particularly, to an image processing system and method that use an inward-facing or outward-facing multi-lens camera to generate a projection image.

FIG. 1A shows the relationship between a cubic framework 11A and a sphere 12 disclosed in Taiwan (R.O.C.) Patent No. I728620, the contents of which are incorporated herein by reference in their entirety. FIG. 1B shows an equirectangular panoramic image derived from the equirectangular projection of six lens images (top, bottom, back, left, right, front) captured by a six-lens camera mounted on the six working surfaces of the cubic framework 11A. Since there are six lens images in FIG. 1B, it can be inferred that the six lenses (top, bottom, back, left, right, front) of the camera mounted on the six working surfaces of the cubic framework 11A each face outward relative to the center of gravity of the framework 11A, as shown in FIG. 2A. The interior angle θ1 formed by any two adjacent surfaces/edges of the cubic framework 11A (on which the six lenses are mounted) equals 90°. Hereinafter, the six-lens camera in FIG. 2A is referred to as an "outward-facing six-lens camera". FIG. 2B illustrates an outward-facing three-lens camera mounted on a framework 11B; it generates three lens images, which are combined to form a wide-angle image, and the interior angle θ2 formed by any two adjacent surfaces/edges of the framework 11B (on which the three lenses are mounted) is less than 180°. However, a disadvantage of such outward-facing multi-lens cameras is that the lenses protrude from the body 22 of the electronic product, making the product inconvenient to carry and the lenses themselves prone to abrasion.

Therefore, there is an urgent need in the industry for an image processing system with an inward-facing multi-lens camera that produces wide-angle images while keeping the lenses free from wear.

In view of the above problems, one object of the present invention is to provide an image processing system with an inward-facing multi-lens camera that generates projection images while keeping the lenses free from wear.

According to one embodiment of the present invention, an image processing system is provided, comprising a camera with M lenses, a compensation device and a correspondence generator. The M-lens camera captures a view covering an X-degree horizontal field of view (HFOV) and a Y-degree vertical field of view (VFOV) to generate M lens images. The compensation device generates a projection image according to a first vertex list and the M lens images. The correspondence generator performs a set of operations, including: calibrating a plurality of vertices to define a plurality of first vertex mappings between the M lens images and the projection image; scanning each lens image horizontally and vertically to determine the image center of each lens image; determining the texture coordinates of all control points according to the first vertex mappings and P1 control points in each overlap region of the projection image; and determining, for each vertex in each lens image, two neighboring control points and a coefficient blending weight according to the texture coordinates of all control points and the image center of each lens image, so as to generate the first vertex list, where X<=360, Y<180, M>=2 and P1>=3.

Another embodiment of the present invention provides an image processing method, comprising: calibrating a plurality of vertices to define a plurality of first vertex mappings between M lens images and a projection image, wherein the M lens images are generated by an M-lens camera capturing a view covering an X-degree HFOV and a Y-degree VFOV; scanning each lens image horizontally and vertically to determine the image center of each lens image; determining the texture coordinates of all control points according to the first vertex mappings and P1 control points in each overlap region of the projection image; determining, for each vertex in each lens image, two neighboring control points and a coefficient blending weight according to the texture coordinates of all control points and the image center of each lens image, to generate a first vertex list; and generating the projection image according to the first vertex list and the M lens images, where X<=360, Y<180, M>=2 and P1>=3.

The above and other objects and advantages of the present invention are described in detail below with reference to the accompanying drawings, the detailed description of the embodiments, and the claims.

Throughout the specification and the claims that follow, singular forms such as "a", "an" and "the" include both singular and plural referents unless otherwise specified herein. Related terms used throughout the specification and the claims are defined as follows unless otherwise specified herein. Throughout the specification, circuit elements having the same function are denoted by the same reference numerals.

One feature of the present invention is to use an inward-facing multi-lens camera to generate multiple lens images, and then apply stitching and blending to produce a wide-angle image.

FIG. 3A shows two side views of an inward-facing three-lens camera mounted on a framework 11C according to an embodiment of the present invention. FIG. 3B shows two side views of an inward-facing two-lens camera mounted on a framework 11D according to an embodiment of the present invention. The reflex angle θ3 formed by any two adjacent surfaces/edges of the framework 11C and the reflex angle θ4 formed by any two adjacent surfaces/edges of the framework 11D are both greater than 180° and less than 270°. Each outward-facing multi-lens camera has an internal center, whereas each inward-facing multi-lens camera has an external center. For example, the intersection O1 of the three optical axes D1~D3 of the three lenses (left lens, front lens, right lens) of the outward-facing three-lens camera in FIG. 2B is formed inside the framework 11B, or below the three lenses, and is therefore called an "internal center". In contrast, the intersection O2 of the two optical axes D4~D5 of the two lenses (lens A, lens B) of the inward-facing two-lens camera in FIG. 3B is formed outside the framework 11D, or above the two lenses; likewise, the intersection O3 of the three optical axes D6, D2 and D7 of the three lenses (lens A, lens C, lens B) of the inward-facing three-lens camera in FIG. 4A is formed outside the framework 11C, or above the three lenses. The intersections O2 and O3 are therefore called "external centers".

An advantage of the aforementioned inward-facing multi-lens camera is that the lenses are enclosed within the body 32/33 of the electronic product, so the product is easy to carry and the lenses themselves are protected from wear.

FIG. 4A illustrates how an inward-facing three-lens camera and an outward-facing three-lens camera are mounted. FIG. 4B illustrates a wide-angle image composed of three lens images output by the outward-facing three-lens camera of FIG. 4A. FIG. 4C illustrates a wide-angle image composed of three lens images output by the inward-facing three-lens camera of FIG. 4A. FIG. 4D illustrates a wide-angle image composed of two lens images output by the inward-facing two-lens camera of FIG. 3B. The two camera arrangements in FIG. 4A are shown only for comparison. The order in which the three lenses of the outward-facing three-lens camera in FIG. 4A (from left to right: left lens, front lens, right lens) are mounted on the framework 11B is the same as the order in which the three lens images are arranged in the wide-angle image of FIG. 4B (from left to right: left lens image, front lens image, right lens image). However, the order in which the three lenses of the inward-facing three-lens camera in FIG. 4A (from left to right: lens A, lens C, lens B) are mounted on the framework 11C is reversed with respect to the order in which the three lens images are arranged in the wide-angle image of FIG. 4C (from left to right: lens B image, lens C image, lens A image); likewise, the order in which the two lenses of the inward-facing two-lens camera in FIG. 3B (from left to right: lens A, lens B) are mounted on the framework 11D is reversed with respect to the order in which the two lens images are arranged in the wide-angle image of FIG. 4D (from left to right: lens B image, lens A image). In the example of FIG. 4A, the angle θ2 of the framework 11B and the angle θ3 of the framework 11C are chosen appropriately so that the angle of the optical axis D7 of lens B of the inward-facing three-lens camera relative to the horizontal (not shown) equals the angle of the optical axis D1 of the left lens of the outward-facing three-lens camera relative to the horizontal, and the angle of the optical axis D6 of lens A of the inward-facing three-lens camera relative to the horizontal equals the angle of the optical axis D3 of the right lens of the outward-facing three-lens camera relative to the horizontal. However, even with identical optical-axis angles, a displacement still exists between the left lens image and the lens B image, and between the right lens image and the lens A image; therefore, the multiple lens images output by the inward-facing multi-lens camera must be properly stitched and blended to form a high-quality wide-angle image.

According to the present invention, each lens of the inward-facing or outward-facing multi-lens camera simultaneously captures a view covering an x1-degree horizontal field of view (HFOV) and a y1-degree vertical field of view (VFOV) to generate a lens image; the multiple lens images from the camera then form a projection image with an x2-degree HFOV and a y2-degree VFOV, where 0<x1<x2<=360 and 0<y1<y2<180. For example, each lens of the inward-facing three-lens camera in FIG. 4A may simultaneously capture a view covering a 70-degree HFOV and a 70-degree VFOV to generate a lens image, after which the lens B image, lens C image and lens A image form a wide-angle image with a 160-degree HFOV and a 60-degree VFOV, as shown in FIG. 4C; each lens of the inward-facing two-lens camera in FIG. 3B may simultaneously capture a view covering a 100-degree HFOV and a 70-degree VFOV to generate a lens image, after which the lens B image and lens A image form a wide-angle image with a 160-degree HFOV and a 60-degree VFOV, as shown in FIG. 4D. A necessary condition for mounting the multi-lens camera is that the fields of view of any two adjacent lenses overlap sufficiently to facilitate image stitching. Referring to FIGS. 4C-4D, the pixels in each overlap region 41 come from two overlapping lens/texture images, the pixels in each non-overlap region 43 come from a single lens/texture image, and the pixels in region 42 are discarded. Therefore, the image processing device 520 performs blending and stitching operations on the overlap regions 41 to form a wide-angle image (described later).

FIG. 5A is a block diagram of a projection image processing system according to an embodiment of the present invention. Referring to FIG. 5A, the projection image processing system 500 comprises an image capture module 51, a compensation device 52 and a correspondence generator 53. The compensation device 52 receives an original vertex list from the correspondence generator 53 and multiple lens images from the image capture module 51 to generate a projection image, such as a wide-angle image or a panoramic image.

Many projection methods are applicable to the projection image processing system 500 of the present invention. The term "projection" refers to flattening the surface of a sphere onto a two-dimensional plane, such as a projection plane. The projections include, but are not limited to, equirectangular projection, cylindrical projection and modified cylindrical projection. Modified cylindrical projections include, but are not limited to, the Miller projection, the Mercator projection, the Lambert cylindrical equal-area projection and the Pannini projection. Accordingly, the projection image includes, but is not limited to, an equirectangular projection image, a cylindrical projection image and a modified cylindrical projection image. FIGS. 1A-1B, 6A-6C, 8A-8D and 10B-10C relate to equirectangular projection. The implementations of cylindrical projection and modified cylindrical projection are well known to those skilled in the art and are not described here. Note that whichever projection is adopted by the projection image processing system 500, the correspondence generator 53 correspondingly generates an original vertex list (e.g., Table 1) that defines the vertex mapping relationships between the lens images and the projection image.

The image capture module 51 is a multi-lens camera, such as an outward-facing multi-lens camera, an inward-facing two-lens camera or an inward-facing three-lens camera, which simultaneously captures a view covering an X-degree HFOV and a Y-degree VFOV to generate multiple lens images, where X<=360 and Y<180. For clarity and convenience of description, the following examples and embodiments are described using equirectangular projection only and assume that the image capture module 51 is an inward-facing three-lens camera and that the projection image is an equirectangular wide-angle image. Note that the operation of the projection image processing system 500 and the related descriptions and methods of FIGS. 7B-7C, 8A-8D and 9A-9C are equally applicable to the outward-facing multi-lens camera, the inward-facing two-lens camera, cylindrical projection and modified cylindrical projection.

Related terms used throughout the specification and the claims that follow are defined as follows unless otherwise specified herein. The term "texture coordinates" refers to coordinates in a texture space (such as a texture image or a lens image); the term "rasterization" refers to the computational process of mapping scene geometry (or a projection image) to the texture coordinates of each lens image.

The processing pipeline of the projection image processing system 500 is divided into an offline phase and an online phase. In the offline phase, the three inward-facing lenses of the image capture module 51 are calibrated separately, and the correspondence generator 53 applies a suitable image registration technique to generate an original vertex list; each vertex in the original vertex list provides the mapping relationship between the equirectangular projection image and the lens images (or between the equirectangular coordinates and the texture coordinates). For example, the surface of the sphere 62 of radius 2 meters (r=2) in FIG. 6A is marked with many circles serving as longitudes and latitudes, and their intersections are treated as calibration points. The three lenses of the image capture module 51 mounted on the framework 11D capture these calibration points, and the positions of the calibration points in the lens images are known. Then, because the view angles of the calibration points and the texture coordinates are linked, the mapping relationship between the equirectangular panoramic image and the lens images can be established. In this specification, a calibration point having this mapping relationship is defined as a "vertex". In short, the correspondence generator 53 calibrates each vertex to define the vertex mapping relationships between the equirectangular projection image and the lens images, thereby obtaining the original vertex list. The correspondence generator 53 completes all necessary computations in the offline phase.
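As an illustration of the calibration-point layout described above, the following sketch generates the intersections of longitude and latitude circles on a sphere of radius 2 meters. This is a minimal illustration, not the patent's implementation; the circle counts `n_lon` and `n_lat` are assumed values chosen for the example, since the patent does not specify them.

```python
import math

def calibration_points(radius=2.0, n_lon=12, n_lat=6):
    """Generate 3D calibration points at the intersections of longitude
    and latitude circles drawn on a sphere of the given radius (r=2 m,
    as in FIG. 6A). Each intersection serves as one calibration point,
    returned as (longitude, latitude, (x, y, z))."""
    points = []
    for i in range(n_lat):
        lat = -90.0 + 180.0 * (i + 1) / (n_lat + 1)   # skip the poles
        for j in range(n_lon):
            lon = -180.0 + 360.0 * j / n_lon
            phi, theta = math.radians(lat), math.radians(lon)
            x = radius * math.cos(phi) * math.sin(theta)
            y = radius * math.sin(phi)
            z = radius * math.cos(phi) * math.cos(theta)
            points.append((lon, lat, (x, y, z)))
    return points
```

Each calibration point's view angle (lon, lat) would then be associated with the texture coordinates at which it appears in each lens image.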

FIG. 6B shows a triangular mesh used to model the surface of a sphere. Referring to FIG. 6B, a triangular mesh is used to model the surface of the sphere 62. FIG. 6C shows a polygon mesh used to compose/model the equirectangular projection image. The polygon mesh of FIG. 6C is generated by applying an equirectangular projection to the triangular mesh of FIG. 6B, and is a collection of quadrilaterals and/or triangles defined by the vertices described above.
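The equirectangular projection that turns the sphere mesh of FIG. 6B into the polygon mesh of FIG. 6C maps each vertex's longitude and latitude linearly to plane coordinates. The following is a minimal sketch assuming angles in degrees and a pixel-addressed projection plane; the exact coordinate convention is an assumption for illustration, not taken from the patent.

```python
def equirectangular_project(lon, lat, width, height, hfov=360.0, vfov=180.0):
    """Map a vertex's view angles (longitude, latitude, in degrees) to
    equirectangular coordinates (x, y). Longitude maps linearly to x and
    latitude maps linearly to y, which is why grid vertices end up evenly
    spaced along both axes, as in FIG. 8A."""
    x = (lon / hfov + 0.5) * (width - 1)
    y = (0.5 - lat / vfov) * (height - 1)
    return x, y
```

For a wide-angle (rather than full panoramic) image, `hfov` and `vfov` would be set to the covered view, e.g. 160 and 50 degrees.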

In the offline phase, according to the geometry of the equirectangular projection image and the lens images, the correspondence generator 53 computes the equirectangular coordinates and texture coordinates for each vertex of the polygon mesh (FIG. 6C) to generate the original vertex list, and then transmits the original vertex list to the vertex processing device 510. The original vertex list is a list of vertices that form the quadrilaterals and/or triangles of the polygon mesh (FIG. 6C), and each vertex is defined by a corresponding data structure. The data structure defines the vertex mapping relationship between a destination space and a texture space (or between the equirectangular coordinates and the texture coordinates). Table 1 shows an example of the data structure of each vertex in the original vertex list.

Table 1

  Attribute        Description
  (x, y)           Equirectangular coordinates
  N                Number of covering/overlapping lens images
  ID1              ID of the first lens image
  (u1, v1)         Texture coordinates in the first lens image
  (idx10, idx11)   Stitching-coefficient indices for the first lens image
  Alpha1           Blending weight of the stitching coefficients in the first lens image
  w1               Stitching blending weight of the first lens image
  ...              ...
  IDN              ID of the N-th lens image
  (uN, vN)         Texture coordinates in the N-th lens image
  (idxN0, idxN1)   Stitching-coefficient indices in the N-th lens image
  AlphaN           Blending weight of the stitching coefficients in the N-th lens image
  wN               Stitching blending weight of the N-th lens image
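The per-vertex record of Table 1 can be modeled as a simple data structure; the sketch below uses Python dataclasses with one `LensEntry` per covering lens image (so N corresponds to `len(entries)`). The field names are illustrative, not the patent's.

```python
from dataclasses import dataclass, field

@dataclass
class LensEntry:
    lens_id: int    # ID of the lens image
    u: float        # texture coordinates in this lens image
    v: float
    idx0: int       # stitching-coefficient indices
    idx1: int
    alpha: float    # blending weight of the stitching coefficients
    w: float        # stitching blending weight

@dataclass
class Vertex:
    x: float        # equirectangular coordinates
    y: float
    # One LensEntry per covering/overlapping lens image (N = len(entries)).
    entries: list = field(default_factory=list)
```

A vertex in a non-overlap region would carry a single entry, while a vertex in an overlap region 41 would carry two.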

Ideally, the three lenses of the image capture module 51 would all be located at the camera system center 73 of the framework 11C (not shown), so a single ideal imaging point 70 of a distant object 75 would lie on the image plane 62 of radius 2 meters (r=2). Taking lens B and lens C as an example: because the ideal imaging point 70 of the lens B image coincides with the ideal imaging point 70 of the lens C image, the equirectangular wide-angle image would show a perfect stitching/blending result after the image stitching/blending operations are completed. In practice, however, the lens centers 76 of the lens B image and the lens C image are offset from the system center 73 by an offset ofs, as shown on the left side of FIG. 7A; as a result, the equirectangular wide-angle image clearly exhibits mismatch defects after the image stitching/blending operations are completed.

Due to the optical characteristics of a lens, such as lens shading and luma shading, the image center 74 of a lens image of size Wi×Hi is not necessarily located at the middle (Wi/2, Hi/2) of the lens image, where Wi and Hi denote the width and height of the lens image, respectively. In the offline phase, the correspondence generator 53 performs the following five steps to determine the texture coordinates of the actual image center 74 of a lens image. (i) Determine a luminance threshold TH to delimit the boundary of the lens image. (ii) Scan each row of pixels from left to right to determine the left boundary point of each row, as shown in FIG. 7B: during scanning, a pixel whose luminance value is less than TH is regarded as lying outside the lens image; otherwise it lies inside the lens image. Then store the left boundary point of each row. Similarly, scan each row of pixels from right to left to determine and store the right boundary point of each row; scan each column of pixels from top to bottom to determine and store the top boundary point of each column; and scan each column of pixels from bottom to top to determine and store the bottom boundary point of each column, as shown in FIG. 7C. (iii) Compute the row center Uc(i) of the i-th row from the left and right boundary points of the i-th row, and compute the column center Vc(j) of the j-th column from the top and bottom boundary points of the j-th column, where 0<=i<=(Hi-1) and 0<=j<=(Wi-1). (iv) Average the row centers of all rows to obtain an average u value, and average the column centers of all columns to obtain an average v value. (v) Set the texture coordinates (u_center, v_center) of the actual image center 74 of the lens image equal to the average u value and the average v value, respectively. If any image center 74 cannot be computed in this way, the texture coordinates (u_center, v_center) of the image center 74 of that lens image are set equal to (Wi/2, Hi/2).
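Steps (i)-(v) above can be sketched as follows, assuming an 8-bit grayscale lens image given as a nested list and a pre-chosen luminance threshold. This is a simplified illustration, not the patent's implementation.

```python
def image_center(img, th):
    """Estimate the texture coordinates (u_center, v_center) of the actual
    image center: scan each row from both ends for the first pixel whose
    luminance is >= th and take the midpoint as the row center; do the same
    per column; then average all row centers and all column centers. Fall
    back to the geometric middle if no boundary can be found."""
    hi, wi = len(img), len(img[0])
    row_centers, col_centers = [], []
    for i in range(hi):                      # left/right boundary per row
        cols = [j for j in range(wi) if img[i][j] >= th]
        if cols:
            row_centers.append((cols[0] + cols[-1]) / 2.0)
    for j in range(wi):                      # top/bottom boundary per column
        rows = [i for i in range(hi) if img[i][j] >= th]
        if rows:
            col_centers.append((rows[0] + rows[-1]) / 2.0)
    if not row_centers or not col_centers:   # fallback, per step (v)
        return wi / 2.0, hi / 2.0
    u_center = sum(row_centers) / len(row_centers)
    v_center = sum(col_centers) / len(col_centers)
    return u_center, v_center
```

For a well-exposed circular fisheye image, the bright region's bounding scans converge on the optical center even when it is displaced from (Wi/2, Hi/2).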

FIG. 8A illustrates a grid used to model an equirectangular wide-angle image. Referring to FIG. 8A, the grid can be regarded as a rectangular structure of five rows and six columns of quadrilaterals within the equirectangular wide-angle image. The grid example of FIG. 8A corresponds to the polygon mesh of FIG. 6C; accordingly, the vertices/intersections 81 of the grid of FIG. 8A correspond to the vertices of the polygon mesh of FIG. 6C. In the equirectangular coordinate system, the horizontal spacing between any two adjacent vertices along the horizontal axis is the same, and the vertical spacing between any two adjacent vertices along the vertical axis is the same. The horizontal and vertical axes in FIG. 8A are labeled to represent longitude and latitude, respectively. In the example of FIG. 8A, it is assumed that the three lenses of the image capture module 51 simultaneously capture a view covering a 160-degree HFOV and a 50-degree VFOV to generate three lens images, which form the equirectangular wide-angle image.

According to the present invention, each overlap region in a projection image contains one column of P1 control points, where P1>=3. The size of the overlap regions varies with the FOVs of the three lenses of the image capture module 51, the lens sensor resolution and the lens mounting angles. Generally, the width of each overlap region in the projection image is greater than or equal to the width of one column of quadrilaterals. If an overlap region contains multiple columns of quadrilaterals, one of those columns (hereinafter the "default column") is designated to accommodate the P1 control points. In the example of FIG. 8A, the equirectangular wide-angle image contains two overlap regions A(1) and A(2), each containing only one column of quadrilaterals; that column directly serves as the default column accommodating the P1 control points.

In the offline phase, the correspondence generator 53 performs the following four steps (a)-(d) to determine the texture coordinates of each control point in the equirectangular wide-angle image. (a) Define the leftmost column of quadrilaterals, the rightmost column of quadrilaterals, and the default column in each overlap region as "control columns". (b) In each control column, place the topmost and bottommost control points at the centers of the topmost and bottommost quadrilaterals, respectively. In this step (b), the x coordinate of every control point in a control column is determined from the x coordinates of the left and right boundaries of the column's quadrilaterals in the equirectangular domain (e.g., x1 and x2), for example x = (x1 + x2)/2. (c) In each control column, divide the distance between the topmost and bottommost control points by (P1 - 2) to obtain the y coordinates of the remaining (P1 - 2) control points in the equirectangular domain; in other words, in each control column the (P1 - 2) control points are evenly distributed between the topmost and bottommost control points. FIG. 8B illustrates two default columns with different numbers of quadrilaterals and P1 = 3. As shown in FIG. 8B, the two control columns contain five and six quadrilaterals, respectively; thus, the y coordinates of the three control points A-C differ between the two default columns. The measurement regions (used to measure region errors in the measurement mode) are defined as follows: for each quadrilateral i in the default column of each overlap region, find the closest control point j and classify/define quadrilateral i as belonging to measurement region j (corresponding to control point j), where 1 <= i <= n1, 1 <= j <= P1, and n1 is the number of quadrilaterals in each default column. (d) According to the equirectangular coordinates of each control point and the original vertex list (which defines the vertex mappings between equirectangular coordinates and texture coordinates), interpolate to obtain the texture coordinates of each control point.
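The control-point placement in steps (b)-(c) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the text says the inner (P1 - 2) points are evenly distributed between the topmost and bottommost points, so the sketch assumes one plausible reading (all P1 points evenly spaced, endpoints at the centers of the top and bottom quadrilaterals). All names and inputs are hypothetical.

```python
# Hypothetical sketch of steps (b)-(c): place P1 control points in one
# control column. x1, x2 are the column's left/right x boundaries in the
# equirectangular domain; y_top, y_bottom are the y-centers of the topmost
# and bottommost quadrilaterals (assumed inputs).
def place_control_points(x1, x2, y_top, y_bottom, p1):
    x = (x1 + x2) / 2.0                    # step (b): shared x coordinate
    step = (y_bottom - y_top) / (p1 - 1)   # even spacing between endpoints
    return [(x, y_top + i * step) for i in range(p1)]
```

For P1 = 5 and a column spanning y = 0 to y = 8, this yields control points at y = 0, 2, 4, 6, 8, all on the column's center line.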

FIG. 8C, related to FIG. 8A, illustrates an equirectangular wide-angle image containing twenty control points R(1)-R(20) and ten measurement regions M(1)-M(10), with five control points per control column (P1 = 5). In FIG. 8C, the twenty control points R(1)-R(20) respectively have twenty stitching coefficients C(1)-C(20), and these stitching coefficients respectively represent different degrees of stitching for the twenty control points. Ten control points R(1)-R(10) are located in the two overlap regions A(1)-A(2), while the other ten control points R(11)-R(20) are located in non-overlap regions. Since R(11)-R(20) lie in non-overlap regions, their ten corresponding stitching coefficients C(11)-C(20) are set equal to a constant (e.g., 1) and are referred to as "constant stitching coefficients" in this specification. The following examples and embodiments are described using the twenty control points R(1)-R(20) and the ten measurement regions M(1)-M(10) of the equirectangular wide-angle image in FIG. 8C.

Referring back to FIG. 5A, the compensation device 52 includes a vertex processing device 510, a stitching decision unit 530, and an image processing device 520. In a measurement mode, the vertex processing device 510 receives the original vertex list together with the test stitching coefficients Ct(1)-Ct(10) and the constant stitching coefficients C(11)-C(20) from the stitching decision unit 530, corrects, vertex by vertex, the texture coordinates of each vertex of the original vertex list in all lens images, and generates a corrected vertex list (see Table 2). According to the corrected vertex list and the three lens images from the image capture module 51, the image processing device 520 measures the region errors E(1)-E(10) of all measurement regions M(1)-M(10) in the equirectangular wide-angle image and outputs them. The stitching decision unit 530 then sets the values of the test stitching coefficients Ct(1)-Ct(10) according to the offset ofs, receives the region errors E(1)-E(10) to build a 2D error table, and generates the optimal stitching coefficients C(1)-C(10) of the ten control points R(1)-R(10) from the 2D error table. On the other hand, in a rendering mode, the stitching decision unit 530 is disabled, while the vertex processing device 510 and the image processing device 520 operate together to generate an equirectangular wide-angle image according to the three lens images output by the image capture module 51 and the ten optimal stitching coefficients C(1)-C(10) and ten constant stitching coefficients C(11)-C(20) output by the stitching decision unit 530.

In the measurement mode, the vertex processing device 510 corrects the texture coordinates of each vertex in each lens image according to the stitching-coefficient blending weight of that vertex in that lens image from the original vertex list and either the "two test stitching coefficients" or "one test stitching coefficient and one constant stitching coefficient" of the two nearest control points, so that the image processing device 520 can measure the region errors E(1)-E(10) (steps S905-S906 of FIG. 9A). In the rendering mode, the vertex processing device 510 corrects the texture coordinates of each vertex in each lens image according to the stitching-coefficient blending weight of that vertex in that lens image from the original vertex list and either the "two optimal stitching coefficients" or "one optimal stitching coefficient and one constant stitching coefficient" of the two nearest control points, so as to minimize the mismatch image defects described above. FIG. 8D illustrates the positional relationship between a target vertex P and the ten control points R(1)-R(10) in the lens-C image. In the example of FIG. 8D, a first angle θ1 is measured clockwise between a first vector V1 and a second vector V2; V1 starts at the image center 74 (with texture coordinates (u_center, v_center)) and ends at the position of control point R(4), while V2 starts at the image center 74 and ends at the target vertex P(u_P, v_P). A second angle θ2 is measured clockwise between a third vector V3 and the second vector V2; V3 starts at the image center 74 and ends at the position of control point R(5). In the offline phase, the correspondence generator 53 determines in advance which two control points (here, R(4) and R(5)) are nearest to the target vertex P, and writes their indices (4 and 5) into the "stitching coefficient indices" field of the lens-C image in the data structure of the target vertex P in the original vertex list; the correspondence generator 53 also pre-computes the blending weight of the stitching coefficients C(4) and C(5) (= θ2/(θ1 + θ2)) and writes this blending weight (θ2/(θ1 + θ2)) into the "blending weight of stitching coefficients (Alpha)" field of the lens-C image in the data structure of the target vertex P in the original vertex list. Note that the set of test stitching coefficients Ct(1)-Ct(10) (measurement mode) or the set of optimal stitching coefficients C(1)-C(10) (rendering mode) output from the stitching decision unit 530 is arranged as a 1D stitching-coefficient array or a 1D data stream. Furthermore, in the measurement mode, the stitching decision unit 530 sets the values of the test stitching coefficients Ct(1)-Ct(10) according to the offset ofs in FIG. 7A (step S902 of FIG. 9A, described later); at the end of the measurement mode, the stitching decision unit 530 determines the values of the optimal stitching coefficients C(1)-C(10) (step S972 of FIG. 9B, described later), and those optimal values are used in the rendering mode.
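The angle-based blending weight for a target vertex can be sketched as below. This is a hedged illustration, not the patent's circuit: it assumes the weight equals θ2/(θ1 + θ2) (so the control point with the smaller angular distance receives the larger weight), computes angles with `atan2`, and assumes the three vectors do not wrap around the ±π discontinuity. All function names are hypothetical.

```python
import math

def vec_angle(v):
    # quadrant-aware angle of a 2D vector, in radians
    return math.atan2(v[1], v[0])

def blend_weight(center, p, r_near, r_next):
    # theta1: angle between (center -> r_near) and (center -> p)
    # theta2: angle between (center -> r_next) and (center -> p)
    # NOTE: plain abs() assumes no wrap-around at +/-pi.
    a_p = vec_angle((p[0] - center[0], p[1] - center[1]))
    t1 = abs(a_p - vec_angle((r_near[0] - center[0], r_near[1] - center[1])))
    t2 = abs(vec_angle((r_next[0] - center[0], r_next[1] - center[1])) - a_p)
    return t2 / (t1 + t2)
```

A vertex exactly halfway (in angle) between the two control points gets weight 0.5; a vertex angularly closer to `r_near` gets a weight above 0.5.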

One feature of the invention is minimizing the mismatch image defects within a predetermined number of loops (max in FIG. 9A). The predetermined number of loops is related to an offset ofs, which is the distance by which a lens center 76 of the image capture module 51 deviates from its system center 73 (see FIG. 7A). In the measurement mode, according to the offset ofs of FIG. 7A, the stitching decision unit 530 sets the ten test stitching coefficients Ct(1)-Ct(10) over different value ranges to measure the region errors E(1)-E(10), with all ten test stitching coefficients set to the same value in each measurement (i.e., in each loop). For example, assuming ofs = 3 cm, the ten test stitching coefficients Ct(1)-Ct(10) are set over the value range 0.96-1.04; with an increment of 0.01 per loop there are nine measurements in total (i.e., max = 9 in FIG. 9A). Assuming ofs = 1 cm, the ten test stitching coefficients Ct(1)-Ct(10) are set over the value range 0.99-1.00; with an increment of 0.001 per loop there are ten measurements in total (i.e., max = 10 in FIG. 9A). Note that the offset ofs is detected/determined in the offline phase, so the values of the ten test stitching coefficients Ct(1)-Ct(10) can be determined in advance.

FIG. 9A is a flowchart of a method of determining the optimal stitching coefficients of the control points according to an embodiment of the invention (performed by the stitching decision unit 530 in the measurement mode). In the following, assuming ofs = 3 cm, the method of FIG. 9A for determining the optimal stitching coefficients C(1)-C(10) of the ten control points R(1)-R(10) and the coefficient decision step (step S912) of FIG. 9B are described.

Step S902: Set the loop count Q1 and the test stitching coefficients to new values. In one embodiment, Q1 is set to 1 in the first loop and incremented by 1 in each subsequent loop. If ofs = 3 cm, the test stitching coefficients Ct(1)-Ct(10) are all set to 0.96 in the first loop (i.e., Ct(1) = ... = Ct(10) = 0.96), and in subsequent loops are set in sequence to 0.97, ..., 1.04.
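The per-loop coefficient schedule of step S902 can be sketched as a simple generator of evenly stepped values. The range and increment below come from the text's ofs = 3 cm example (0.96-1.04, step 0.01, nine loops); the function name is hypothetical.

```python
# Sketch of the step-S902 schedule: enumerate the test stitching coefficient
# value used in each measurement loop. round() guards against float drift.
def coefficient_schedule(lo, hi, step):
    n = round((hi - lo) / step) + 1
    return [round(lo + i * step, 6) for i in range(n)]
```

For `coefficient_schedule(0.96, 1.04, 0.01)` this gives the nine values 0.96, 0.97, ..., 1.04, matching max = 9 in FIG. 9A.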

Step S904: Clear all region errors E(i) to 0, where i = 1, 2, ..., 10.

Step S905: Generate a corrected vertex list according to the values of the test stitching coefficients Ct(1)-Ct(10) and the original vertex list. FIG. 8D is again used as an example. After receiving the original vertex list from the correspondence generator 53, the vertex processing device 510 retrieves two test stitching coefficients (Ct(4) and Ct(5)) from the one-dimensional test-stitching-coefficient array according to the "stitching coefficient indices" field of the lens-C image (i.e., 4 and 5) in the data structure of the target vertex P, and then, according to the "blending weight of stitching coefficients (Alpha)" field of the lens-C image (i.e., θ2/(θ1 + θ2); see Table 1) in the data structure of the target vertex P, computes an interpolated stitching coefficient C' by the equation: C' = Ct(4)*(θ2/(θ1 + θ2)) + Ct(5)*(θ1/(θ1 + θ2)). The vertex processing device 510 then computes the corrected texture coordinates (uP', vP') of the target vertex P in the lens-C image by the equations: uP' = (uP - u_center)*C' + u_center; vP' = (vP - v_center)*C' + v_center. In this manner, the vertex processing device 510 sequentially corrects the texture coordinates in the lens-C image of each vertex from the original vertex list according to the ten test stitching coefficients Ct(1)-Ct(10), generating part of a corrected vertex list. Likewise, the vertex processing device 510 sequentially corrects the texture coordinates in the lens-A and lens-B images of each vertex from the original vertex list according to the ten test stitching coefficients Ct(1)-Ct(10) and the ten constant stitching coefficients C(11)-C(20), completing the corrected vertex list. Table 2 shows an example of the data structure of each vertex in the corrected vertex list.

Table 2
  Attribute      Description
  (x, y)         equirectangular coordinates
  N              number of covering/overlapping lens images
  ID_1           ID of the first lens image
  (u_1', v_1')   corrected texture coordinates in the first lens image
  w_1            stitching blending weight for the first lens image
  ...            ...
  ID_N           ID of the N-th lens image
  (u_N', v_N')   corrected texture coordinates in the N-th lens image
  w_N            stitching blending weight for the N-th lens image
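The coefficient interpolation and texture-coordinate correction of step S905 can be sketched as two small functions. This is a hedged illustration under the assumption that the stored Alpha weight multiplies the first of the two nearest coefficients; function names are hypothetical.

```python
# Sketch of step S905. alpha is the precomputed blending weight
# (theta2 / (theta1 + theta2)) stored in the vertex data structure.
def interp_coeff(c_a, c_b, alpha):
    # interpolated stitching coefficient C'
    return c_a * alpha + c_b * (1.0 - alpha)

def correct_texture_coord(u, v, u_center, v_center, c_prime):
    # warp the vertex toward/away from the image center by C'
    return ((u - u_center) * c_prime + u_center,
            (v - v_center) * c_prime + v_center)
```

With C' = 1 the coordinates are unchanged; C' < 1 pulls the vertex toward the image center, C' > 1 pushes it away.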

Step S906: The image processing device 520 measures the region errors E(1)-E(10) of the ten measurement regions M(1)-M(10) of the equirectangular wide-angle image according to the corrected vertex list and the three lens images from the image capture module 51 (detailed with FIG. 5B). For convenience, this step S906 is denoted by E(i) = f(Ct(i)), where i = 1, ..., 10 and f() denotes the measurement (by the image processing device 520) of the region errors E(i) according to the corrected vertex list and the three lens images.

Step S908: Store all region errors E(1)-E(10) in a two-dimensional (2D) error table. Table 3 shows an example of the 2D error table when ofs = 3 cm (test-stitching-coefficient value range 0.96-1.04). In the 2D error table of Table 3 there are ten region errors E(1)-E(10) and nine different values of the test stitching coefficient (0.96-1.04).

Table 3
  Loop                   1st    2nd    3rd    ...    7th    8th    9th
  Stitching coefficient  0.96   0.97   0.98   ...    1.02   1.03   1.04
  E(1)
  E(2)
  ...
  E(10)

Step S910: Determine whether the loop count Q1 has reached the max value of 9. If so, go to step S912; otherwise, return to step S902.

Step S912: Perform the coefficient decision operation according to the 2D error table.

Step S914: Output the optimal stitching coefficients C(i), where i = 1, 2, ..., 10. In the rendering mode, the optimal stitching coefficients C(i) are output to the vertex processing device 510 to generate a corresponding corrected vertex list, so that the image processing device 520 generates a corresponding wide-angle image (described later) according to that corrected vertex list and the three lens images from the image capture module 51.

FIG. 9B is a flowchart of the coefficient decision operation of step S912 performed by the stitching decision unit 530 according to an embodiment of the invention. All steps of the coefficient decision operation are described below with reference to FIG. 9B.

Step S961: Set Q2 to 0 for initialization.

Step S962: Extract a selected decision group from the 2D error table. Returning to FIG. 8C, each control region is usually adjacent to two other control regions; a selected control region and its two neighboring control regions form a selected decision group used to decide the optimal stitching coefficient of the selected control region. For example, a selected control point R(9) and its two neighbors R(8) and R(10) form a decision group, as shown in FIGS. 8A and 8C. However, if a selected control point (e.g., R(6)) is located at the top or bottom of overlap region A(2), the selected control point R(6) forms a decision group with only its single neighbor R(7) to decide its optimal stitching coefficient C(6). The description of the subsequent steps assumes that a control point R(7) is selected, and that R(7) and its two neighbors R(6) and R(8) form a selected decision group used to decide its optimal stitching coefficient C(7).

Step S964: Determine local minima among the region errors of each control point of the selected decision group. Table 4 shows an example of the region errors E(6)-E(8) of R(6)-R(8) versus the test stitching coefficient.

Table 4
  Index  Test stitching coefficient  E(6)     E(7)     E(8)
  1      0.96                        1010     2600(*)  820
  2      0.97                        1005     2650     750
  3      0.98                        1000     2800     700
  4      0.99                        900      3000     600(*)
  5      1.00                        800(*)   2700     650
  6      1.01                        850      2500     580
  7      1.02                        950      2400(*)  500(*)
  8      1.03                        960      2820     700
  9      1.04                        975      2900     800

As shown in Table 4, there is only one local minimum among the nine region errors of R(6), while there are two local minima among the nine region errors of each of R(7) and R(8); each local minimum in Table 4 is marked with an asterisk (*).
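The local-minimum search of step S964 can be sketched as follows. This is an illustrative reading, not the patent's circuit: it treats an interior entry as a local minimum when it is smaller than both neighbors, and an endpoint as a local minimum when it is smaller than its single neighbor (which reproduces the asterisks of Table 4, including E(7)'s minimum at index 1).

```python
# Sketch of step S964: return 0-based indices of local minima in one
# control point's row of the 2D error table. Endpoints compare against
# a single neighbor (missing neighbor treated as +inf).
def local_minima(errors):
    idxs = []
    n = len(errors)
    for i, e in enumerate(errors):
        left = errors[i - 1] if i > 0 else float('inf')
        right = errors[i + 1] if i < n - 1 else float('inf')
        if e < left and e < right:
            idxs.append(i)
    return idxs
```

Running this on the three columns of Table 4 yields one minimum for E(6) and two each for E(7) and E(8), exactly where the table places its asterisks.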

Step S966: Select candidates according to the local minima. Table 5 shows the candidates selected from the local minima of Table 4, where ID denotes the index, WC denotes the stitching coefficient, and RE denotes the region error. The number of candidates equals the number of local minima in Table 4.

Table 5
                          R(6)             R(7)             R(8)
  Number of local minima  1                2                2
                          ID  WC    RE     ID  WC    RE     ID  WC    RE
  candidate[0]            5   1.00  800    1   0.96  2600   4   0.99  600
  candidate[1]                             7   1.02  2400   7   1.02  500

Step S968: Build a link metric according to the candidates of Table 5, as shown in FIG. 9C.

Step S970: Determine the minimum sum of link metric values over all paths of the link metric. For the two link metric values 0.03 and 0.06 between candidate[0] of R(7) and the two candidates of R(8), the minimum of the two is min(0.03, 0.06) = 0.03. For the two link metric values 0.03 and 0.00 between candidate[1] of R(7) and the two candidates of R(8), the minimum of the two is min(0.03, 0.00) = 0.00. The sums of the link metric values along path 0-0-0 and path 0-1-1 are then computed as 0.04 + 0.03 = 0.07 and 0.02 + 0.00 = 0.02, respectively. Since 0.02 < 0.07, path 0-1-1 is determined to have the minimum sum of link metric values among all paths of the link metric, as shown by the solid-line path in FIG. 9C.

Step S972: Determine the optimal stitching coefficient of the selected control region. In the example of step S970, since path 0-1-1 has the minimum sum of link metric values among all paths, 1.02 is determined to be the optimal stitching coefficient of control region R(7). However, if two or more paths end with the same sum of link metric values, the stitching coefficient of the node with the smallest region error is chosen as the optimal stitching coefficient of the selected control region. Here, the loop count Q2 is incremented by 1.
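Steps S968-S972 can be sketched with a brute-force path search, which for a three-node decision group is equivalent to the link-metric minimization: each link metric is assumed to be the absolute difference between the candidate stitching coefficients of adjacent control points, and the middle node's candidate on the cheapest path wins. The function name is hypothetical; the test data reproduce Table 5.

```python
# Sketch of steps S968-S972 for one decision group (left, middle, right
# control points). left/mid/right are lists of candidate stitching
# coefficients (WC values). Returns (min path cost, winning middle WC).
def best_middle_coefficient(left, mid, right):
    best = None
    for lc in left:
        for mc in mid:
            for rc in right:
                cost = abs(lc - mc) + abs(mc - rc)   # sum of two link metrics
                if best is None or cost < best[0]:
                    best = (cost, mc)
    return best
```

For the Table 5 candidates this picks cost 0.02 on path 0-1-1 and returns 1.02 for R(7), matching the worked example. A tie-break on smallest region error, as step S972 specifies, would need the RE values as an extra input.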

Step S974: Determine whether the loop count Q2 has reached the upper limit TH1 (= 10). If so, end this flow; otherwise, return to step S962 to process the next control region.

After the vertex processing device 510 has corrected all texture coordinates of all vertices from the original vertex list according to the test/optimal stitching coefficients C(1)-C(20), the mismatch image defects caused by the lens-center offset of the image capture module 51 (i.e., a lens center 76 having an offset ofs relative to the system center 73) are greatly reduced (i.e., the real imaging position 78 is pushed toward the ideal imaging position 70), as shown in FIG. 7A. Note that since the sphere 62 is imaginary, the object 75 may be located outside, inside, or on the surface of the sphere 62.

FIG. 5B is a schematic diagram of the image processing device according to an embodiment of the invention. Referring to FIG. 5B, the image processing device 520 includes a rasterization engine 521, a texture mapping circuit 522, a blending unit 523 (controlled by a control signal CS2), a destination buffer 524, and a measurement unit 525 (controlled by a control signal CS1). Note that in the measurement mode, if the equirectangular coordinates of a pixel/point fall within any measurement region, the blending unit 523 is disabled via control signal CS2 and the measurement unit 525 is enabled via control signal CS1; in the rendering mode, via the two control signals CS1 and CS2, the blending unit 523 is enabled and the measurement unit 525 is disabled. The texture mapping circuit 522 includes two texture mapping engines 52a-52b. The rasterization engine 521 can perform quadrilateral rasterization over the pixels inside a quadrilateral (see FIG. 6C) formed by each set of four vertices from a corrected vertex list, or triangle rasterization over the pixels inside a triangle (see FIG. 6C) formed by each set of three vertices from the corrected vertex list.

Returning to FIG. 8C, for the quadrilateral case, assume a set of four vertices (E, F, G, H) from the corrected vertex list (forming one quadrilateral of the polygon mesh) lies within one of the five measurement regions M(1)-M(5) of overlap region A(1) and is overlapped by the lens-B and lens-C images (N = 2). The four vertices (E, F, G, H) respectively have the following data structures: vertex E: {(xE, yE), 2, ID_lens-B, (u1E, v1E), w1E, ID_lens-C, (u2E, v2E), w2E}; vertex F: {(xF, yF), 2, ID_lens-B, (u1F, v1F), w1F, ID_lens-C, (u2F, v2F), w2F}; vertex G: {(xG, yG), 2, ID_lens-B, (u1G, v1G), w1G, ID_lens-C, (u2G, v2G), w2G}; vertex H: {(xH, yH), 2, ID_lens-B, (u1H, v1H), w1H, ID_lens-C, (u2H, v2H), w2H}. The rasterization engine 521 performs quadrilateral rasterization directly over the points/pixels inside the quadrilateral EFGH. Specifically, for a point Q (with equirectangular coordinates (x, y), located inside the quadrilateral EFGH of the polygon mesh), the rasterization engine 521 computes the texture coordinates for each lens image by the following steps: (1) using bi-linear interpolation, compute four spatial weights (e, f, g, h) from the equirectangular coordinates (xE, yE, xF, yF, xG, yG, xH, yH, x, y); (2) compute the face blending weight of a sample point QB (corresponding to point Q) in the lens-B image: fw1 = e*w1E + f*w1F + g*w1G + h*w1H, and the face blending weight of a sample point QC (corresponding to point Q) in the lens-C image: fw2 = e*w2E + f*w2F + g*w2G + h*w2H; (3) compute the texture coordinates of the sample point QB (corresponding to point Q) in the lens-B image: (u1, v1) = (e*u1E + f*u1F + g*u1G + h*u1H, e*v1E + f*v1F + g*v1G + h*v1H), and the texture coordinates of the sample point QC (corresponding to point Q) in the lens-C image: (u2, v2) = (e*u2E + f*u2F + g*u2G + h*u2H, e*v2E + f*v2F + g*v2G + h*v2H). Finally, the rasterization engine 521 sends the two texture coordinates (u1, v1) and (u2, v2) in parallel to the two texture mapping engines 52a-52b. Here, e + f + g + h = 1 and fw1 + fw2 = 1. According to the two texture coordinates (u1, v1) and (u2, v2), the two texture mapping engines 52a-52b texture-map the texture data of the lens-B and lens-C images using any suitable method (e.g., nearest-neighbor interpolation, bilinear interpolation, or trilinear interpolation) to generate two sample values s1 and s2. Each sample value may be a luma value, a chroma value, an edge value, a pixel color value (RGB), or a motion vector.
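Steps (1)-(3) above can be sketched as follows. This is a simplified illustration: the spatial weights are computed for an axis-aligned cell, whereas a general quadrilateral would require an inverse bilinear solve; vertex order E, F, G, H is assumed to be top-left, top-right, bottom-right, bottom-left. All names are hypothetical.

```python
# Sketch of step (1): bilinear spatial weights (e, f, g, h) of point (x, y)
# inside an axis-aligned cell with corners (x0, y0)..(x1, y1).
def bilinear_weights(x0, y0, x1, y1, x, y):
    s = (x - x0) / (x1 - x0)
    t = (y - y0) / (y1 - y0)
    # order: E (top-left), F (top-right), G (bottom-right), H (bottom-left)
    return ((1 - s) * (1 - t), s * (1 - t), s * t, (1 - s) * t)

# Sketch of steps (2)-(3): blend per-vertex texture coordinates (or blending
# weights) with the spatial weights.
def blend_texcoords(weights, uvs):
    e, f, g, h = weights
    (uE, vE), (uF, vF), (uG, vG), (uH, vH) = uvs
    return (e * uE + f * uF + g * uG + h * uH,
            e * vE + f * vF + g * vG + h * vH)
```

At the cell center all four weights are 0.25 and the blended coordinate is the average of the four corner coordinates, consistent with e + f + g + h = 1.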

For the triangle case, similarly to the quadrilateral operations above, the rasterization engine 521 and the two texture mapping engines 52a-52b perform triangle rasterization and texture mapping over the pixels inside a triangle (see FIG. 6C) formed by each set of three vertices from the corrected vertex list to generate two corresponding sample values s1 and s2, except that step (1) is modified as follows: instead of the bi-linear interpolation above, the rasterization engine 521 uses a barycentric weighting method to compute the three spatial weights (e, f, g) of the three vertices (E, F, G) (forming one triangle of the polygon mesh of FIG. 6C) from the equirectangular coordinates (xE, yE, xF, yF, xG, yG, x, y).
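The barycentric-weighting variant can be sketched with the standard barycentric formula for a point inside triangle EFG; the function name is hypothetical and this is an illustration rather than the patent's hardware.

```python
# Sketch of the triangle case of step (1): barycentric spatial weights
# (e, f, g) of point (x, y) with respect to triangle E, F, G.
def barycentric_weights(xe, ye, xf, yf, xg, yg, x, y):
    d = (yf - yg) * (xe - xg) + (xg - xf) * (ye - yg)   # signed double area
    e = ((yf - yg) * (x - xg) + (xg - xf) * (y - yg)) / d
    f = ((yg - ye) * (x - xg) + (xe - xg) * (y - yg)) / d
    return (e, f, 1.0 - e - f)
```

A vertex gets weight 1 at its own position, and the centroid gets equal weights of 1/3, so e + f + g = 1 as in the quadrilateral case.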

Next, the rasterization engine 521 determines, from the equirectangular coordinates (x, y) of the point Q, whether the point Q falls into one of the five measurement regions M(1)~M(5). If so, the control signal CS1 is asserted to cause the measurement unit 525 to start measuring the region error of that measurement region. The measurement unit 525 may use any known algorithm, such as the sum of absolute differences (SAD), the sum of squared differences (SSD), or the median absolute deviation (MAD), to estimate/measure the region errors of the control regions. For example, if the point Q is determined to fall into the measurement region M(1), the measurement unit 525 uses the equations E = |s1-s2| and E(1) += E to accumulate the absolute differences between the sample values of each point of the measurement region M(1) in the lens-B image and the corresponding point of the measurement region M(1) in the lens-C image, obtaining a SAD value as the region error E(1) of the measurement region M(1). In this way, the measurement unit 525 measures the region errors E(1)~E(5) of the five measurement regions M(1)~M(5). In the same manner, the measurement unit 525 measures the region errors E(6)~E(10) of the five measurement regions M(6)~M(10) according to the modified vertex list, the lens-C image, and the lens-A image.
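A minimal sketch of this accumulation follows (an illustration under assumed data shapes, not the measurement unit's actual implementation): for every sample pair (s1, s2) rasterized into measurement region M(k), the SAD accumulator adds |s1-s2|; an SSD variant would add (s1-s2)**2 instead.

```python
# Per-region SAD accumulation, mirroring E = |s1 - s2|; E(k) += E.

def accumulate_region_errors(samples, num_regions):
    """samples: iterable of (region_index, s1, s2), with 1-based region index.

    Returns a dict mapping region index k to its accumulated SAD value E(k).
    """
    E = {k: 0.0 for k in range(1, num_regions + 1)}
    for k, s1, s2 in samples:
        E[k] += abs(s1 - s2)
    return E
```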

In the display mode, the rasterization engine 521 and the texture mapping circuit 522 operate as they do in the measurement mode. The example above is used again for illustration: a point Q has equirectangular coordinates (x, y) and lies inside the quadrilateral EFGH of the polygonal mesh, and the quadrilateral EFGH is overlapped by the lens-B image and the lens-C image (N=2). After the two texture mapping engines 52a~52b texture-map the texture data of the lens-B image and the lens-C image to generate two sample values s1 and s2, the blending unit 523 blends the two sample values according to the equation Vb = fw1*s1 + fw2*s2 to generate the blended value Vb of the point Q. Finally, the blending unit 523 stores the blended value Vb of the point Q in the destination buffer 524. In this way, the blending unit 523 stores all blended values Vb in the destination buffer 524 in sequence until all points inside the quadrilateral EFGH have been processed.
On the other hand, assume that a point Q' has equirectangular coordinates (x', y') and lies inside the quadrilateral E'F'G'H' of the polygonal mesh of FIG. 6C, and that the quadrilateral E'F'G'H' lies in the non-overlapping region of the lens-B image (N=1). The rasterization engine 521 then transmits only one set of texture coordinates (e.g., (u1, v1)) to one texture mapping engine 52a, and one working surface blending weight fw1 (=1) to the blending unit 523. Correspondingly, after the texture mapping engine 52a texture-maps the texture data of the lens-B image to generate a sample value s1, the blending unit 523 generates the blended value Vb (= fw1*s1) of the point Q'. In this way, the blending unit 523 stores all blended values Vb in the destination buffer 524 in sequence until all points inside the quadrilateral E'F'G'H' have been processed. Once all quadrilaterals and triangles have been processed, a projection image is stored in the destination buffer 524.
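The blending unit's per-point behavior in display mode can be sketched as follows; this is a hedged illustration under assumed call and buffer shapes, not the hardware design. In an overlap region (N=2) the blended value is Vb = fw1*s1 + fw2*s2 with fw1+fw2 = 1, and in a non-overlapping region (N=1) it degenerates to Vb = s1.

```python
# Per-point blending and sequential storage into a destination buffer.

def blend_point(samples, weights):
    """samples/weights: parallel lists of length N (N = 1 or 2 above)."""
    assert len(samples) == len(weights)
    return sum(fw * s for fw, s in zip(samples, weights))

def render_points(points, dest_buffer):
    """points: iterable of ((x, y), samples, weights) in raster order.

    Stores each blended value Vb into dest_buffer keyed by (x, y)."""
    for (x, y), samples, weights in points:
        dest_buffer[(x, y)] = blend_point(samples, weights)
    return dest_buffer
```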

When the image capture module 51 is an outward-facing M-lens camera and the projection image is a panoramic image, the correspondence generator 53 also performs, in the offline phase, the aforementioned four steps (a)~(d) to determine the texture coordinates of the control points, except that step (a) is modified as follows: the default rows in each overlap region of the equirectangular panoramic image are all defined as "control rows". Taking M=4 as an example, FIG. 10A illustrates an outward-facing four-lens camera mounted on the structure 11E. Assume that the outward-facing four-lens camera of FIG. 10A can simultaneously capture a view covering a 360-degree HFOV and a 50-degree VFOV to generate four lens images (left, front, right, and back). FIG. 10B illustrates a grid for modeling an equirectangular panoramic image, the equirectangular panoramic image containing four overlap regions A(1)~A(4) and the four lens images (generated by the outward-facing four-lens camera of FIG. 10A). Assume that the grid of FIG. 10B corresponds to the polygonal mesh of FIG. 6C, so the vertices/intersections on the grid of FIG. 10B correspond to the vertices of the polygonal mesh of FIG. 6C. Based on FIG. 10B, FIG. 10C illustrates an equirectangular panoramic image containing twenty control points R(1)~R(20) and twenty measurement regions M(1)~M(20), with each control row containing five control points (P1=5). In the example of FIGS. 10B-10C, since the twenty control points R(1)~R(20) are all located in the overlap regions A(1)~A(4), the method of FIGS. 9A~9B is used to determine the optimal stitching coefficients C(1)~C(20) of the twenty control points R(1)~R(20), with TH1=20.

The compensation device 52 and the correspondence generator 53 of the invention may be implemented in software, in hardware, or in a combination of software (or firmware) and hardware. An example of a hardware-only implementation is a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC). In a preferred embodiment, the vertex processing device 510 and the image processing device 520 are implemented with a graphics processing unit (GPU) and a first program memory, while the stitching decision unit 530 and the correspondence generator 53 are implemented with a first general-purpose processor and a second program memory. The first program memory stores a first processor-executable program, and the second program memory stores a second processor-executable program. When the GPU executes the first processor-executable program, the GPU is configured to operate as the vertex processing device 510 and the image processing device 520. When the first general-purpose processor executes the second processor-executable program, the first general-purpose processor is configured to operate as the stitching decision unit 530 and the correspondence generator 53.

In another embodiment, the compensation device 52 and the correspondence generator 53 are implemented with a second general-purpose processor and a third program memory. The third program memory stores a third processor-executable program. When the second general-purpose processor executes the third processor-executable program, the second general-purpose processor is configured to operate as the vertex processing device 510, the stitching decision unit 530, the correspondence generator 53, and the image processing device 520.

The above are merely preferred embodiments of the invention and are not intended to limit the scope of the claims of the invention; all equivalent changes or modifications made without departing from the spirit disclosed by the invention shall fall within the scope of the following claims.

11A cube structure
11B~11E structures
12, 62 sphere
22, 32, 33 body
41, A(1)~A(4) overlap regions
42, 43 non-overlapping regions
70 ideal imaging position
73 camera system center of the image capture module 51
74 image center of a lens image
75 object
76 lens center
78 actual imaging position
81 intersection/vertex
500 projection image processing system
51 image capture module
52 compensation device
53 correspondence generator
510 vertex processing device
520 image processing device
530 stitching decision unit
521 rasterization engine
522 texture mapping circuit
52a, 52b texture mapping engines
523 blending unit
524 destination buffer
525 measurement unit
D1~D7 optical axes of lenses
R(1)~R(20) control points
M(1)~M(20) measurement regions
first angle
second angle
V1 first vector
V2 second vector
V3 third vector
O1~O3 intersections of the lens optical axes

[FIG. 1A] shows the relationship between a cube structure 11A and a sphere 12, as disclosed in Taiwan Patent No. I728620.
[FIG. 1B] shows an equirectangular panoramic image derived from an equirectangular projection of the six lens images (top, bottom, back, left, right, front) of a six-lens camera mounted on the six faces of the cube structure 11A.
[FIG. 2A] illustrates an outward-facing six-lens camera mounted on the structure 11A of FIG. 1A.
[FIG. 2B] illustrates an outward-facing three-lens camera mounted on the structure 11B and protruding from the body 22.
[FIG. 3A] shows two side views of an inward-facing three-lens camera mounted on the structure 11C, located inside the body 32 with its lenses protected from wear, according to an embodiment of the invention.
[FIG. 3B] shows two side views of an inward-facing two-lens camera mounted on the structure 11D according to an embodiment of the invention.
[FIG. 4A] illustrates the mounting of an inward-facing three-lens camera and an outward-facing three-lens camera.
[FIG. 4B] illustrates a wide-angle image containing three lens images (output by the outward-facing three-lens camera of FIG. 4A).
[FIG. 4C] illustrates a wide-angle image containing three lens images (output by the inward-facing three-lens camera of FIG. 4A).
[FIG. 4D] illustrates a wide-angle image containing two lens images (output by the inward-facing two-lens camera of FIG. 3B).
[FIG. 5A] is a block diagram of a projection image processing system according to an embodiment of the invention.
[FIG. 5B] is a schematic diagram of the image processing device 520 according to an embodiment of the invention.
[FIG. 6A] shows the relationship between a structure 11C and a sphere 62.
[FIG. 6B] shows a triangular mesh used to model the surface of a sphere.
[FIG. 6C] shows a polygonal mesh used to compose/model an equirectangular projection image.
[FIG. 7A] shows that, after the vertex processing device 510 modifies the texture coordinates of all vertices in each lens image according to the optimal stitching coefficients, the mismatch image defects caused by the offset ofs (the distance of the lens center 76 from the camera system center 73) are improved.
[FIG. 7B] shows scanning each row of pixels from left to right to obtain the left boundary point of each row, and from right to left to obtain the right boundary point of each row.
[FIG. 7C] shows scanning each column of pixels from top to bottom to obtain the top boundary point of each column, and from bottom to top to obtain the bottom boundary point of each column.
[FIG. 8A] illustrates a grid for modeling an equirectangular wide-angle image, the equirectangular wide-angle image containing two overlap regions A(1)~A(2) and the three lens images from the inward-facing three-lens camera of FIG. 3A.
[FIG. 8B] illustrates two default rows with different numbers of quadrilaterals and P1=3.
[FIG. 8C], related to FIG. 8A, illustrates an equirectangular wide-angle image containing twenty control points R(1)~R(20) and ten measurement regions M(1)~M(10), with each control row containing five control points (P1=5).
[FIG. 8D] illustrates the positional relationship between a target vertex P and ten control points R(1)~R(10) in the lens-C image.
[FIG. 9A] is a flowchart of a method for determining the optimal stitching coefficients of the control points according to an embodiment of the invention.
[FIG. 9B] is a flowchart of a method for performing the coefficient decision operation of step S912 according to an embodiment of the invention.
[FIG. 9C] shows that, among all paths of the link metric, the sum of the link metric values of the path 0-1-1 (solid-line path) is the smallest.
[FIG. 10A] illustrates an outward-facing four-lens camera mounted on the structure 11E.
[FIG. 10B] illustrates a grid for modeling an equirectangular panoramic image, the equirectangular panoramic image containing four overlap regions A(1)~A(4) and the four lens images generated by the outward-facing four-lens camera of FIG. 10A.
[FIG. 10C], based on FIG. 10B, illustrates an equirectangular panoramic image containing twenty control points R(1)~R(20) and twenty measurement regions M(1)~M(20), with each control row containing five control points (P1=5).
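The boundary scan of FIGS. 7B and 7C, which locates the image center of a lens image, can be sketched as follows. This is a hedged Python illustration under the assumptions that the image is a 2-D array of luma values and that a pixel belongs to the lens image when its luma reaches a threshold; the function name and data layout are not from the patent.

```python
# Image-center estimation by row/column boundary scans: for each row, the
# first and last pixels at or above the luma threshold are the left/right
# boundary points; for each column, the top/bottom boundary points likewise.
# The image center is the mean of row centers (x) and column centers (y).

def image_center(pixels, luma_threshold):
    """pixels: 2-D list of luma values; returns (cx, cy)."""
    h, w = len(pixels), len(pixels[0])
    row_centers, col_centers = [], []
    for y in range(h):
        xs = [x for x in range(w) if pixels[y][x] >= luma_threshold]
        if xs:  # left boundary = xs[0], right boundary = xs[-1]
            row_centers.append((xs[0] + xs[-1]) / 2.0)
    for x in range(w):
        ys = [y for y in range(h) if pixels[y][x] >= luma_threshold]
        if ys:  # top boundary = ys[0], bottom boundary = ys[-1]
            col_centers.append((ys[0] + ys[-1]) / 2.0)
    cx = sum(row_centers) / len(row_centers)
    cy = sum(col_centers) / len(col_centers)
    return cx, cy
```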

74 image center of the lens-C image
R(1)~R(10) control points
first angle
second angle
V1 first vector
V2 second vector
V3 third vector

Claims (25)

1. An image processing system, comprising: a camera with M lenses, configured to capture a view covering an X-degree horizontal field of view (HFOV) and a Y-degree vertical field of view (VFOV) to generate M lens images; a compensation device, configured to generate a projection image according to a first vertex list and the M lens images; and a correspondence generator, configured to perform a set of operations comprising: calibrating a plurality of vertices to define a plurality of first vertex mappings between the M lens images and the projection image; horizontally and vertically scanning the pixels of each lens image to determine the image center of each lens image; determining the texture coordinates of all control points according to the first vertex mappings and P1 control points in each overlap region of the projection image; and determining, for each vertex in each lens image, two neighboring control points and a coefficient blending weight according to the texture coordinates of all control points and the image center of each lens image, to generate the first vertex list; wherein X<=360, Y<180, M>=2, and P1>=3.

2. The system of claim 1, wherein the first vertex list comprises the vertices having a first data structure, and the first data structure of each vertex comprises the first vertex mapping between the M lens images and the projection image, two indices of the two neighboring control points in each lens image, and the coefficient blending weight.
3. The system of claim 1, wherein the projection image is a set of quadrilaterals defined by the vertices, and the operation of determining the texture coordinates of all control points comprises: in each overlap region of the projection image, selecting one row of quadrilaterals as a default row; when the projection image is a wide-angle image, defining each default row, the leftmost row of quadrilaterals, and the rightmost row of quadrilaterals of the projection image as control rows; when the projection image is a panoramic image, defining each default row as a control row; placing a top control point and a bottom control point at the centers of the topmost quadrilateral and the bottommost quadrilateral of each control row, to determine the x coordinates of the P1 control points in each control row of the projection image; dividing the distance between the top control point and the bottom control point by (P1-2), to determine the y coordinates of the P1 control points in each control row of the projection image; and interpolating according to the first vertex mappings and the x and y coordinates of each control point, to obtain the texture coordinates of each control point.
4. The system of claim 3, wherein the operation of determining the texture coordinates of all control points further comprises: for each quadrilateral i of each default row, searching for its closest control point j; and classifying the quadrilateral i into a measurement region j; wherein 1<=i<=n1, 1<=j<=P1, and n1 denotes the number of quadrilaterals in each default row.

5. The system of claim 1, wherein the projection image is derived from a default projection of the M lens images, and the projection image is one of a panoramic image and a wide-angle image.

6. The system of claim 5, wherein the default projection is one of an equirectangular projection, a Miller projection, a Mercator projection, a Lambert cylindrical equal-area projection, and a Pannini projection.
7. The system of claim 1, wherein the operation of horizontally and vertically scanning each lens image comprises: for a target lens image, scanning each row of pixels from left to right and from right to left, respectively, according to a luma threshold, to determine a left boundary point and a right boundary point of each row; scanning each column of pixels from top to bottom and from bottom to top, respectively, according to the luma threshold, to determine a top boundary point and a bottom boundary point of each column; calculating the row center of each row from the left boundary point and the right boundary point of that row; calculating the column center of each column from the top boundary point and the bottom boundary point of that column; averaging the row centers of all rows to obtain the x coordinate of the image center of the target lens image; and averaging the column centers of all columns to obtain the y coordinate of the image center of the target lens image.
8. The system of claim 1, wherein the coefficient blending weight of a target vertex in a specific lens image is related to a first angle and a second angle, wherein the first angle is formed between a first vector and a second vector, and the second angle is formed between a third vector and the second vector; wherein the first vector starts at the image center of the specific lens image and ends at a first control point of the two neighboring control points, the second vector starts at the image center of the specific lens image and ends at the target vertex, and the third vector starts at the image center of the specific lens image and ends at a second control point of the two neighboring control points.

9. The system of claim 1, wherein the compensation device comprises: a stitching decision unit, configured to determine, in a measurement mode, a set of optimal stitching coefficients of the control points according to a two-dimensional error table, wherein the two-dimensional error table comprises a plurality of test stitching coefficients and a plurality of accumulated pixel-value errors of a plurality of measurement regions corresponding to the control points in each overlap region; a vertex processing device, configured to modify the texture coordinates of all vertices from the first vertex list in each lens image according to the test stitching coefficients in the measurement mode, or according to the set of optimal stitching coefficients in a display mode, to generate a second vertex list; and an image processing device, configured to receive the M lens images and the second vertex list, to form the projection image in the display mode, and to measure the accumulated pixel-value errors of the measurement regions in the measurement mode; wherein the values of the test stitching coefficients are related to the offset of a lens center of the M lenses relative to the camera system center of the camera with M lenses; and wherein the second vertex list comprises the vertices having a second data structure, and the second data structure of each vertex defines a second vertex mapping between the M lens images and the projection image.

10. The system of claim 9, wherein the vertex processing device is further configured to perform, in the display mode, the following operations: for a target vertex of the first vertex list, (1) in each lens image, retrieving two selected coefficients from the optimal stitching coefficients according to the two indices of the two neighboring control points in the first data structure of the target vertex; (2) in each lens image, calculating an interpolated stitching coefficient according to the two selected coefficients and the coefficient blending weight in the first data structure of the target vertex; (3) in each lens image, calculating a modified texture coordinate according to the interpolated stitching coefficient and the texture coordinates in the first data structure of the target vertex; and (4) repeating steps (1)~(3) until the modified texture coordinates of all vertices have been calculated, to generate the second vertex list.
11. The system of claim 1, wherein the camera with M lenses is an inward-facing M-lens camera, and the intersection of the optical axes of the inward-facing M lenses is formed above the inward-facing M lenses.

12. The system of claim 1, wherein the camera with M lenses is an outward-facing M-lens camera, and the intersection of the optical axes of the outward-facing M lenses is formed below the outward-facing M lenses.

13. An image processing method, comprising: calibrating a plurality of vertices to define a plurality of first vertex mappings between M lens images and a projection image, wherein the M lens images are generated by a camera with M lenses capturing a view covering an X-degree horizontal field of view and a Y-degree vertical field of view; horizontally and vertically scanning the pixels of each lens image to determine the image center of each lens image; determining the texture coordinates of all control points according to the first vertex mappings and P1 control points in each overlap region of the projection image; determining, for each vertex in each lens image, two neighboring control points and a coefficient blending weight according to the texture coordinates of all control points and the image center of each lens image, to generate a first vertex list; and generating the projection image according to the first vertex list and the M lens images; wherein X<=360, Y<180, M>=2, and P1>=3.
14. The method of claim 13, wherein the first vertex list comprises the vertices having a first data structure, and the first data structure of each vertex comprises the first vertex mapping between the M lens images and the projection image, two indices of the two neighboring control points in each lens image, and the coefficient blending weight.

15. The method of claim 13, wherein the projection image is a set of quadrilaterals defined by the vertices, and the step of determining the texture coordinates of all control points comprises: in each overlap region of the projection image, selecting one row of quadrilaterals as a default row; when the projection image is a wide-angle image, defining each default row, the leftmost row of quadrilaterals, and the rightmost row of quadrilaterals of the projection image as control rows; when the projection image is a panoramic image, defining each default row as a control row; placing a top control point and a bottom control point at the centers of the topmost quadrilateral and the bottommost quadrilateral of each control row, to determine the x coordinates of the P1 control points in each control row of the projection image; dividing the distance between the top control point and the bottom control point by (P1-2), to determine the y coordinates of the P1 control points in each control row of the projection image; and interpolating according to the first vertex mappings and the x and y coordinates of each control point, to obtain the texture coordinates of each control point.

16. The method of claim 15, wherein the step of determining the texture coordinates of all control points further comprises: for each quadrilateral i of each default row, searching for its closest control point j; and classifying the quadrilateral i into a measurement region j; wherein 1<=i<=n1, 1<=j<=P1, and n1 denotes the number of quadrilaterals in each default row.

17. The method of claim 13, wherein the projection image is derived from a default projection of the M lens images, and the projection image is one of a panoramic image and a wide-angle image.

18. The method of claim 17, wherein the default projection is one of an equirectangular projection, a Miller projection, a Mercator projection, a Lambert cylindrical equal-area projection, and a Pannini projection.
19. The method of claim 13, wherein the step of horizontally and vertically scanning each lens image comprises: for a target lens image, scanning each row of pixels from left to right and from right to left, respectively, according to a luma threshold, to determine a left boundary point and a right boundary point of each row; scanning each column of pixels from top to bottom and from bottom to top, respectively, according to the luma threshold, to determine a top boundary point and a bottom boundary point of each column; calculating the row center of each row from the left boundary point and the right boundary point of that row; calculating the column center of each column from the top boundary point and the bottom boundary point of that column; averaging the row centers of all rows to obtain the x coordinate of the image center of the target lens image; and averaging the column centers of all columns to obtain the y coordinate of the image center of the target lens image.
The method of claim 13, wherein the coefficient blending weight of a target vertex in a specific lens image is related to a first angle and a second angle, the first angle being formed between a first vector and a second vector and the second angle being formed between a third vector and the second vector, wherein the first vector starts from the image center of the specific lens image and ends at a first control point of the two adjacent control points, the second vector starts from the image center of the specific lens image and ends at the target vertex, and the third vector starts from the image center of the specific lens image and ends at a second control point of the two adjacent control points.
The method of claim 13, wherein the step of generating the projection image comprises: determining a set of optimal stitching coefficients for the control points according to a two-dimensional error table, wherein the two-dimensional error table comprises a plurality of test stitching coefficients and a plurality of accumulated pixel-value errors of a plurality of measurement areas corresponding to the control points in each overlap region; correcting, according to the set of optimal stitching coefficients, the texture coordinates in each lens image of all vertices from the first vertex list, to generate a second vertex list; and receiving the M lens images and the second vertex list to form the projection image; wherein the values of the test stitching coefficients are related to an offset of a lens center of the M lenses relative to a camera system center of the camera with M lenses; and wherein the second vertex list comprises the vertices having a second data structure, the second data structure of each vertex defining a second vertex mapping between the M lens images and the projection image.
The method of claim 21, wherein the step of correcting all vertices from the first vertex list comprises: for a target vertex of the first vertex list, (1) for each lens image, retrieving two selected coefficients from the optimal stitching coefficients according to the two indices of the two adjacent control points in the first data structure of the target vertex; (2) for each lens image, calculating an interpolated stitching coefficient according to the two selected coefficients and the coefficient blending weight in the first data structure of the target vertex; (3) for each lens image, calculating a corrected texture coordinate according to the interpolated stitching coefficient and the texture coordinates in the first data structure of the target vertex; and (4) repeating steps (1)-(3) until the corrected texture coordinates of all vertices have been calculated, to generate the second vertex list.
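Claims 20 and 22 together describe how a vertex's texture coordinate is corrected: an angle-based blending weight combines the stitching coefficients of the two adjacent control points, and the blended coefficient rescales the coordinate. The sketch below assumes the weight is the ratio of the first angle to the sum of both angles, and that step (3) scales the coordinate radially about the image center; the claims only state these quantities are "related", so both formulas are illustrative assumptions:

```python
import math

def blend_weight(center, vertex, cp1, cp2):
    """Claim 20 (one plausible reading): ratio of the angle between
    (center->cp1, center->vertex) to the sum of that angle and the
    angle between (center->cp2, center->vertex)."""
    def angle(u, v):
        dot = u[0] * v[0] + u[1] * v[1]
        return math.acos(max(-1.0, min(1.0,
                         dot / (math.hypot(*u) * math.hypot(*v)))))
    v1 = (cp1[0] - center[0], cp1[1] - center[1])        # first vector
    v2 = (vertex[0] - center[0], vertex[1] - center[1])  # second vector
    v3 = (cp2[0] - center[0], cp2[1] - center[1])        # third vector
    a1, a2 = angle(v1, v2), angle(v3, v2)
    return a1 / (a1 + a2)

def correct_vertex(tex, center, coeffs, idx1, idx2, w):
    """Claim 22, steps (1)-(3) for one vertex in one lens image.
    The radial-scaling form of step (3) is an assumption."""
    c1, c2 = coeffs[idx1], coeffs[idx2]      # (1) two selected coefficients
    c = w * c1 + (1.0 - w) * c2              # (2) interpolated coefficient
    return (center[0] + c * (tex[0] - center[0]),   # (3) corrected
            center[1] + c * (tex[1] - center[1]))   #     texture coordinate
```

Step (4) of claim 22 simply loops these two helpers over every vertex of the first vertex list.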
The method of claim 21, wherein the step of determining the set of optimal stitching coefficients of the control points comprises: (a) according to the offsets of the lens centers of the M lenses relative to the camera system center of the camera with M lenses, setting the values of the test stitching coefficients all equal to one of a plurality of preset values within a preset value range; (b) correcting, according to the values of the test stitching coefficients, the texture coordinates in each lens image of all vertices from the first vertex list; (c) calculating the accumulated pixel-value errors of the measurement areas; (d) repeating steps (a)-(c) until all the preset values within the preset value range have been processed, to generate the two-dimensional error table; and (e) determining, group by group, the optimal stitching coefficient of each target control point according to a local minimum among the accumulated pixel-value errors of the control points in each decision group, wherein each decision group comprises the target control point and one or two adjacent control points.
The method of claim 13, wherein the camera with M lenses is an inward-facing M-lens camera, and the intersection of the optical axes of the inward-facing M lenses is formed above the inward-facing M lenses.
The method of claim 13, wherein the camera with M lenses is an outward-facing M-lens camera, and the intersection of the optical axes of the outward-facing M lenses is formed below the outward-facing M lenses.
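Steps (a)-(e) of claim 23 amount to a brute-force sweep: build the two-dimensional error table by evaluating every preset value, then pick each control point's coefficient from its decision group. In this sketch `error_fn(value, j)` stands in for steps (b)-(c), i.e. correcting the vertices with every test coefficient set to `value` and accumulating the pixel-value error of measurement area j; minimizing the summed error of the decision group is one plausible reading of the "local minimum" rule in step (e):

```python
def optimal_coeffs(presets, error_fn, P1):
    """Sketch of claim 23: sweep all preset values (steps (a)-(d)) to
    build error table E[t][j], then choose, per control point j, the
    preset whose decision group (j and its one or two neighbours) has
    the smallest summed accumulated error (step (e))."""
    # (a)-(d): 2-D error table, one row per preset value,
    # one column per measurement area / control point
    table = [[error_fn(v, j) for j in range(P1)] for v in presets]
    best = []
    for j in range(P1):
        group = [k for k in (j - 1, j, j + 1) if 0 <= k < P1]
        t = min(range(len(presets)),
                key=lambda t: sum(table[t][k] for k in group))
        best.append(presets[t])
    return best
```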
TW110149260A 2021-12-29 Image processing system and method thereof for generating projection images based on a multiple-lens camera TWI837563B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW110149260A TWI837563B (en) 2021-12-29 Image processing system and method thereof for generating projection images based on a multiple-lens camera


Publications (2)

Publication Number Publication Date
TW202327349A TW202327349A (en) 2023-07-01
TWI837563B true TWI837563B (en) 2024-04-01


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW202101372A (en) 2019-03-05 2021-01-01 信驊科技股份有限公司 Method of adjusting texture coordinates based on control regions in a panoramic image


Similar Documents

Publication Publication Date Title
TWI728620B (en) Method of adjusting texture coordinates based on control regions in a panoramic image
US10104288B2 (en) Method and apparatus for generating panoramic image with stitching process
CN106875339B (en) Fisheye image splicing method based on strip-shaped calibration plate
US9661257B2 (en) Projection system, image processing device, and projection method
CN105243637B (en) One kind carrying out full-view image joining method based on three-dimensional laser point cloud
TWI783423B (en) Method of compensating for color differences between adjacent lens images in a panoramic image
KR101915729B1 (en) Apparatus and Method for Generating 360 degree omni-directional view
TW201717613A (en) An image calibrating, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof
KR101912396B1 (en) Apparatus and Method for Generating Image at any point-view based on virtual camera
CN113301274B (en) Ship real-time video panoramic stitching method and system
NL2016660B1 (en) Image stitching method and device.
CN106534670B (en) It is a kind of based on the panoramic video generation method for connecting firmly fish eye lens video camera group
CN110519528B (en) Panoramic video synthesis method and device and electronic equipment
CN108269234B (en) Panoramic camera lens attitude estimation method and panoramic camera
CN115830103A (en) Monocular color-based transparent object positioning method and device and storage medium
CN116437165A (en) Image processing system and method thereof
TWI837563B (en) Image processing system and method thereof for generating projection images based on a multiple-lens camera
TWI762353B (en) Method for generating projection image with scaling adjustment and seam cut stitching
RU2579532C2 (en) Optoelectronic stereoscopic range-finder
JP4548228B2 (en) Image data creation method
JP5446285B2 (en) Image processing apparatus and image processing method
TW202327349A (en) Image processing system and method thereof for generating projection images based on a multiple-lens camera
US11875473B2 (en) Method for generating projection image with scaling adjustment and seam cut stitching
CN115942103A (en) Multiprocessor system suitable for multi-lens camera and image processing method
TWI807845B (en) System and method of generating projection image with region of interest