AU2016202168A1 - Image geometry calibration using multiscale alignment pattern - Google Patents



Publication number
AU2016202168A1
Authority
AU
Australia
Prior art keywords
ring
image
calibration pattern
captured
pattern
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2016202168A
Inventor
Matthew Raphael Arnison
Peter Alleine Fletcher
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc
Priority to AU2016202168A
Publication of AU2016202168A1


Landscapes

  • Image Processing (AREA)

Abstract

IMAGE GEOMETRY CALIBRATION USING A MULTISCALE ALIGNMENT PATTERN
A method of decoding a calibration pattern, the method comprising: obtaining an image of the calibration pattern, the calibration pattern having a plurality of concentric rings in a frequency domain, wherein at least two of the plurality of concentric rings have a different radius; locating, in the obtained image, at least one ring of the plurality of concentric rings in the frequency domain; matching the located at least one ring in the obtained image to a corresponding ring of the plurality of concentric rings in the calibration pattern; determining a scale of the obtained image according to the match of the located at least one ring, and decoding the calibration pattern using the determined scale of the obtained image. (Fig. 8 flowchart: Start → Fourier transform → Angular projection → Detect rings → Match rings → Estimate scale → Estimate rotation → Estimate translation → End.)

Description

IMAGE GEOMETRY CALIBRATION USING A MULTISCALE ALIGNMENT
PATTERN
Technical Field [0001] The current invention relates to image geometry calibration using a multiscale alignment pattern, and in particular to a method of decoding a calibration pattern, and an electronic device, computer and computer program arranged to decode a calibration pattern.
Background [0002] In many imaging and display system applications, it can be advantageous to calibrate the imaging geometry using calibration patterns. Imaging and display systems which benefit from calibration include projectors and cameras.
[0003] For example, when using a projector, it is desirable for the projected image to appear rectangular on the projection surface. If the projection surface is not flat or is at an angle to the projector, then the projection will appear distorted to the viewer. This distortion can be considered in each local region as an affine transform, including rotation, scale, translation, shear and aspect ratio. If the distortion can be accurately estimated, then the projected image can be compensated for the distortion, and the viewer will see a high quality projected image. It is desirable that the distortion can be estimated over a wide range of conditions including at multiple scales. Large scale variations can be caused by changes in the overall distance between the projector and the projection surface, local changes in the projection distance caused by the shape and orientation of the projection surface, or changes in the focal length of the projector. It is also desirable that the distortion can be estimated quickly, so that the compensation can adapt quickly to changes in the projector, such as movements of the projector or temperature changes, or changes in the geometrical relationship between the projector and the projection surface, such as movements of the projection screen.
[0004] A projector may include a camera, which is used to calibrate the projection image (i.e. output image) geometry responsible for distortion of the projected or output image. The projector-camera geometry can be calibrated in the factory as part of manufacturing the projector. The calibration is performed by projecting a known calibration pattern on to a projection surface, imaging the calibration pattern (i.e. capturing an image of the calibration pattern) using the camera, estimating the translation alignment of the calibration pattern at multiple local positions in the global camera image that would be required to calibrate the projection image geometry, and using the local alignment estimates (i.e. the estimated translation alignment of the calibration pattern) and the projector-camera geometry to estimate the shape of the projection surface.
[0005] For accurate calibration pattern alignment, a calibration pattern with high spatial frequency content is desired. The projected pattern may appear at a large range of different scales depending on the distance of the projection surface from the projector. For high accuracy calibration patterns, the scale of the calibration pattern needs to be determined before the translation can be estimated.
[0006] One method for estimating the scale of a calibration pattern is to place scale invariant marks in the corners of the projected image. Scale invariant corner marks include circles, lines or crosses. The positions of the corner marks in the camera image are detected, for example using correlation or edge detection, and the global image scale is estimated. However, if the projection surface is not orthogonal to the optical axis, or is curved or has discontinuities, then the local scale at each point in the image can be significantly different from the global scale, and the local translation alignment will have low accuracy.
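As a concrete illustration of the detection step in this corner-mark approach, the following Python sketch (using NumPy; the function name, mark size and image size are illustrative and not taken from the patent) locates a mark template by FFT-based cross-correlation:

```python
import numpy as np

def locate_mark(image, template):
    """Return the (row, col) of the best match of a corner-mark template,
    found by circular cross-correlation computed via the FFT.
    A real detector would normalise the correlation and handle marks
    that lie partially outside the image."""
    padded = np.zeros_like(image)
    padded[:template.shape[0], :template.shape[1]] = template
    # Correlation theorem: corr = IFFT(FFT(image) * conj(FFT(template)))
    corr = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(padded))).real
    return np.unravel_index(int(np.argmax(corr)), corr.shape)
```

Once two or more marks have been located, the global scale follows as the ratio of the detected mark spacing to the known mark spacing in the projected calibration image. As the paragraph above notes, a single global estimate breaks down when the surface is tilted or curved.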
[0007] Another method for estimating the scale change between two images, which can be applied to either natural images or images of a calibration pattern, is to apply a Fourier transform to both images, and then take the modulus. The Fourier modulus is then translation invariant. A 2D log-polar transform is then applied to the Fourier modulus of both images, which converts scale and rotation into translation. A correlation between the transformed images is used to estimate the scale and rotation. However, this method has limited tolerance to large changes in scale, and the 2D (two-dimensional) transforms and correlations involved are relatively expensive to compute if real-time distortion compensation performance is required.
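The log-polar method can be sketched as follows (Python with NumPy; for brevity this version sums the log-polar spectrum over angle and correlates only along the log-radius axis, so it recovers scale but not rotation, and the sampling parameters are illustrative):

```python
import numpy as np

def log_polar_spectrum(img, n_theta=90, n_r=64):
    """Modulus of the 2D FFT resampled on a log-polar grid (nearest
    neighbour).  After this transform, an isotropic scaling of the image
    becomes a shift along the log-radius axis."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = f.shape
    cy, cx = h // 2, w // 2
    step = np.log(min(cy, cx) - 1) / (n_r - 1)
    radii = np.exp(step * np.arange(n_r))           # radius 1 .. r_max
    theta = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    ys = np.rint(cy + np.outer(radii, np.sin(theta))).astype(int)
    xs = np.rint(cx + np.outer(radii, np.cos(theta))).astype(int)
    return f[ys, xs], step

def estimate_scale(img_a, img_b):
    """Relative scale from the circular 1D correlation of the two
    angle-summed log-polar spectra."""
    lp_a, step = log_polar_spectrum(img_a)
    lp_b, _ = log_polar_spectrum(img_b)
    pa, pb = lp_a.sum(axis=1), lp_b.sum(axis=1)
    corr = np.fft.ifft(np.fft.fft(pa) * np.conj(np.fft.fft(pb))).real
    shift = int(np.argmax(corr))
    if shift > len(pa) // 2:
        shift -= len(pa)                            # wrap to a signed shift
    return float(np.exp(shift * step))
```

Even in this reduced form the two 2D FFTs dominate the cost, which illustrates the point above about real-time expense.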
[0008] Another method for estimating the scale of a calibration pattern can be applied when the calibration pattern is tiled in a regular grid to form a calibration chart. The captured image is windowed to a size which is large enough to include several tiled copies of the calibration pattern. A Fourier transform is applied to the windowed captured image, which produces a grid of peaks in the modulus at positions corresponding to the spatial frequencies of the tiling grid in the calibration chart. A 2D polar transform is applied to the Fourier transform modulus, the polar transform is projected along the radial axis, and a peak is detected. The peak position in the polar transform can be used to estimate the scale of the calibration chart. However, this method requires a large image window with a consistent scale across the window, which reduces the ability to measure local changes in the calibration chart distortion. This method also requires multiple 2D transforms which are relatively expensive to compute.
[0009] Another method for estimating the scale of a calibration pattern is to create a calibration pattern which consists of a circular structure of peaks in the Fourier domain, where the peaks have the same amplitude and a pseudo-random phase. The calibration pattern is the inverse Fourier transform of the circular structure of peaks. To detect the scale of the calibration pattern in a captured image, a Fourier transform is applied to the captured image, and 2D peak finding is applied to the modulus to create a set of detected peaks. An ellipse is fitted to the detected peak positions, and the estimated ellipse parameters are used to estimate an affine transform, which includes a scaling factor. However, a Fourier circular structure of peaks has a relatively small bandwidth for estimating calibration pattern translation, which means it is vulnerable to imaging noise. If the bandwidth is increased by adding additional peaks around the Fourier circular structure, then the 2D peak finding will become unreliable. In addition, the alignment accuracy is reduced when imaging the calibration pattern at multiple scales. If the calibration pattern is imaged at a small scale, so that the spatial frequencies in the calibration pattern are increased in the captured image, then the alignment accuracy will decrease due to the reduction of calibration pattern signal at high spatial frequencies caused by the modulation transfer function of the projector lens and camera lens. If the calibration pattern is imaged at a large scale, so that the spatial frequencies in the calibration pattern are lower in the captured image, then the alignment accuracy will be reduced because a low spatial frequency signal is less sensitive to small changes in alignment.
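A hypothetical construction of such a pattern is sketched below (Python with NumPy; the image size, ring radius, peak count and seed are illustrative choices, not values from the patent). Unit-amplitude peaks with pseudo-random phases are placed on a circle in the frequency domain, together with their conjugate mirrors so that the inverse transform is real-valued:

```python
import numpy as np

def ring_pattern(size=256, radius=40, n_peaks=32, seed=1):
    """Calibration pattern whose Fourier modulus is a circular structure
    of equal-amplitude peaks with pseudo-random phases."""
    rng = np.random.default_rng(seed)
    spec = np.zeros((size, size), dtype=complex)
    for a, p in zip(np.linspace(0.0, np.pi, n_peaks, endpoint=False),
                    rng.uniform(0.0, 2.0 * np.pi, n_peaks)):
        u = int(round(radius * np.cos(a))) % size
        v = int(round(radius * np.sin(a))) % size
        spec[v, u] = np.exp(1j * p)
        # Hermitian mirror keeps the spatial-domain pattern real-valued.
        spec[-v % size, -u % size] = np.exp(-1j * p)
    return np.fft.ifft2(spec).real
```

Detecting this pattern then reduces to 2D peak finding on the Fourier modulus of the captured image, followed by the ellipse fit described above.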
Summary [0010] It is an object of the present invention to substantially overcome, or at least ameliorate, one or more disadvantages of existing arrangements.
[0011] This disclosure describes a method for estimating the scale of a calibration pattern in a captured image. The method improves the accuracy and speed of scale estimation at multiple scales by using a calibration pattern with multiple concentric rings in a frequency domain. At least one ring in the frequency domain of the captured image is matched to a corresponding ring in the calibration pattern in order to determine the scale of the captured image relative to the calibration pattern.
[0012] The alignment accuracy of a calibration pattern is highest when the pattern has midrange spatial frequencies in the captured image. Because the calibration pattern has multiple rings, the optimum alignment accuracy can be achieved at multiple image scales. The scale can be detected quickly using 1D (one-dimensional) peak finding in an angular projection of the rings in the frequency domain.
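Under the assumptions above, the scale detection step might look like the following sketch (Python with NumPy; the ring radii, the DC cut-off, and the nearest-radius matching are illustrative simplifications — in particular, the disclosed method disambiguates between rings using the distinct radii of the multiscale pattern rather than a simple nearest match):

```python
import numpy as np

def radial_profile(img, n_r=None):
    """Angular projection of the Fourier modulus: the spectrum averaged
    over angle at each integer radius.  Concentric rings in the frequency
    domain become 1D peaks in this profile."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = f.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    r = np.rint(np.hypot(yy - cy, xx - cx)).astype(int)
    n_r = n_r or min(cy, cx)
    prof = np.bincount(r.ravel(), weights=f.ravel(), minlength=n_r)[:n_r]
    counts = np.bincount(r.ravel(), minlength=n_r)[:n_r]
    return prof / np.maximum(counts, 1)

def estimate_ring_scale(img, pattern_radii):
    """1D peak finding on the radial profile, then matching of the
    detected ring to the nearest known ring radius.  Returns the
    frequency-domain scale factor (the spatial scale is its reciprocal)."""
    prof = radial_profile(img)
    prof[:4] = 0.0                  # suppress the DC / low-frequency lobe
    detected = int(np.argmax(prof))
    nearest = min(pattern_radii, key=lambda r0: abs(r0 - detected))
    return detected / nearest
```

The key cost advantage is visible here: after one 2D FFT, everything is 1D (a radial projection and a 1D peak search), in contrast to the 2D correlations of the prior-art methods.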
[0013] According to a first aspect of the present disclosure, there is provided a method of decoding a calibration pattern, the method comprising: obtaining an image of the calibration pattern, the calibration pattern having a plurality of concentric rings in a frequency domain, wherein at least two of the plurality of concentric rings have a different radius; locating, in the obtained image, at least one ring of the plurality of concentric rings in the frequency domain; matching the located at least one ring in the obtained image to a corresponding ring of the plurality of concentric rings in the calibration pattern; determining a scale of the obtained image according to the match of the located at least one ring, and decoding the calibration pattern using the determined scale of the obtained image.
[0014] According to a second aspect of the present disclosure, there is provided an electronic device or computer system arranged to decode a calibration pattern, the device or system arranged to: obtain an image of the calibration pattern, the calibration pattern having a plurality of concentric rings in a frequency domain, wherein at least two of the plurality of concentric rings have a different radius; locate, in the obtained image, at least one ring of the plurality of concentric rings in the frequency domain; match the located at least one ring in the obtained image to a corresponding ring of the plurality of concentric rings in the calibration pattern; determine a scale of the obtained image according to the match of the located at least one ring, and decode the calibration pattern using the determined scale of the obtained image.
[0015] According to another aspect of the present disclosure, there is provided a computer readable storage medium having a computer program recorded therein, the program being executable by a computer apparatus to make the computer perform a method of decoding a calibration pattern, said program comprising: code for obtaining an image of the calibration pattern, the calibration pattern having a plurality of concentric rings in a frequency domain, wherein at least two of the plurality of concentric rings have a different radius; code for locating, in the obtained image, at least one ring of the plurality of concentric rings in the frequency domain; code for matching the located at least one ring in the obtained image to a corresponding ring of the plurality of concentric rings in the calibration pattern; code for determining a scale of the obtained image according to the match of the located at least one ring, and code for decoding the calibration pattern using the determined scale of the obtained image.
[0016] According to another aspect of the present disclosure there is provided a computer program product including a computer readable medium having recorded thereon a computer program for implementing any one of the methods described above.
[0017] Other aspects are also disclosed.
Brief Description of the Drawings [0018] One or more embodiments of the present invention will now be described with reference to the drawings, in which: [0019] Figs. 1A and 1B are schematic block diagrams of a computer and/or server on which the embodiments of the invention may be practised; [0020] Figs. 1C and 1D are schematic block diagrams of an embedded electronic device, such as a projector or camera, on which the embodiments of the invention may be practised; [0021] Fig. 2 is a diagram of a projector including a camera and a projection surface; [0022] Fig. 3 is an example of the calibration pattern including the intermediate images used to create the calibration pattern; [0023] Fig. 4 shows an example of calibration pattern matching and scale alignment including intermediate images and results; [0024] Fig. 5 shows an example of calibration pattern rotation alignment including intermediate images used to estimate the alignment; [0025] Fig. 6 is a schematic flow diagram illustrating a method for projection surface calibration; [0026] Fig. 7 is a schematic flow diagram illustrating a method for projection surface distortion estimation; [0027] Fig. 8 is a schematic flow diagram illustrating a method for alignment estimation; [0028] Fig. 9 is a schematic flow diagram illustrating a method for ring matching; [0029] Fig. 10 is a schematic flow diagram illustrating an alternative method for ring matching; [0030] Fig. 11 is a schematic flow diagram illustrating a method for rotation alignment estimation; [0031] Fig. 12 is a multi-projector configuration in which multiple scale alignment can be used for projection distortion compensation and projection stitching; [0032] Fig. 13 is a schematic flow diagram illustrating a process for decoding a calibration pattern.
Detailed Description including Best Mode [0033] Where reference is made in any one or more of the accompanying drawings to steps and/or features, which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.
[0034] The present disclosure provides methods for high accuracy alignment of a calibration pattern imaged at multiple scales. The methods seek to optimise the accuracy, speed and scale tolerance of the scale estimation and alignment accuracy of images of the calibration pattern. The method only requires a single captured image to estimate the alignment.
Context [0035] The arrangements presently disclosed may be implemented on a variety of hardware platforms, including, for example, in a display device such as a projector, in an imaging device such as a camera, on a general purpose computer (PC) or in a cloud computing implementation, such as on a server. When implemented on a general purpose computer, the arrangements disclosed cause the general purpose computer to operate in a non-standard manner.
Computer Description [0036] Figs. 1A and 1B depict a general-purpose computer system 1300, upon which the various arrangements described can be practiced.
[0037] As seen in Fig. 1A, the computer system 1300 includes: a computer module 1301; input devices such as a keyboard 1302, a mouse pointer device 1303, a scanner 1326, a camera 1327, a projector 1329 and a microphone 1380; and output devices including a printer 1315, a display device 1314 and loudspeakers 1317. According to various embodiments, the arrangements described herein may be implemented on a combination of various equipment including, for example, a projection system that includes a projector (where the projector includes a camera incorporated therein) and a computer, a projection system that includes a projector, a separate camera in communication with the projector and a computer, a projection system that includes a projector (where the projector includes a camera incorporated therein) and a server in communication with the projector, a projection system that includes a projector, a separate camera in communication with the projector and a server in communication with the projector and/or camera. Where the projector has a camera incorporated therein, the built-in camera is arranged to capture images output from the projector and displayed on a projection surface.
Also, where the camera is separate to the projector, the camera may be arranged to capture images output from the projector and displayed on a projection surface. The camera 1327 and/or projector 1329 may have a zoom lens in which the focal length of the lens may be varied. The camera may be a stills camera or a video camera.
[0038] An external Modulator-Demodulator (Modem) transceiver device 1316 may be used by the computer module 1301 for communicating to and from a communications network 1320 via a connection 1321. The communications network 1320 may be a wide-area network (WAN), such as the Internet, a cellular telecommunications network, or a private WAN. Where the connection 1321 is a telephone line, the modem 1316 may be a traditional “dial-up” modem. Alternatively, where the connection 1321 is a high capacity (e.g., cable) connection, the modem 1316 may be a broadband modem. A wireless modem may also be used for wireless connection to the communications network 1320.
[0039] The computer module 1301 typically includes at least one processor unit 1305, and a memory unit 1306. For example, the memory unit 1306 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The computer module 1301 also includes a number of input/output (I/O) interfaces including: an audio-video interface 1307 that couples to the video display 1314, loudspeakers 1317 and microphone 1380; an I/O interface 1313 that couples to the keyboard 1302, mouse 1303, scanner 1326, camera 1327, projector 1329 and optionally a joystick or other human interface device (not illustrated); and an interface 1308 for the external modem 1316 and printer 1315. The projector produces a projected output image. The camera may also produce an output image in the form of images generated by the camera. The scanner may also produce output images in the form of scanned images. In some implementations, the modem 1316 may be incorporated within the computer module 1301, for example within the interface 1308. The computer module 1301 also has a local network interface 1311, which permits coupling of the computer system 1300 via a connection 1323 to a local-area communications network 1322, known as a Local Area Network (LAN). As illustrated in Fig. 1A, the local communications network 1322 may also couple to the wide network 1320 via a connection 1324, which would typically include a so-called “firewall” device or device of similar functionality. The local network interface 1311 may comprise an Ethernet circuit card, a Bluetooth® wireless arrangement or an IEEE 802.11 wireless arrangement; however, numerous other types of interfaces may be practiced for the interface 1311.
[0040] The I/O interfaces 1308 and 1313 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 1309 are provided and typically include a hard disk drive (HDD) 1310. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 1312 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g., CD-ROM, DVD, Blu-ray Disc™), USB-RAM, portable, external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the system 1300.
[0041] The components 1305 to 1313 of the computer module 1301 typically communicate via an interconnected bus 1304 and in a manner that results in a conventional mode of operation of the computer system 1300 known to those in the relevant art. For example, the processor 1305 is coupled to the system bus 1304 using a connection 1318. Likewise, the memory 1306 and optical disk drive 1312 are coupled to the system bus 1304 by connections 1319. Examples of computers on which the described arrangements can be practised include IBM-PC’s and compatibles, Sun Sparcstations, Apple Mac™ or like computer systems.
[0042] The method of decoding a calibration pattern and various other methods described herein may be implemented using the computer system 1300 wherein the processes of Figs. 6 to 11, to be described, may be implemented as one or more software application programs 1333 executable within the computer system 1300. In particular, the steps of the method of decoding a calibration pattern and various other methods described herein are effected by instructions 1331 (see Fig. 1B) in the software 1333 that are carried out within the computer system 1300. The software instructions 1331 may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules perform the calibration pattern decoding method and various other methods described herein and a second part and the corresponding code modules manage a user interface between the first part and the user.
[0043] The software may be stored in a computer readable medium, including the storage devices described below, for example. The software is loaded into the computer system 1300 from the computer readable medium, and then executed by the computer system 1300. A computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product. The use of the computer program product in the computer system 1300 preferably effects an advantageous apparatus for decoding a calibration pattern and various other methods described herein.
[0044] The software 1333 is typically stored in the HDD 1310 or the memory 1306. The software is loaded into the computer system 1300 from a computer readable medium, and executed by the computer system 1300. Thus, for example, the software 1333 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 1325 that is read by the optical disk drive 1312. A computer readable medium having such software or computer program recorded on it is a computer program product. The use of the computer program product in the computer system 1300 preferably effects an apparatus for decoding a calibration pattern and various other methods described herein.
[0045] In some instances, the application programs 1333 may be supplied to the user encoded on one or more CD-ROMs 1325 and read via the corresponding drive 1312, or alternatively may be read by the user from the networks 1320 or 1322. Still further, the software can also be loaded into the computer system 1300 from other computer readable media. Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the computer system 1300 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-ray™ Disc, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 1301. Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 1301 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
[0046] The second part of the application programs 1333 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 1314. Through manipulation of typically the keyboard 1302 and the mouse 1303, a user of the computer system 1300 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 1317 and user voice commands input via the microphone 1380.
[0047] Fig. 1B is a detailed schematic block diagram of the processor 1305 and a “memory” 1334. The memory 1334 represents a logical aggregation of all the memory modules (including the HDD 1309 and semiconductor memory 1306) that can be accessed by the computer module 1301 in Fig. 1A.
[0048] When the computer module 1301 is initially powered up, a power-on self-test (POST) program 1350 executes. The POST program 1350 is typically stored in a ROM 1349 of the semiconductor memory 1306 of Fig. 1A. A hardware device such as the ROM 1349 storing software is sometimes referred to as firmware. The POST program 1350 examines hardware within the computer module 1301 to ensure proper functioning and typically checks the processor 1305, the memory 1334 (1309, 1306), and a basic input-output systems software (BIOS) module 1351, also typically stored in the ROM 1349, for correct operation. Once the POST program 1350 has run successfully, the BIOS 1351 activates the hard disk drive 1310 of Fig. 1A. Activation of the hard disk drive 1310 causes a bootstrap loader program 1352 that is resident on the hard disk drive 1310 to execute via the processor 1305. This loads an operating system 1353 into the RAM memory 1306, upon which the operating system 1353 commences operation. The operating system 1353 is a system level application, executable by the processor 1305, to fulfil various high level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface.
[0049] The operating system 1353 manages the memory 1334 (1309, 1306) to ensure that each process or application running on the computer module 1301 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the system 1300 of Fig. 1A must be used properly so that each process can run effectively. Accordingly, the aggregated memory 1334 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 1300 and how such is used.
[0050] As shown in Fig. IB, the processor 1305 includes a number of functional modules including a control unit 1339, an arithmetic logic unit (ALU) 1340, and a local or internal memory 1348, sometimes called a cache memory. The cache memory 1348 typically includes a number of storage registers 1344 - 1346 in a register section. One or more internal busses 1341 functionally interconnect these functional modules. The processor 1305 typically also has one or more interfaces 1342 for communicating with external devices via the system bus 1304, using a connection 1318. The memory 1334 is coupled to the bus 1304 using a connection 1319.
[0051] The application program 1333 includes a sequence of instructions 1331 that may include conditional branch and loop instructions. The program 1333 may also include data 1332 which is used in execution of the program 1333. The instructions 1331 and the data 1332 are stored in memory locations 1328, 1329, 1330 and 1335, 1336, 1337, respectively. Depending upon the relative size of the instructions 1331 and the memory locations 1328-1330, a particular instruction may be stored in a single memory location as depicted by the instruction shown in the memory location 1330. Alternately, an instruction may be segmented into a number of parts each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 1328 and 1329.
[0052] In general, the processor 1305 is given a set of instructions which are executed therein. The processor 1305 waits for a subsequent input, to which the processor 1305 reacts by executing another set of instructions. Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 1302, 1303, data received from an external source across one of the networks 1320, 1322, data retrieved from one of the storage devices 1306, 1309 or data retrieved from a storage medium 1325 inserted into the corresponding reader 1312, all depicted in Fig. 1A. The execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 1334.
[0053] The disclosed calibration pattern decoding arrangements, and other arrangements described herein, use input variables 1354, which are stored in the memory 1334 in corresponding memory locations 1355, 1356, 1357. These arrangements produce output variables 1361, which are stored in the memory 1334 in corresponding memory locations 1362, 1363, 1364. Intermediate variables 1358 may be stored in memory locations 1359, 1360, 1366 and 1367.
[0054] Referring to the processor 1305 of Fig. 1B, the registers 1344, 1345, 1346, the arithmetic logic unit (ALU) 1340, and the control unit 1339 work together to perform sequences of micro-operations needed to perform “fetch, decode, and execute” cycles for every instruction in the instruction set making up the program 1333. Each fetch, decode, and execute cycle comprises: [0055] a fetch operation, which fetches or reads an instruction 1331 from a memory location 1328, 1329, 1330; [0056] a decode operation in which the control unit 1339 determines which instruction has been fetched; and [0057] an execute operation in which the control unit 1339 and/or the ALU 1340 execute the instruction.
[0058] Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed by which the control unit 1339 stores or writes a value to a memory location 1332.
[0059] Each step or sub-process in the processes of Figs. 6 to 11 is associated with one or more segments of the program 1333 and is performed by the register section 1344, 1345, 1347, the ALU 1340, and the control unit 1339 in the processor 1305 working together to perform the fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 1333.
[0060] The method of decoding a calibration pattern and various other methods described herein may alternatively be implemented in dedicated hardware such as one or more integrated circuits performing the functions or sub-functions of calibration pattern decoding and various other methods. Such dedicated hardware may include graphics processors, digital signal processors, or one or more microprocessors and associated memories.
Embedded Electronic Device Description [0061] Figs. 1C and 1D collectively form a schematic block diagram of a general purpose electronic device 1401 including embedded components, upon which the method of decoding a calibration pattern and various other methods described herein are desirably practiced.
[0062] The electronic device 1401 may be, for example, a mobile phone, a portable media player, a projector or a digital camera, in which processing resources are limited. According to various embodiments, the arrangements described herein may be implemented on a combination of various embedded components with additional computers and/or servers. For example, a projection system may include a projector (where the projector includes a camera incorporated therein) and a computer. A projection system may alternatively include a projector, a separate camera in communication with the projector and a computer. As a further alternative, a projection system may include a projector (where the projector includes a camera incorporated therein) and a server in communication with the projector. As a further alternative, a projection system may include a projector, a separate camera in communication with the projector and a server in communication with the projector and/or camera. In each case, the projector and camera may be a general purpose electronic device 1401 as described. The projector and/or camera may be in communication with a computer or server as depicted in Figs 1A and 1B. As discussed above, where the projector has a camera incorporated therein, the built-in camera is arranged to capture images output from the projector and displayed on a projection surface.
Also, where the camera is separate to the projector, the camera may be arranged to capture images output from the projector and displayed on a projection surface. The camera 1327 and/or projector 1329 may have a zoom lens in which the focal length of the lens may be varied. The camera may be a stills camera or a video camera. Nevertheless, the methods to be described may also be performed on higher-level devices such as desktop computers, server computers, and other such devices with significantly larger processing resources as discussed above with reference to Figs 1A and 1B.
[0063] As seen in Fig. 1C, the electronic device 1401 comprises an embedded controller 1402. Accordingly, the electronic device 1401 may be referred to as an “embedded device.” In the present example, the controller 1402 has a processing unit (or processor) 1405 which is bidirectionally coupled to an internal storage module 1409. The storage module 1409 may be formed from non-volatile semiconductor read only memory (ROM) 1460 and semiconductor random access memory (RAM) 1470, as seen in Fig. 1D. The RAM 1470 may be volatile, non-volatile or a combination of volatile and non-volatile memory.
[0064] The electronic device 1401 includes a display controller 1407, which is connected to a video display 1414, such as a liquid crystal display (LCD) panel or the like. The display controller 1407 is configured for displaying graphical images on the video display 1414 in accordance with instructions received from the embedded controller 1402, to which the display controller 1407 is connected.
[0065] The electronic device 1401 also includes user input devices 1413 which are typically formed by keys, a keypad or like controls. In some implementations, the user input devices 1413 may include a touch sensitive panel physically associated with the display 1414 to collectively form a touch-screen. Such a touch-screen may thus operate as one form of graphical user interface (GUI) as opposed to a prompt or menu driven GUI typically used with keypad-display combinations. Other forms of user input devices may also be used, such as a microphone (not illustrated) for voice commands or a joystick/thumb wheel (not illustrated) for ease of navigation about menus.
[0066] As seen in Fig. 1C, the electronic device 1401 also comprises a portable memory interface 1406, which is coupled to the processor 1405 via a connection 1419. The portable memory interface 1406 allows a complementary portable memory device 1425 to be coupled to the electronic device 1401 to act as a source or destination of data or to supplement the internal storage module 1409. Examples of such interfaces permit coupling with portable memory devices such as Universal Serial Bus (USB) memory devices, Secure Digital (SD) cards, Personal Computer Memory Card International Association (PCMCIA) cards, optical disks and magnetic disks.
[0067] The electronic device 1401 also has a communications interface 1408 to permit coupling of the device 1401 to a computer or communications network 1320 via a connection 1421. The connection 1421 may be wired or wireless. For example, the connection 1421 may be radio frequency or optical. An example of a wired connection includes Ethernet. Examples of a wireless connection include Bluetooth™ type local interconnection, Wi-Fi (including protocols based on the standards of the IEEE 802.11 family), Infrared Data Association (IrDA) and the like.
[0068] Typically, the electronic device 1401 is configured to perform some special function. The embedded controller 1402, possibly in conjunction with further special function components 1410, is provided to perform that special function. For example, where the device 1401 is a digital camera, the components 1410 may represent a lens, focus control and image sensor of the camera. The special function components 1410 are connected to the embedded controller 1402. As another example, the device 1401 may be a mobile telephone handset. In this instance, the components 1410 may represent those components required for communications in a cellular telephone environment. Where the device 1401 is a portable device, the special function components 1410 may represent a number of encoders and decoders of a type including Joint Photographic Experts Group (JPEG), Moving Picture Experts Group (MPEG), MPEG-1 Audio Layer 3 (MP3), and the like.
[0069] The methods described hereinafter may be implemented using the embedded controller 1402, where the processes of Figs. 6 to 11 may be implemented as one or more software application programs 1333 executable within the embedded controller 1402. The electronic device 1401 of Fig. 1C implements the described methods. In particular, with reference to Fig. 1D, the steps of the described methods are effected by instructions in the software 1333 that are carried out within the controller 1402. The software instructions may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules perform the described methods and a second part and the corresponding code modules manage a user interface between the first part and the user.
[0070] The software 1333 of the embedded controller 1402 is typically stored in the nonvolatile ROM 1460 of the internal storage module 1409. The software 1333 stored in the ROM 1460 can be updated when required from a computer readable medium. The software 1333 can be loaded into and executed by the processor 1405. In some instances, the processor 1405 may execute software instructions that are located in RAM 1470. Software instructions may be loaded into the RAM 1470 by the processor 1405 initiating a copy of one or more code modules from ROM 1460 into RAM 1470. Alternatively, the software instructions of one or more code modules may be pre-installed in a non-volatile region of RAM 1470 by a manufacturer. After one or more code modules have been located in RAM 1470, the processor 1405 may execute software instructions of the one or more code modules.
[0071] The application program 1333 is typically pre-installed and stored in the ROM 1460 by a manufacturer, prior to distribution of the electronic device 1401. However, in some instances, the application programs 1333 may be supplied to the user encoded on one or more CD-ROM (not shown) and read via the portable memory interface 1406 of Fig. 1C prior to storage in the internal storage module 1409 or in the portable memory 1425. In another alternative, the software application program 1333 may be read by the processor 1405 from the network 1320, or loaded into the controller 1402 or the portable storage medium 1425 from other computer readable media. Computer readable storage media refers to any non-transitory tangible storage medium that participates in providing instructions and/or data to the controller 1402 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, flash memory, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the device 1401. Examples of transitory or nontangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the device 1401 include radio or infrared transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like. A computer readable medium having such software or computer program recorded on it is a computer program product.
[0072] The second part of the application programs 1333 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 1414 of Fig. 1C. Through manipulation of the user input device 1413 (e.g., the keypad), a user of the device 1401 and the application programs 1333 may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via loudspeakers (not illustrated) and user voice commands input via the microphone (not illustrated).
[0073] Fig. 1D illustrates in detail the embedded controller 1402 having the processor 1405 for executing the application programs 1333 and the internal storage 1409. The internal storage 1409 comprises read only memory (ROM) 1460 and random access memory (RAM) 1470. The processor 1405 is able to execute the application programs 1333 stored in one or both of the connected memories 1460 and 1470. When the electronic device 1401 is initially powered up, a system program resident in the ROM 1460 is executed. The application program 1333 permanently stored in the ROM 1460 is sometimes referred to as “firmware”.
Execution of the firmware by the processor 1405 may fulfil various functions, including processor management, memory management, device management, storage management and user interface.
[0074] The processor 1405 typically includes a number of functional modules including a control unit (CU) 1451, an arithmetic logic unit (ALU) 1452, a digital signal processor (DSP) 1453 and a local or internal memory comprising a set of registers 1454 which typically contain atomic data elements 1456, 1457, along with internal buffer or cache memory 1455. One or more internal buses 1459 interconnect these functional modules. The processor 1405 typically also has one or more interfaces 1458 for communicating with external devices via system bus 1481, using a connection 1461.
[0075] The application program 1333 includes a sequence of instructions 1462 through 1463 that may include conditional branch and loop instructions. The program 1333 may also include data, which is used in execution of the program 1333. This data may be stored as part of the instruction or in a separate location 1464 within the ROM 1460 or RAM 1470.
[0076] In general, the processor 1405 is given a set of instructions, which are executed therein. This set of instructions may be organised into blocks, which perform specific tasks or handle specific events that occur in the electronic device 1401. Typically, the application program 1333 waits for events and subsequently executes the block of code associated with that event. Events may be triggered in response to input from a user, via the user input devices 1413 of Fig. 1C, as detected by the processor 1405. Events may also be triggered in response to other sensors and interfaces in the electronic device 1401.
[0077] The execution of a set of the instructions may require numeric variables to be read and modified. Such numeric variables are stored in the RAM 1470. The disclosed method uses input variables 1471 that are stored in known locations 1472, 1473 in the memory 1470. The input variables 1471 are processed to produce output variables 1477 that are stored in known locations 1478, 1479 in the memory 1470. Intermediate variables 1474 may be stored in additional memory locations in locations 1475, 1476 of the memory 1470. Alternatively, some intermediate variables may only exist in the registers 1454 of the processor 1405.
[0078] The execution of a sequence of instructions is achieved in the processor 1405 by repeated application of a fetch-execute cycle. The control unit 1451 of the processor 1405 maintains a register called the program counter, which contains the address in ROM 1460 or RAM 1470 of the next instruction to be executed. At the start of the fetch execute cycle, the contents of the memory address indexed by the program counter is loaded into the control unit 1451. The instruction thus loaded controls the subsequent operation of the processor 1405, causing for example, data to be loaded from ROM memory 1460 into processor registers 1454, the contents of a register to be arithmetically combined with the contents of another register, the contents of a register to be written to the location stored in another register and so on. At the end of the fetch execute cycle the program counter is updated to point to the next instruction in the system program code. Depending on the instruction just executed this may involve incrementing the address contained in the program counter or loading the program counter with a new address in order to achieve a branch operation.
[0079] Each step or sub-process in the processes of the methods described below is associated with one or more segments of the application program 1333, and is performed by repeated execution of a fetch-execute cycle in the processor 1405 or similar programmatic operation of other independent processor blocks in the electronic device 1401.
Overview of the Invention
Calibration pattern generation [0080] The goals of the disclosed arrangements are to generate (e.g. develop, produce, provide, display or create) a calibration pattern and utilise a detection algorithm which can be used for high accuracy and high speed image alignment of projected output images at multiple image scales. The image processing for the alignment may be carried out, for example, in a projector and/or a camera as described with reference to Figs 1C and ID, or, in a computer or a server accessible via a communications network (1320 or 1322) as described with reference to Figs 1A and IB.
[0081] The calibration pattern is created by starting with a 2D real-valued pseudo-random noise pattern with a uniform distribution and random seed s in the spatial domain fn(x, y) where x and y are the Cartesian spatial domain co-ordinates. The noise pattern is then Fourier transformed to create a complex spatial frequency domain noise pattern FN(u, v) where u and v are the spatial frequency domain Cartesian co-ordinates. The calibration pattern is created to be used at a set of K different image scales mk where k is an index within the set of K image scales. For each image scale mk, a mid-range radial spatial frequency qk is selected which provides the best translation alignment accuracy at that scale.
[0082] The mid-range radial spatial frequency qk for each image scale is selected according to the constraint that low spatial frequencies in the captured image are relatively insensitive to small changes in translation alignment, and the constraint that the amplitudes of high spatial frequencies will be reduced by the lens modulation transfer function (MTF) of the camera used to capture an image of the calibration pattern. An additional constraint is that very low and very high spatial frequencies have low visibility for human observers, and therefore low and high spatial frequencies may be preferred in order to reduce the visibility of the calibration pattern, for example if the pattern is embedded into a content image as a watermark. An additional but optional constraint is establishing a non-redundant set of K - 1 reference sequential ratios rk = qk+1/qk which can be used at capture time to match the captured ring radii to the corresponding reference ring radii. An example set of ring radii q = {2qs, 4qs, 6qs, 8qs}, where qs is a constant base spatial frequency radius, has a non-redundant set of sequential ring ratios r = {2, 1.5, 1.33}.
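The non-redundancy condition on the sequential ring ratios described above can be sketched as follows. This is a minimal illustration with assumed function names (the patent does not prescribe any code), using the example radii q = {2qs, 4qs, 6qs, 8qs}:

```python
# Sketch (assumed names) of the sequential-ratio non-redundancy check:
# every adjacent-ring ratio q[k+1]/q[k] must be distinct, so a pair of
# adjacent captured rings identifies its position in the reference set.

def sequential_ratios(radii):
    """Return the K-1 sequential ratios q[k+1]/q[k]."""
    return [radii[k + 1] / radii[k] for k in range(len(radii) - 1)]

def is_non_redundant(ratios, tol=0.02):
    """True if every pair of ratios differs by more than tol."""
    return all(abs(a - b) > tol
               for i, a in enumerate(ratios)
               for b in ratios[i + 1:])

qs = 1.0  # base spatial frequency radius (units are arbitrary here)
radii = [2 * qs, 4 * qs, 6 * qs, 8 * qs]
ratios = sequential_ratios(radii)   # approximately [2, 1.5, 1.33]
```

With these radii the check passes, since 2, 1.5 and 4/3 are pairwise distinct by more than the 0.02 tolerance.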
[0083] The set of mid-range spatial frequencies is used to create a set of concentric rings in the spatial frequency domain of the calibration pattern. For each mid-range spatial frequency qk, a 2D ring mask image FMk(u, v) is created as a Gaussian function of the radial spatial frequency co-ordinate q = sqrt(u^2 + v^2), with ring radius qk and ring width ak:

FMk(u, v) = exp(-(q - qk)^2 / (2 ak^2)) [E0]

[0084] The spatial frequency domain noise pattern FN(u, v) is multiplied by the 2D ring mask image FMk(u, v) to create a spatial frequency domain noise ring image FRk(u, v):

FRk(u, v) = FN(u, v) FMk(u, v) [E1]

[0085] The set of spatial frequency domain noise ring images FRk(u, v) are generated and then summed to create a spatial frequency domain concentric ring pattern FR(u, v):

FR(u, v) = Σk wk FRk(u, v) [E2]

[0086] where wk is a weight factor for each ring. The spatial frequency domain concentric ring pattern FR(u, v) is then inverse Fourier transformed, the imaginary part is discarded, and the real part is normalised to the intensity range of the chart display, for example from 0 to 255, to create the spatial domain noise ring calibration pattern fr(x, y). If the calibration pattern is output (e.g. displayed or printed) using a device which can only create binary intensities (i.e. each pixel is either black or white), the calibration pattern can be binarised by setting a binary threshold at pixel value 128. If the calibration pattern needs to be sparse to reduce the human visibility of the pattern, the binary threshold should be set to a lower value, such as 90, for a sparsity of 15%.
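The generation pipeline of paragraphs [0081] to [0086] can be sketched in Python with numpy. The pattern size, ring radii, ring widths and weights below are illustrative assumptions, not the patent's parameters, and a standard Gaussian is assumed for the ring mask of [E0]:

```python
# Sketch of the noise ring calibration pattern generation: pseudo-random
# noise -> FFT -> Gaussian ring masks -> weighted sum -> inverse FFT ->
# normalisation (and optional binarisation). Names are ours, not the patent's.
import numpy as np

def make_ring_pattern(size=256, radii=(16, 32, 48, 64), widths=None,
                      weights=None, seed=0):
    """Spatial-domain noise ring calibration pattern fr(x, y)."""
    rng = np.random.default_rng(seed)
    fn = rng.uniform(size=(size, size))               # fn(x, y)
    FN = np.fft.fft2(fn)                              # FN(u, v)
    u = np.fft.fftfreq(size) * size                   # unshifted frequency axis
    q = np.hypot(*np.meshgrid(u, u))                  # radial spatial frequency
    widths = widths if widths is not None else [2.0] * len(radii)
    weights = weights if weights is not None else [1.0] * len(radii)
    FR = np.zeros_like(FN)
    for qk, ak, wk in zip(radii, widths, weights):
        FMk = np.exp(-(q - qk) ** 2 / (2 * ak ** 2))  # Gaussian ring mask [E0]
        FR += wk * FN * FMk                           # weighted noise ring
    fr = np.real(np.fft.ifft2(FR))                    # discard imaginary part
    fr -= fr.min()
    fr = np.round(255 * fr / fr.max())                # normalise to 0..255
    return fr

fr = make_ring_pattern()
binary = (fr >= 128).astype(np.uint8)                 # binarised variant
```

Lowering the binarisation threshold below 128 (for example to 90) yields the sparser variant mentioned above.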
[0087] The ring width ak for each ring is chosen according to the constraints that a small ring width will improve scale estimation accuracy but reduce translation estimation tolerance to imaging noise, and a large ring width will reduce scale estimation accuracy but increase translation estimation tolerance to imaging noise. In addition, ring overlap needs to be avoided, which further constrains the selection of ring radii qk and ring width ak. The weight factor wk for each ring is selected according to the relative importance of the corresponding imaging scale mk, for example if one of the image scales requires higher accuracy alignment then the weight can be increased relative to the other weights. Alternatively, the weight factor may be increased at higher spatial frequencies to compensate for a reduced lens MTF at those spatial frequencies.
[0088] Therefore, a test chart (including multiple calibration patterns) for calibrating an image projecting system is produced. The test chart includes a test pattern (calibration pattern) at each measurement position, the test pattern having a plurality of grey levels and a plurality of concentric rings in a frequency domain, each of the plurality of concentric rings having a different radius.
[0089] Fig. 3 shows an example noise ring calibration pattern fr(x, y) 330 and intermediate images used to generate the calibration pattern, including the spatial domain noise pattern fn(x, y) 310, an example 2D ring mask image FMk(u, v) for k = 4 (320) and the modulus of the spatial frequency domain concentric ring pattern |FR(u, v)| 340. Therefore, the calibration pattern has a number of concentric rings in a frequency domain, where at least two of the concentric rings have a different radius. The parameters used to create the example noise ring calibration pattern in Fig. 3 are shown in the following table:
Calibration pattern alignment [0090] The calibration pattern is used to create a reference calibration chart by repeating a reference noise ring calibration pattern (generated as described herein) over the projection surface in a tiling pattern. The reference pattern image may be stored in the projector memory. Alternatively, the parameters used to generate the reference pattern image may be stored in the projector memory, including the image size, number of rings, ring radii, ring widths, ring weight factors and ring noise seeds. The parameters may be used directly during alignment estimation or the parameters may be used to regenerate the reference pattern image.
[0091] The reference calibration chart is projected on to a projection surface in a projected image, and that projected image of the calibration pattern is then captured with a camera creating a captured calibration chart image g(x, y) with an unknown capture alignment. It will be understood that the device being used to perform the decoding of the calibration pattern may be separate to the device that captures the projected image of the calibration pattern. For example, the camera that captures the image may communicate the image to the device being used to perform the decoding of the calibration pattern. The calibration pattern is designed so that it is not visible (or at least not clearly visible) to the human eye when being projected along with the other images being projected. The calibration chart may be displayed on a liquid crystal display (LCD), printed using an inkjet printer, an electro-photographic printer or an offset printer, or fabricated using lithography. When the calibration chart is displayed it may be considered to be an output image of the computer system 1300.
[0092] A set of J measurement points (xj, yj) are selected in the captured calibration chart image, where j is an index within the set of J measurement points. A small window is extracted from the captured chart image centred at each measurement point (xj, yj) creating a set of captured pattern images gj(x, y), and the captured pattern images are used to estimate the alignment between the reference calibration pattern and the captured calibration pattern. The alignment includes decoding the captured calibration pattern in order to align the scale as part of the alignment of the captured pattern image compared with the reference pattern image and is used to adjust the projected image accordingly. Further, the alignment may also include decoding the captured calibration pattern in order to align the rotation as part of the alignment of the captured pattern image and is used to adjust the projected image accordingly. Further, the alignment may also include decoding the captured calibration pattern in order to align the translation as part of the alignment of the captured pattern image and is used to adjust the projected image accordingly.
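The window extraction at the J measurement points can be sketched as follows (function name, window size and point locations are assumed for illustration):

```python
# Sketch (assumed names) of extracting a small analysis window gj(x, y)
# centred at each measurement point (xj, yj) of the captured chart image.
import numpy as np

def extract_windows(chart, points, win=64):
    """Return a win x win window centred at each (x, y) point."""
    h = win // 2
    # numpy images are indexed [row, column], i.e. [y, x]
    return [chart[y - h:y + h, x - h:x + h] for x, y in points]

chart = np.zeros((512, 512))                    # stand-in captured chart
windows = extract_windows(chart, [(100, 100), (400, 300)])
```

A production implementation would also guard against measurement points too close to the image border.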
[0093] The first stage in alignment is finding a match between the spatial frequency domain rings in the captured image and the reference pattern. This is carried out by locating, within the captured pattern image, one or more rings from the concentric rings in the frequency domain, and subsequently matching the located ring(s) in the captured pattern image to a corresponding ring from the concentric rings within the calibration pattern.
[0094] The captured pattern image gj(x, y) is Fourier transformed to create a captured pattern spectrum Gj(u, v). To improve accuracy by reducing edge effects, the captured pattern image may have a windowing function applied, such as a Hann or a Hamming window, before performing the Fourier transform. To make low frequency peaks easier to detect, a low frequency mask can be applied to the spatial frequency domain captured image. An example low frequency mask can be generated using equation [E0] with the ring width parameter ak set to 5% of the captured image width and the ring radius parameter qk set to 0. It will be understood that other percentage values may be used.
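This preparation step can be sketched as follows. One plausible reading, assumed here, is that the low frequency mask is used to attenuate the strong DC region, by multiplying the spectrum modulus by one minus the Gaussian of [E0] with qk = 0; names are ours:

```python
# Sketch (assumed names): Hann-window the captured window, take the FFT,
# then suppress the DC region with a Gaussian low-frequency mask.
import numpy as np

def captured_spectrum(gj, mask_width_frac=0.05):
    """Return |Gj(u, v)| with Hann windowing and low-frequency masking."""
    n = gj.shape[0]
    hann = np.hanning(n)
    windowed = gj * np.outer(hann, hann)           # separable 2D Hann window
    G = np.fft.fftshift(np.fft.fft2(windowed))     # DC moved to the centre
    u = np.arange(n) - n // 2
    q = np.hypot(*np.meshgrid(u, u))               # radial spatial frequency
    ak = mask_width_frac * n                       # 5% of the image width
    low_mask = np.exp(-q ** 2 / (2 * ak ** 2))     # [E0] with qk = 0
    return np.abs(G) * (1.0 - low_mask)            # attenuate low frequencies

rng = np.random.default_rng(1)
spectrum = captured_spectrum(rng.uniform(size=(128, 128)))
```

The factor (1 - low_mask) is exactly zero at DC, so the dominant mean-value peak cannot mask nearby low-frequency ring peaks.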
[0095] A 1D angular projection GAj(q) is created using a discrete approximation to the angular projection integral of the modulus of the captured pattern spectrum:

GAj(q) = ∫(θ = 0 to 2π) |Gj(q cos θ, q sin θ)| dθ [E3]

[0096] where θ = atan2(v, u) is the polar co-ordinate in the spatial frequency domain and atan2 is the two-argument variant of the arctangent. The modulus is used because it is invariant to translation alignment. The input to the angular projection is interpolated to compensate for the fact that a regular grid in Cartesian axes does not create an evenly spaced set of samples under radial slicing at different angles θ. A 1D peak finding algorithm is used to find K peaks, then any peaks which are below a predetermined threshold are discarded, and the remaining L peak positions are used to estimate L captured image radial spatial frequencies qc,l. The L captured image radial spatial frequencies qc,l should correspond to the subset of the K radial spatial frequencies from the reference calibration pattern which can be resolved by the camera at the unknown capture alignment, where L ≤ K.
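A discrete approximation to the angular projection of [E3] and a simple peak picker can be sketched as below. Nearest-neighbour sampling stands in for the interpolation described above, and all names are assumed:

```python
# Sketch (assumed names) of the 1D angular projection GA(q) of the
# spectrum modulus and peak picking over radius, approximating [E3].
import numpy as np

def angular_projection(mod, n_angles=360):
    """GA(q): sum of |G| over angle at each integer radius q."""
    n = mod.shape[0]
    c = n // 2
    theta = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    radii = np.arange(1, c)
    ga = np.zeros(len(radii))
    for i, q in enumerate(radii):
        # nearest-neighbour sampling stands in for proper interpolation
        uu = np.clip(np.round(c + q * np.cos(theta)).astype(int), 0, n - 1)
        vv = np.clip(np.round(c + q * np.sin(theta)).astype(int), 0, n - 1)
        ga[i] = mod[vv, uu].sum()
    return radii, ga

def find_ring_peaks(ga, radii, count=4):
    """Radii of the `count` largest local maxima of ga, sorted."""
    interior = (ga[1:-1] > ga[:-2]) & (ga[1:-1] > ga[2:])
    idx = np.where(interior)[0] + 1
    best = idx[np.argsort(ga[idx])[::-1][:count]]
    return np.sort(radii[best])

# Synthetic noise-free spectrum modulus with two rings at radii 20 and 40
n = 128
u = np.arange(n) - n // 2
q = np.hypot(*np.meshgrid(u, u))
mod = np.exp(-(q - 20) ** 2 / 8) + np.exp(-(q - 40) ** 2 / 8)
radii, ga = angular_projection(mod)
peaks = find_ring_peaks(ga, radii, count=2)
```

A production detector would additionally discard peaks below a threshold and refine peak positions by interpolation, as paragraph [00101] does with a chirp-z transform.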
[0097] If at least 2 peaks are detected (L ≥ 2) then the captured ring radii can be matched to the reference ring radii using the sequential ratios rk established during calibration pattern generation. It has been assumed that all reference rings within a range of radial spatial frequencies qm ≤ q ≤ qx are detected in the captured image, where qm and qx are respectively the lowest and highest radial spatial frequencies of the L detected rings. The L - 1 captured sequential ratios rc,l = qc,l+1/qc,l are then calculated. The captured sequential ratios rc,l are then compared with the reference sequential ratios rk by taking the difference, and are considered matched if the difference is less than a pre-determined threshold. If a captured ratio matches a reference ratio then the 2 captured rings that were used to estimate the ratio are matched to the corresponding 2 reference rings. Because it has been assumed that the rings are detected across a sequential range, this then identifies all L captured rings.
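The sequential ratio matching just described can be sketched as follows (function names and the 0.02 tolerance are illustrative assumptions). Note that the ratios are scale invariant, so the match succeeds even though the captured radii are scaled:

```python
# Sketch (assumed names) of matching captured ring radii to reference
# radii via sequential ratios. One matched ratio anchors the whole run
# of detected rings, since detection is assumed sequential in radius.
def match_rings(captured, reference, tol=0.02):
    """Return {captured index: reference index} for matched rings."""
    r_cap = [captured[i + 1] / captured[i] for i in range(len(captured) - 1)]
    r_ref = [reference[k + 1] / reference[k] for k in range(len(reference) - 1)]
    for i, rc in enumerate(r_cap):
        for k, rr in enumerate(r_ref):
            if abs(rc - rr) < tol:
                # rings are assumed detected over a sequential range, so
                # one matched ratio aligns every captured ring at once
                return {i + j: k + j for j in range(-i, len(captured) - i)}
    return {}

reference = [20.0, 40.0, 60.0, 80.0]
captured = [q / 1.05 for q in reference[1:]]   # scaled, lowest ring lost
match = match_rings(captured, reference)
```

Here the ratio 1.5 between the first two captured rings matches the second reference ratio, which identifies all three captured rings even though the lowest-frequency reference ring was not resolved.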
[0098] The scale alignment of the captured image relative to the reference image can be estimated by taking the ratio of corresponding ring radii mc,k = qc,l/qk. The ring ratio matching can be made more robust by combining multiple ratio comparisons, for example by performing a discrete correlation between the reference ratios and the captured ratios and searching for the ratio index offset which produces the strongest correlation. The scale estimation can be made more accurate and robust by combining scale estimates from multiple corresponding reference and captured ring radii, for example using an average or median operation over the scale estimates. The determined scale may then be used to decode the calibration pattern in order to adjust the projected image.
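Combining per-ring scale estimates with a median can be sketched as below. Following paragraph [00102], where a spatial-domain scaling of 1.05 shrinks the frequency-domain peak positions by 1/1.05, this sketch takes the inverse frequency-radius ratio qk/qc,l to recover the spatial-domain scale; names are assumed:

```python
# Sketch (assumed names) of a robust spatial-domain scale estimate from
# matched ring pairs: the frequency-radius ratio is inverted because a
# spatial scaling s scales frequency-domain radii by 1/s.
import statistics

def spatial_scale(captured_radii, reference_radii, match):
    """Median spatial-domain scale over matched ring pairs."""
    estimates = [reference_radii[k] / captured_radii[i]
                 for i, k in match.items()]
    return statistics.median(estimates)

reference = [20.0, 40.0, 60.0, 80.0]
captured = [q / 1.05 for q in reference[1:]]      # frequency radii shrink
scale = spatial_scale(captured, reference, {0: 1, 1: 2, 2: 3})
```

The median makes the combined estimate tolerant to one badly localised peak among the matched rings.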
[0099] Fig. 4 shows an example of calibration pattern ring matching and scale alignment including intermediate images and results. The reference pattern image 410 and Fourier transform of the reference pattern image 420 are shown, where the calibration pattern parameters are the same as for Fig. 3. A simulated captured pattern image 430 and Fourier transform of the captured pattern image 440 are shown, where the capture simulation has the following parameters:
[00100] Additive imaging noise and an alignment change was included in the simulation. The alignment change was performed using sinc interpolation. A camera lens MTF was not included in the simulation.
[00101] The angular projection integral is shown in 450 for the reference pattern image 451 and the captured pattern image 452. Four peaks are clearly visible in the angular projection. The spatial domain scaling of 1.05 results in a scaling of 1/1.05 in the spatial frequency domain, which has the effect of scaling the captured peak positions by 1/1.05 relative to the reference peak positions. The peak positions were detected by searching for 4 maximum values in the angular projection, upsampling each peak by interpolating the values around each peak using a chirp-z transform, and storing the upsampled peak position as a real-valued number. The sequential ratios were then calculated for the captured pattern image and could then be compared with the reference pattern image for matching corresponding rings. The sequential ratios for this example were:
[00102] Any one of these 3 ratios could be used for ring matching with a ratio match threshold of 0.02. In this example, the radius of ring 4 was used to obtain a scale alignment estimate of 1.05, which is accurate to within 0.01 scale factor.
[00103] The rotation alignment of the captured pattern image may be estimated by performing a 1D cross-correlation between a captured ring with radius qc,l and the corresponding matched reference ring qk. An angular slice of the modulus of the spatial frequency domain is calculated for the reference ring from the reference pattern image and the corresponding matched captured ring from the captured pattern image at the respective constant ring radii qk and qc,l to create a reference angular slice FSk(θ) and a captured angular slice GSl(θ):
FSk(θ) = |FR(qk cos θ, qk sin θ)| [E4]

GSl(θ) = |Gj(qc,l cos θ, qc,l sin θ)| [E5]

[00104] The modulus is used because it is invariant to translation alignment. Both angular slices are interpolated to compensate for the fact that a regular grid in Cartesian axes does not create an evenly spaced set of samples under angular slicing. The reference angular slice is also interpolated so that it is the same length as the captured angular slice, which compensates for the scale alignment, creating an interpolated reference angular slice F'Sk(θ). A 1D cross-correlation is performed between the interpolated reference angular slice F'Sk(θ) and a captured angular slice GSl(θ), creating a 1D correlation image. Alternatively, the slice image can be created from multiple pixels over a radius band which relates to the width of the rings. For example, the slice band width can be 6 times the corresponding reference ring width ak. This width may be adjusted to cover the likely change in ring width according to a pre-determined scale tolerance of the alignment process. In this alternative, 2D correlation is applied to the angular slices instead of 1D correlation; however, the radial dimension is small compared with the angular dimension, which means that the 2D angular slice correlation is still significantly faster than a full 2D correlation of the reference and captured pattern images.
[00105] Peak finding is used to find the strongest peak in the 1D correlation image. The position of the peak relative to the centre of the correlation image gives an estimate of the rotation alignment of the captured pattern relative to the reference pattern, assuming that the centre of the correlation image corresponds to zero offset between the two functions input to the correlation. To increase the accuracy of the rotation alignment estimate, the rotation estimates from multiple corresponding matched pairs of reference and captured rings can be combined, for example using an average or median of the rotation estimates.
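The circular 1D cross-correlation and peak-to-angle conversion described above can be sketched as follows, assuming two equal-length angular slices; the FFT-based correlation and the name `rotation_from_slices` are choices of this sketch, not of the specification.

```python
import numpy as np

def rotation_from_slices(ref_slice, cap_slice):
    """Estimate the rotation between two equal-length angular slices by
    1D circular cross-correlation, returning the rotation in radians."""
    n = len(ref_slice)
    # Cross-correlation via the FFT; conjugating the reference spectrum
    # turns the product into a correlation rather than a convolution.
    corr = np.fft.ifft(np.fft.fft(cap_slice) * np.conj(np.fft.fft(ref_slice))).real
    shift = int(np.argmax(corr))
    if shift > n // 2:
        shift -= n  # map the peak index to a signed offset around zero
    # One sample of the slice corresponds to 2*pi/n radians of rotation.
    return 2.0 * np.pi * shift / n
```

A slice circularly shifted by k samples should yield a rotation estimate of exactly 2πk/n at integer precision.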
[00106] Cross-correlation of an angular slice can also be used as an alternative method of matching a reference ring to a captured ring, instead of using the sequential ratio matching method. In the cross-correlation method for ring matching, a 1D cross-correlation is applied to a candidate interpolated reference ring and a selected captured ring. If the strongest peak in the 1D correlation image is above a pre-determined threshold, then the candidate reference ring is matched to the selected captured ring. The matching can be used to estimate the scale of the captured pattern image by taking the ratio of corresponding ring radii m_C,k = q_C,i/q_k, and the position of the correlation peak in the 1D correlation image can be used to estimate the rotation of the captured pattern image. If the candidate reference ring does not match, then the next reference ring from the K rings of the reference calibration pattern is selected as a candidate for 1D cross-correlation with the selected captured ring. If all of the reference rings fail to match, then the next detected ring from the L detected rings of the captured pattern image is selected. The initial selection of the captured ring can be improved by selecting the rings in order of highest to lowest peak strength in the angular projection integral of the modulus of the captured pattern spectrum. This selection order increases the accuracy and speed of the matching process because stronger rings are more likely to give a correct match.
[00107] Fig. 5 shows an example of calibration pattern rotation alignment including intermediate images used to estimate the alignment, for the same calibration pattern design as Fig. 3 and simulation parameters as Fig. 4. The reference angular slice for ring 4 is shown in 510 and the corresponding captured angular slice is shown in 520. In this example the angular slices have a radial width of 6 pixels. The result of the cross-correlation between the modulus of the angular slices is shown in 530. The cross-correlation performed was a phase correlation, where the Fourier modulus was set to 1 for both angular slices during correlation. Phase correlation increases the tolerance to amplitude variations during image capture. The angular slices were also padded by 20 pixels with a pixel value of 0 in the radial direction, to improve peak finding accuracy by reducing edge effects during peak interpolation. A clear peak is visible in the cross-correlation image. The position along the angular spatial frequency axis of the peak relative to the centre of the correlation image can be used to estimate the rotation alignment. In this example, the peak position was used to obtain a rotation alignment estimate of 14.9°, which is accurate to within 0.1°.
[00108] The rotation alignment estimate and the scale alignment estimate are used to apply an interpolated scale and rotation transform to the reference pattern image, creating an intermediate reference pattern image. For high accuracy the interpolation should be performed using sinc interpolation. Alternatively, for faster interpolation with reduced accuracy, other methods could be used including bilinear or cubic interpolation. The intermediate reference pattern image is cross-correlated with the captured pattern image, then 2D peak finding is used to detect the strongest peak in the correlation image, and the position of the peak in the correlation image is used to estimate the translation alignment. This correlation uses information from all of the rings with significant information in the intermediate reference pattern image, and all of the rings with significant information in the captured pattern image, resulting in high accuracy (sub-pixel) translation alignment for captured images over a wide range of image scales.
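The 2D correlation and peak-to-translation step above can be sketched with a phase correlation, one of the correlation variants the text mentions; this is an integer-precision illustration under the assumption of periodic (wrap-around) image content, without the sub-pixel refinement described elsewhere.

```python
import numpy as np

def phase_correlation_shift(ref, cap):
    """Estimate the integer translation of `cap` relative to `ref` by 2D
    phase correlation (Fourier modulus normalised to 1)."""
    prod = np.fft.fft2(cap) * np.conj(np.fft.fft2(ref))
    # Whiten: keep only the phase, which encodes the translation.
    prod /= np.maximum(np.abs(prod), 1e-12)
    corr = np.fft.ifft2(prod).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:
        dy -= h  # wrap the peak index to a signed offset
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

For a pure circular shift the correlation surface is a single delta, so the recovered offset is exact.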
[00109] The alignment accuracy of the calibration pattern is highest when the pattern has midrange spatial frequencies in the captured image. Because the calibration pattern has multiple rings, the optimum alignment accuracy can be achieved at multiple image scales. The scale and rotation can be detected quickly from captured images at multiple scales using 1D operations in the frequency domain. The translation can then be estimated accurately using a 2D cross-correlation from captured images at multiple scales. The method requires only a single captured image to estimate the alignment, which means the measurement is fast to capture and it can be used to estimate the depth of moving objects.
[00110] Fig. 2 is a diagram of a projector system 210 including a projection lens 220, a camera 230 and a curved projection surface 240. The projector is projecting an image on to the projection surface. Normally a projector will project an image from a planar display inside the projector, such as a liquid crystal on silicon (LCOS) display, on to a flat projection surface such as a wall or a projection screen. In this case, the projection appears flat and rectangular to the viewer. However, if the projection surface is curved, then the projection will appear distorted to the viewer.
[00111] The projection distortion can be compensated by measuring the shape of the projection surface and using the shape measurement to adjust the projected image. Provided the projector system has been previously calibrated, this can be accomplished by projecting a projection surface calibration chart onto the projection surface and measuring the alignment within regions of the calibration chart. In a factory projector system calibration operation, or a user projector system calibration operation, the imaging geometry of the projector 210 and the camera 230 are calibrated according to a pinhole imaging model using a flat projector system calibration chart to create a set of intrinsic parameters for the projector and the camera, and also a set of extrinsic parameters for the projector relative to the camera. The projector system calibration chart can be a well-known pattern such as a grid of black squares. The intrinsic parameters describe the mapping between 3D (three-dimensional) points on any given plane in object space to 2D pixel positions on the internal projector LCOS display or the camera sensor image. The extrinsic parameters describe the relative position and orientation between the projector and the camera. If the projector lens or camera lens has significant barrel or pincushion distortion, then the distortion of each lens can also be calibrated as additional intrinsic parameters for the projector and camera calibration.
[00112] The projector system calibration process starts by calibrating the camera by capturing multiple images of a flat printed camera calibration chart with the chart at different angles and positions within the camera field of view. The camera calibration parameters are calculated from the corner features in the captured images and the parameters are stored in the memory of the projector. Projector calibration is then performed by capturing multiple images of a projection of a projector calibration chart on to a flat printed camera calibration chart with the chart at different angles and positions within the camera field of view. The corner features on the printed chart and the corner features on the projected chart are detected in the captured images and the correspondences are used together with the camera calibration to estimate the intrinsic parameters of the projector and the extrinsic parameters that describe the relative position and orientation between the projector and the camera. The projector system calibration intrinsic and extrinsic parameters are stored in the memory of the projector. The projector calibration chart can consist of a grid of squares for which the positions of each projected corner feature in the camera image can be detected. Alternatively, the projector calibration chart can be a sequence of Gray codes, from which the position of each projected pixel in the camera image can be estimated. Alternatively, the projector calibration chart can be a sequence of phase-shifted transverse sinusoidal patterns, from which the position of each projected pixel in the camera image can be estimated.
[00113] When the user of the projector system projects a content image onto a curved projection surface, then the projection surface calibration operation 600 begins as shown in Fig. 6. At step 610, the projector 210 projects a projection surface calibration chart on to the projection surface 240. The projection surface calibration chart is a tiling of a reference noise ring calibration pattern generated using calibration pattern parameters that are stored in the projector memory, where the parameters include multiple associated reference ring radii, widths, noise seeds, angular slices, and sequential ratios. At step 620 the camera 230 captures an image of the calibration chart. At step 630 the processor on the projector estimates the distortion in the projected chart caused by the shape and position of the projection surface. At step 640 the projector compensates the projected content image for the distortion caused by the projection surface and projects the compensated content image on to the projection surface. For example, an inverse warp is applied to the content image using the estimated projection surface shape and position, so that after projection on to the curved projection surface, the projected content appears undistorted to the user of the projector.
[00114] Alternatively, the estimated projection distortion includes an estimation of the projector lens MTF, and the projected image is compensated for the MTF. For example, the MTF can be reduced in some projection surface regions due to defocus if the projection surface is curved, and the estimate of the MTF can be used to sharpen the content image in the defocused regions to compensate for the defocus blur and make the image appear to be uniformly sharp across the projection. Alternatively, the image can be sharpened to increase the MTF in the corners so that it matches the MTF in the projected image centre. Alternatively, in the sharpest regions where the MTF is highest the content image can be slightly blurred to reduce the sharpness to match the most blurry regions so that the entire projected image has uniform sharpness. Alternatively, if the projected image is being stitched with a second projection, then the projection lens MTF can be estimated for both projectors in the overlap projection region, and the content image of the first projector can be sharpened or blurred to match the sharpness of the second projector so that the overlap region appears more consistent. At step 699 the projection surface calibration operation ends.
[00115] Fig. 7 shows a method 700 for the distortion estimation step 630. At step 710 a measurement point in the camera chart image is selected from a list of measurement points. For example the measurement points may be arranged in a grid. At step 720 a window is extracted from the camera chart image centred on the selected measurement point to create a camera pattern image. The window size should be small to increase the resolution of the distortion estimation, but the window size should be large enough to ensure sufficient alignment accuracy. Example window sizes are 64x64 pixels or 128x128 pixels. At step 730 the alignment is estimated using the captured pattern image. At step 740 a decision is made whether to measure more points. The process flow returns to step 710 if there are more points, and the process 700 ends at 799 if there are no more points.
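The window extraction of step 720 can be sketched as follows; the clamping behaviour at the image border is an assumption of this sketch, as the text does not specify how edge measurement points are handled.

```python
import numpy as np

def extract_window(image, centre_yx, size=64):
    """Extract a size x size window centred on a measurement point,
    clamping the window so it stays inside the image bounds."""
    y, x = centre_yx
    half = size // 2
    y0 = int(np.clip(y - half, 0, image.shape[0] - size))
    x0 = int(np.clip(x - half, 0, image.shape[1] - size))
    return image[y0:y0 + size, x0:x0 + size]
```

With the example window sizes from the text (64x64 or 128x128 pixels), the returned view is passed directly to the alignment estimation of step 730.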
[00116] Fig. 8 shows a method 800 for the alignment estimation step 730. At step 810 a Fourier transform is applied to the captured pattern image creating a captured pattern spectrum. At step 820 a 1D angular projection is calculated from an interpolated captured pattern spectrum according to Equation [E3] using interpolation. Alternatively, the interpolation could be performed more efficiently by creating a 1D buffer image and a 1D counter image which is scaled to be significantly wider (for example, 10 times wider) than the circumference of the largest ring which could fit inside the captured pattern spectrum, binning pixel values into the nearest neighbour in the 1D buffer image and incrementing the 1D counter image, then dividing the 1D buffer image by the 1D counter image to create an interpolated 1D angular projection.
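The buffer/counter binning alternative can be sketched as below. This sketch bins by radius with a simple oversampling factor on the radial axis rather than scaling by the largest ring circumference as the text suggests, so the bin sizing is an assumption of the illustration.

```python
import numpy as np

def angular_projection(spectrum_mod, oversample=10):
    """1D angular projection of a centred spectrum modulus: accumulate
    pixel values into nearest-neighbour radius bins (the buffer), count
    the contributions (the counter), and divide to normalise."""
    h, w = spectrum_mod.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - cy, xx - cx)
    max_r = min(h, w) / 2.0
    n_bins = int(oversample * max_r)
    idx = np.minimum((r / max_r * n_bins).astype(int), n_bins - 1)
    buf = np.bincount(idx.ravel(), weights=spectrum_mod.ravel(), minlength=n_bins)
    cnt = np.bincount(idx.ravel(), minlength=n_bins)
    return buf / np.maximum(cnt, 1)  # avoid division by zero in empty bins
```

A synthetic ring in the spectrum modulus then produces a peak in the projection at the bin corresponding to the ring radius, which is what the ring detection of step 830 operates on.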
[00117] If the reference pattern image has reference rings which are higher spatial frequency than the camera sensor sampling spatial frequency, then the reference rings will appear aliased in the captured pattern spectrum. To compensate for aliasing, the captured pattern spectrum can be padded by extending pixel values from opposite sides of the spectrum. For example, to compensate for undersampling by a factor of 2, the captured pattern spectrum can be extended by a factor of 2. The angular projection can then be performed on the extended captured pattern spectrum. Alternatively, an optical low pass filter can be used in the camera to reduce the effect of aliasing on the captured pattern image.
[00118] At step 830 rings in the spatial frequency domain are detected by applying peak detection to the 1D angular projection, and peaks exceeding a pre-determined detection threshold are selected, or the top L_N peaks are selected where L_N is a pre-determined number. The accuracy of peak detection can be improved by detecting a set of maximum values and then interpolating the values close to the maximum values, fitting a parabolic function and taking the maximum of the parabolic fit as the peak value and position. The radial spatial frequencies of the selected peaks are stored in memory forming a set of L captured ring radii. At step 840 at least one ring is matched between the reference pattern and the captured pattern using the reference ring radii associated with the reference pattern image and the captured ring radii associated with the captured pattern image.
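The parabolic refinement of a detected peak can be written in closed form from three samples; the following is a standard three-point parabolic fit, shown here as an illustration of the refinement step rather than as the patented implementation.

```python
def parabolic_peak(values, i):
    """Refine an integer peak index i by fitting a parabola through the
    sample and its two neighbours; returns (position, height)."""
    y0, y1, y2 = values[i - 1], values[i], values[i + 1]
    denom = y0 - 2.0 * y1 + y2
    # A flat neighbourhood gives no refinement.
    offset = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom
    height = y1 - 0.25 * (y0 - y2) * offset
    return i + offset, height
```

Sampling an exact parabola recovers its vertex exactly, so the refinement is limited only by how well the true peak shape is locally parabolic.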
[00119] At step 850 the scale alignment of the captured pattern image relative to the reference pattern image is estimated by taking the ratio of a matched captured ring radius and the corresponding matched reference ring radius. As an optional step, the projection lens MTF can be estimated at each radial spatial frequency corresponding to a matched captured ring, by dividing the angular projection peak strength of each matched captured ring by the angular projection peak strength of each associated reference ring. This assumes that the projection lens MTF is approximately isotropic with angular spatial frequency. At step 860 the rotation alignment of the captured pattern image relative to the reference pattern image is estimated by cross-correlating a matched reference ring angular slice of the reference pattern spectrum with the corresponding matched captured ring angular slice of the captured pattern spectrum.
[00120] At step 870 the translation alignment is estimated by applying the estimated scale and rotation alignment to the reference pattern image creating an intermediate reference pattern image, applying cross-correlation to the intermediate reference pattern image and the captured pattern image, detecting a peak in the correlation image, and using the peak position to estimate the translation alignment of the captured pattern image relative to the reference pattern image. Alternatively, the inverse of the estimated scale and rotation alignment can be applied to the captured pattern image, in which case the estimated translation alignment would need to be transformed by the estimated scale and rotation alignment so that the translation alignment is in the captured image space. Alternatively, to improve robustness to variations in captured pattern image intensity, a phase correlation is performed, in which the Fourier modulus of the input images to the correlation is set to 1. Alternatively, to improve robustness to defocus in the captured pattern image, a blur invariant phase correlation is performed, in which the Fourier phase is doubled during phase correlation, and the estimated translation alignment is halved to compensate for the phase doubling. To improve accuracy of the correlation, the input images to the correlation can be windowed, for example using a Hann or Hamming window to reduce edge effects, before performing the correlation. Alternatively, to improve the speed of translation alignment estimation, the cross-correlation between the reference pattern image and the captured pattern image can be performed in the spatial frequency domain by only taking the product at pixels which lie within the widths of the pattern rings. At step 899 the alignment estimation operation ends.
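The blur invariant phase correlation variant described above doubles the Fourier phase, which cancels the sign changes a symmetric defocus blur introduces into the transfer function, and halves the recovered shift to compensate. A minimal integer-precision sketch, assuming periodic images:

```python
import numpy as np

def blur_invariant_shift(ref, cap):
    """Phase correlation with the Fourier phase doubled; the recovered
    translation is halved to undo the doubling."""
    prod = np.fft.fft2(cap) * np.conj(np.fft.fft2(ref))
    phase = prod / np.maximum(np.abs(prod), 1e-12)  # unit-modulus phase
    corr = np.fft.ifft2(phase ** 2).real            # squaring doubles the phase
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy / 2.0, dx / 2.0  # halve to compensate for the phase doubling
```

Because the phase doubling maps a shift t to a correlation peak at 2t, the usable shift range is halved, which is the trade-off for the blur invariance.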
[00121] Fig. 9 shows a method 900 for the ring matching step 840. At step 910 captured sequential ratios r_C,i = q_C,i+1/q_C,i (920) are calculated for the set of L rings detected in step 830.
In step 930 the rings are matched by comparing the captured sequential ratios 920 with the reference sequential ratios 921 associated with the reference calibration pattern. Each captured sequential ratio is compared with all of the reference ring ratios, and the rings corresponding to the sequential ratios are considered to be matched between the reference rings and the captured rings if the difference between the ratios is less than a pre-determined threshold. Alternatively, to improve the speed of matching, the matching process can stop after the first successful match is found. Alternatively, to improve the reliability of matching, the rings can be considered matched after multiple adjacent sequential ratios have a difference less than a pre-determined threshold, or the weighted sum of the differences of multiple adjacent sequential ratios can be compared with a pre-determined threshold, where the weights are determined using the corresponding ring peak strengths estimated during the ring detection step 830.
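The basic sequential-ratio comparison of step 930 can be sketched as below, using the 0.02 ratio tolerance quoted earlier as the example threshold; the all-pairs comparison without the early-stop or weighted-sum refinements is an assumption of this sketch.

```python
def match_by_sequential_ratios(ref_radii, cap_radii, threshold=0.02):
    """Match rings by comparing sequential radius ratios.  Returns a
    list of (ref_index, cap_index) pairs whose sequential ratios agree
    to within `threshold`."""
    ref_ratios = [ref_radii[i + 1] / ref_radii[i] for i in range(len(ref_radii) - 1)]
    cap_ratios = [cap_radii[i + 1] / cap_radii[i] for i in range(len(cap_radii) - 1)]
    matches = []
    for i, rr in enumerate(ref_ratios):
        for j, cr in enumerate(cap_ratios):
            if abs(rr - cr) < threshold:
                matches.append((i, j))
    return matches
```

Once a pair is matched, the scale estimate follows directly as the ratio of the corresponding captured and reference radii.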
[00122] Optionally, after one or more captured rings and the corresponding reference rings are matched, the remaining detected captured rings can also be matched using sequential ordering by assuming that all of the detected captured rings are in the same radial sequence as the reference rings. This match assumption can be used when the captured pattern image is in reasonable focus. If the captured pattern image has strong defocus, then there may be a circular zero in the camera lens MTF and if this circular zero coincides with a reference pattern ring, then the corresponding captured pattern ring may be very weak and detection of that ring may fail. In this case, sequential ratio matching and sequential order matching may become unreliable.
[00123] Alternatively, to improve the reliability of matching, during reference pattern creation, reference ring radii can be selected such that all ratios from all pairs of reference ring radii are unique, and a set of non-sequential reference ratios can be stored in the projector memory as parameters associated with the reference pattern. In this case, non-sequential captured ratios can be calculated from all pairs of captured ring radii, then all non-sequential reference ratios can be pairwise compared with all non-sequential captured ratios and a ring match identified if the difference between the ratios is less than a pre-determined threshold. Alternatively, a predetermined threshold for matching can be applied to the ratio of the reference ring ratio over the captured ring ratio. A list of matched captured ring radii and the corresponding reference ring radii are stored in memory where the ring radii are associated with the captured pattern image and the reference pattern image. At step 999 the ring matching step ends.
[00124] Alternatively, the ring matching step 840 can be performed by applying a logarithmic transform of the radial spatial frequency co-ordinate to the reference angular projection and the captured angular projection to produce a logarithmic angular projection of the calibration pattern and a captured logarithmic angular projection, respectively. This operation is followed by a cross-correlation of the logarithmic angular projection of the calibration pattern with the captured logarithmic angular projection to create a radial log correlation image. Peak finding is applied to the radial log correlation image, and the highest peak is compared against a predetermined threshold. If the threshold is reached, then the position of the highest peak is used to find an offset between the reference angular projection and the captured angular projection. This offset can be used to estimate the scale alignment. The scale alignment can be applied to the reference angular projection to create an intermediate reference angular projection, and peak finding applied to the intermediate reference angular projection and the captured angular projection. Peak positions which are closer than a pre-determined threshold are recorded as matched and the corresponding captured rings and associated reference rings are recorded in the projector memory in a list of matched rings.
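The logarithmic-radius correlation above exploits the fact that a scale change becomes a translation on a log axis. The sketch below resamples two angular projections onto a log-radius grid and cross-correlates them; the sampling density and the circular FFT correlation are choices of this illustration, not of the specification.

```python
import numpy as np

def scale_from_log_projection(ref_proj, cap_proj, n_log=512):
    """Estimate the scale between two 1D angular projections by
    resampling on a logarithmic radius axis and cross-correlating."""
    n = len(ref_proj)
    step = np.log(n - 1) / (n_log - 1)            # log spacing per bin
    log_r = np.exp(np.arange(n_log) * step)       # radii from 1 to n-1
    ref_log = np.interp(log_r, np.arange(n), ref_proj)
    cap_log = np.interp(log_r, np.arange(n), cap_proj)
    corr = np.fft.ifft(np.fft.fft(cap_log) * np.conj(np.fft.fft(ref_log))).real
    shift = int(np.argmax(corr))
    if shift > n_log // 2:
        shift -= n_log
    return np.exp(shift * step)  # one bin offset = one fixed scale step
```

The resolution of the scale estimate is set by the log-bin spacing, here roughly a 1% scale step per bin for a 256-sample projection.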
[00125] Fig. 11 shows a method 1100 for the rotation estimation step 860. At step 1110 a matched captured ring is selected. The matched captured ring with the highest peak strength from ring detection step 830 is selected, which improves accuracy because the selected ring has the highest signal to noise ratio. Alternatively, the matched captured ring with the largest ring radius is selected, which improves accuracy because the large ring circumference is more sensitive to rotation alignment and more robust to noise. At step 1120 an angular slice operation in the spatial frequency domain is performed on a selected matched captured ring and the associated reference ring according to Equations [E4] and [E5] to create a captured angular slice and a reference angular slice. The angular slice for the associated reference ring may instead be retrieved from a memory associated with the reference pattern image. The reference angular slice is also interpolated so that it is the same length as the captured angular slice, which compensates for the scale alignment, creating an interpolated reference angular slice. At step 1130 the captured angular slice and the interpolated reference angular slice are cross-correlated, to create a slice correlation image. If the slices are 1D, then a 1D cross-correlation is performed. If the slices are 2D, for example if the slices have a width of 6a_k, then a 2D cross-correlation is performed. Alternatively, to improve robustness to variations in captured pattern image intensity, a phase correlation is performed, in which the Fourier modulus of the input images to the correlation is set to 1.
[00126] At step 1140 peak finding is performed on the slice correlation image resulting in a slice correlation peak position, which is stored in memory in a list of slice correlation peak positions. The accuracy of peak finding can be improved by searching for the maximum pixel intensity in the slice correlation image, extracting a small window around the maximum peak intensity, performing upsampling using sinc interpolation, and finding the maximum pixel intensity within the upsampled window, and then adjusting the peak position back into the co-ordinates of the slice correlation image by accounting for the small window position and upsampling factor. At step 1150, a decision is made whether to select additional matched captured rings for rotation alignment estimation. If additional unmeasured matched captured rings are available, they can be used to further improve the signal to noise ratio of the rotation alignment at the cost of extra computation time. If additional rings are to be selected, control returns to step 1110. Otherwise, control continues to step 1160 where the rotation alignment is calculated. The rotation alignment is calculated using the list of slice correlation peak positions. The rotation alignment estimate for each peak position is calculated using the relative offset from the peak to the centre of the slice correlation image. If there are multiple slice correlation peak positions, then the accuracy can be improved by calculating a weighted average of the peak positions, where the weights are determined using the corresponding ring peak strengths estimated during the ring detection step 830, and using the weighted average peak position to estimate the rotation alignment. Alternatively, the peak position weights are determined using the corresponding slice correlation peak strengths.
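The weighted combination of step 1160 reduces to a strength-weighted mean of the peak offsets from the correlation centre; the helper below is an illustrative sketch of that calculation, with the degree conversion assuming the slice has n_samples over a full turn.

```python
def combine_rotation_estimates(peak_positions, centre, weights, n_samples):
    """Combine per-ring slice-correlation peak positions into one
    rotation estimate (in degrees) using a weighted average of the
    offsets from the correlation centre."""
    offsets = [p - centre for p in peak_positions]
    mean_offset = sum(w * o for w, o in zip(weights, offsets)) / sum(weights)
    # One slice sample spans 360/n_samples degrees of rotation.
    return 360.0 * mean_offset / n_samples
```

Weighting by ring peak strength, as the text suggests, lets strong high-SNR rings dominate the combined estimate.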
[00127] Fig. 10 shows a method 1000 for the ring matching step 840 that is used in an alternative embodiment. Method 1000 can be used to improve the reliability of ring matching at the cost of increased processing time per ring tested for a match. Method 1000 can also be used when the reference pattern image is associated with more than 2 rings, and the rings do not have unique sequential ratios or unique non-sequential ratios. Method 1000 begins with step 1010 in which a captured ring and a reference ring are selected for match testing, where the captured ring and the reference ring have not yet been tested together. The captured ring is selected from the set of L rings detected in step 830. A reference ring is selected from the K rings associated with the reference pattern image. The captured ring may be selected by giving priority to captured rings with high ring peak strengths estimated during the ring detection step 830, which improves accuracy because the selected ring has the highest signal to noise ratio. The reference ring may be selected by giving priority to reference rings which have approximately the same radial sequential position as the selected captured ring. Alternatively, the matched captured ring with the largest ring radius is selected, which improves accuracy because the large ring circumference is more sensitive to rotation alignment and more robust to noise.
[00128] At step 1020 an angular slice operation in the spatial frequency domain is performed on a selected matched captured ring using the captured pattern spectrum and the associated reference ring using the reference pattern spectrum according to Equations [E4] and [E5], to create a captured angular slice and a reference angular slice. At step 1030 the captured angular slice and the reference angular slice are cross-correlated to create a slice correlation image. At step 1040 peak finding is performed on the slice correlation image by searching for the maximum intensity peak resulting in a peak position and peak strength. At step 1050 a decision is made whether to match the selected captured ring with the selected reference ring. If the peak strength is higher than a pre-determined threshold, then the rings are matched and the selected captured ring and the selected reference ring are stored in a list of matched captured rings and associated reference rings. To improve match accuracy, the peak strength can be normalised by dividing by the root mean square of the pixel values of the slice correlation image, and further normalised by dividing by the square root of the number of pixels in the slice correlation image.
Alternatively, the peak strength can be normalised by taking the ratio of the highest detected peak over the second highest detected peak. Alternatively, the match threshold can be set using an estimate of the image sensor noise, for example using a look up table of expected noise for the camera capture conditions including ISO speed and exposure length. If the selected rings are matched, then flow continues to step 1060, otherwise flow returns to 1010.
[00129] At step 1060 the rotation alignment is calculated using the peak position relative to the centre of the slice correlation image. This is the same operation as step 1160 in process 1100, however performing the calculation at step 1060 is more computationally efficient because it avoids having to repeat operations in the rest of process 1100. At step 1099 the ring matching process ends. When flow returns to process 800, the rotation estimation step 860 is skipped as the rotation alignment has already been calculated at step 1060.
[00130] In an alternative embodiment, the projection surface has significant curvature, such that the local alignment cannot be accurately described using only rotation, scale and translation. In this case, the local alignment can be described using an affine transform which includes shear and aspect ratio as well as rotation, scale and translation. The affine transform transforms the concentric circular rings in the calibration pattern spatial frequency domain spectrum into concentric elliptical rings in the captured pattern spatial frequency domain spectrum. The affine transform can be estimated by fitting an ellipse to the rings in the captured pattern spectrum. The elliptical parameters can be estimated using a coarse to fine estimation approach. In the coarse estimation step, the modulus of the captured pattern spectrum is multiplied by a set of candidate concentric circular rings with associated candidate radii and pre-determined ring widths and the sum of the product for each ring is taken as a coarse fit quality estimate. The ring with the best fit is used to initialise fine estimation by multiplying the captured pattern spectrum with multiple candidate ellipses with different elliptical parameters, which include the length of the major and minor axes of the ellipse and the polar angle of the major axis of the ellipse, where the candidate elliptical parameters are perturbations of the coarse fit ring. The sum of the elliptical products is used as a fine fit quality estimate and the elliptical parameters of the highest fine fit quality estimate are used to estimate the affine alignment excluding rotation and match a captured pattern ellipse with a calibration pattern ring. The affine alignment excluding rotation includes the scale.
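The coarse stage of the elliptical fit can be sketched as below: the spectrum modulus is multiplied by candidate circular ring masks and the best-scoring radius seeds the fine elliptical search. Normalising the score by the mask area is an assumption of this sketch (the text takes the raw sum of the product), added so that radii of different circumference compete fairly.

```python
import numpy as np

def coarse_ring_fit(spectrum_mod, candidate_radii, ring_width=2.0):
    """Coarse circular ring fit: score each candidate radius by the
    mean spectrum modulus inside a thin ring mask, and return the
    radius with the highest score."""
    h, w = spectrum_mod.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - cy, xx - cx)
    best_radius, best_score = None, -np.inf
    for q in candidate_radii:
        mask = np.abs(r - q) <= ring_width / 2.0
        score = spectrum_mod[mask].sum() / max(mask.sum(), 1)
        if score > best_score:
            best_radius, best_score = q, score
    return best_radius
```

The fine stage would then perturb this radius into candidate ellipses (major axis, minor axis, major-axis angle) and repeat the same masked-product scoring.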
[00131] The inverse of the estimated affine alignment excluding rotation is applied to the captured pattern image, which changes the captured pattern ellipse to a captured pattern circle, creating an adjusted captured pattern image. The rotation of the matched ring can be estimated using 1D correlation and peak finding of an angular slice around the ring between the matched ring in the adjusted captured pattern image and the corresponding ring in the calibration pattern. The rotation estimate can be combined with the affine alignment estimate excluding rotation to create an affine alignment estimate including rotation. The affine alignment estimate including rotation can then be applied to the calibration pattern image, and then the translation alignment can be estimated using 2D cross-correlation with the captured pattern image.
[00132] Alternatively, the coarse fit can be performed by calculating a sequence of radial slices on the captured pattern spectrum and then performing peak finding to estimate the radii of elliptical rings in the captured pattern image for each angle, creating a list of radii for each angle. To improve the reliability of elliptical ring detection, the radial slices can be binned over a range of angles, for example 15°. Elliptical parameters can be optimised to find the best fit to the list of radii versus angle.
[00133] The method according to various embodiments is described with reference to the flow process 1400 shown in Fig. 13. At step 1410 an image of the calibration pattern is obtained, the calibration pattern having a plurality of concentric rings in a frequency domain, wherein at least two of the plurality of concentric rings have a different radius. At step 1420 at least one ring of the plurality of concentric rings in the frequency domain is located in the obtained image. At step 1430 the located at least one ring in the obtained image is matched to a corresponding ring of the plurality of concentric rings in the calibration pattern. At step 1440 a scale of the obtained image is determined according to the match of the located at least one ring. At step 1450 the calibration pattern is decoded using the determined scale of the obtained image. The decoding may also include rotation, translation and transformation steps as described herein. The process ends at step 1460.
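The matching and scale-determination steps (1430 and 1440) can be sketched for the case where two spectral rings have been located: because the reference rings have non-redundant sequential ring ratios, the ratio of two located radii identifies the ring pair, and the matched reference radius then yields the image scale. The function name, the single-pair matching rule, and the tolerance are illustrative assumptions.

```python
def match_rings(located_radii, reference_radii, tol=0.05):
    """Match a pair of located spectral-ring radii to the reference
    rings by their radius ratio, then derive the image scale.
    Returns (index of matched reference ring, scale) or None."""
    r1, r2 = sorted(located_radii)[:2]
    ratio = r2 / r1
    ref = sorted(reference_radii)
    for i in range(len(ref) - 1):
        # Non-redundant sequential ratios make this match unambiguous.
        if abs(ref[i + 1] / ref[i] - ratio) < tol:
            # Spatial-frequency radii scale inversely with image scale,
            # so a pattern captured at half size has doubled ring radii.
            return i, ref[i] / r1
    return None
```

For example, reference radii of 10, 13 and 20 captured at half scale appear at radii 20, 26 and 40; locating the rings at 26 and 40 matches the 13/20 pair and recovers a scale of 0.5.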
[00134] In an example use case for display system geometry calibration, the geometry calibration is used to compensate for projection distortion caused by a curved projection screen. The user sets up a projector in front of the curved surface and initiates a projection screen calibration mode on the projector. The projector projects a calibration pattern chart on to the projection surface where the chart consists of a tiling of a calibration pattern image.
Alternatively, the calibration chart is added with low amplitude to a content image and projected such that the content is visible to the user but the chart has low visibility to the user. A camera captures an image of the calibration chart. Because the projection surface is curved, the calibration pattern image appears at multiple scales across the captured chart image. In addition, the distance between the projector and the projection surface is unknown, and the focal length of the projector lens may also be unknown. The projector estimates the calibration pattern alignment at a set of measurement points covering the projection surface. The alignment at each point is used together with a projector system calibration stored in the projector memory to estimate the depth of the projection surface, and therefore the shape and distance of the projection surface, and the focal length of the projector lens. The user then sets a content projection mode on the projector and the user supplies a content image or sequence of images to be projected, for example using a signal over an HDMI cable. The projector compensates for the projection distortion by using the estimated shape of the projection screen to distort the content image such that the projection appears undistorted to the user. Alternatively, the herein described arrangements can be used to estimate the depth of objects in a scene in front of the projector for other applications of depth estimation, such as gesture control or 3D shape measurement.
[00135] In another example use case for display system geometry calibration, multiple projectors are used together to display a large high resolution projection on a curved projection screen. The projectors have different projection resolutions and the cameras in each projector have different capture resolutions. Fig. 12 shows an example combined projection using 4 projectors where 1210 is a 2K projection at 1920x1080 pixels, 1220 is a 4K projection at 3840x2160 pixels, and 1230 and 1240 are 8K projections at 7680x4320 pixels. The cameras in each projector have captured resolution which is sufficient to provide accurate projection screen calibration at the projection resolution. In this example, the user has set up the projectors so that the projections overlap. Region 1250 is an overlap region between a 2K and 8K projection, region 1270 is an overlap region between a 4K and an 8K projection, region 1280 is an overlap region between two 8K projections, and region 1260 is an overlap region between a 2K projection, a 4K projection and two 8K projections. The projection regions are shown in Fig. 12 using rectangles, however before the projection screen is calibrated, the projections are distorted and the content images in the overlap regions are misaligned and the user sees a ghosted image.
[00136] The user initiates a projection screen calibration mode on the projectors. Each projector displays a calibration pattern chart according to the projector resolution. The 2K projector projects a calibration chart with support for 2K resolution, the 4K calibration chart has multiple scale support for 2K and 4K resolution, and the 8K calibration chart has multiple scale support for 2K, 4K and 8K resolution. The calibration charts are generated using unique pseudo-random seeds for each projector, so that the projection source of each captured pattern can be identified in the overlap regions. Optionally, the reference pattern ring radii are set differently on each projector so that the spectral rings are less likely to overlap in regions of the projection surface where the projections overlap. Using the multiple scale projected calibration charts, the 2K projector corresponding to region 1210 is able to calibrate the 2K projection surface using the 2K calibration chart projected by the 2K projector and also calibrate the relative positions of the adjacent 4K and 8K projectors using the 4K and 8K multiple scale calibration charts projected by the adjacent projectors. In addition, the 8K projector corresponding to region 1230 is able to calibrate the 8K projection surface using the 8K multiple scale calibration chart projected by the 8K projector and also calibrate the relative positions of the adjacent 2K, 4K and 8K projectors using the corresponding multiple scale projected calibration charts. In this way, the calibration accuracy is maximised for the highest resolution possible for each region in the combined projection surface. The user then sets a content projection mode on the projector and the user supplies a content image or sequence of images to be projected, for example using a signal over an HDMI cable.
The projectors use the combined projection surface calibration to project a combined content image which appears undistorted to the user and where the projections are aligned within the overlap regions so that the user does not see any ghosting of the content image.
[00137] In another example use case for calibrating imaging system geometry, the concentric ring pattern is used to calibrate a camera. The concentric ring pattern is tiled to create a calibration chart. The calibration chart is printed or displayed on a monitor, and multiple images of the calibration chart are captured on the camera being calibrated, with the camera at different view angles and distances with respect to the chart for each captured image. The concentric ring pattern is decoded at multiple positions on the chart for each captured image to create correspondences between physical positions on the test chart and pixel positions on the captured image. The correspondences are used to estimate the intrinsic parameters of the camera, which can include the focal length, principal point and geometric distortion co-efficients. Because the concentric ring pattern has high scale tolerance and high translation accuracy, accurate sub-pixel correspondences can be created over a wide range of view positions and angles with respect to the test chart. This wide range of views increases the accuracy of the camera calibration by creating stronger constraints on the intrinsic parameter estimates over a large volume of possible object positions.
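A standard building block of the intrinsic-parameter estimation described above is the planar homography that maps chart positions to captured-image positions for each view; one homography per view is the usual input to intrinsic calibration methods such as Zhang's. The direct linear transform (DLT) sketch below is illustrative and not part of the patented method; the function name and the minimum-point handling are assumptions.

```python
import numpy as np

def estimate_homography(chart_pts, image_pts):
    """Direct linear transform: estimate the 3x3 homography H mapping
    chart points (X, Y) to image points (x, y), given at least four
    correspondences in general position."""
    A = []
    for (X, Y), (x, y) in zip(chart_pts, image_pts):
        # Each correspondence contributes two linear constraints on H.
        A.append([-X, -Y, -1, 0, 0, 0, x * X, x * Y, x])
        A.append([0, 0, 0, -X, -Y, -1, y * X, y * Y, y])
    # The solution is the right singular vector associated with the
    # smallest singular value of the constraint matrix.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    # Normalise so that H[2, 2] == 1.
    return H / H[2, 2]
```

With sub-pixel correspondences from the concentric ring pattern over many views, the per-view homographies strongly constrain the focal length, principal point and distortion coefficients.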
[00138] The estimated camera calibration can then be used for depth estimation using stereo disparity between a pair of calibrated cameras, or depth estimation using the calibrated camera together with a projection of structured light on to the object for which the depth is being measured, or to initialise the camera calibration in shape measurement using photogrammetry, or for projection surface calibration using a camera together with a projector. Camera calibration can also be used for image quality measurements, for example to measure the reproduction quality of test patterns in captured images of a printed test chart, where the camera calibration is used to align a captured image of the test pattern with an ideal test pattern stored in the computer memory. If the printer performance is known, this method can be used to measure the imaging performance of the camera. If the camera imaging performance is known, this method can be used to measure the performance of the printer.
Industrial Applicability

[00139] The arrangements described are applicable to the computer and data processing industries and particularly for the image generation and calibration industries.
[00140] According to the various implementations, depth estimation or projection surface calibration may be provided by using estimated alignment with a curved surface, surface discontinuities or a surface not orthogonal to the projector optical axis. Further, projection distortion compensation is provided by using projection surface calibration. Also, the herein described implementations enable the stitching of content images from multiple projectors with different projection resolutions.
[00141] The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.
[00142] In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including", and not "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises" have correspondingly varied meanings.

Claims (17)

1. A method of decoding a calibration pattern, the method comprising: obtaining an image of the calibration pattern, the calibration pattern having a plurality of concentric rings in a frequency domain, wherein at least two of the plurality of concentric rings have a different radius; locating, in the obtained image, at least one ring of the plurality of concentric rings in the frequency domain; matching the located at least one ring in the obtained image to a corresponding ring of the plurality of concentric rings in the calibration pattern; determining a scale of the obtained image according to the match of the located at least one ring, and decoding the calibration pattern using the determined scale of the obtained image.

2. The method according to claim 1 wherein a 1D projection of the plurality of concentric rings in the obtained image is formed and a ratio of the radii is determined between at least two of the plurality of concentric rings using the 1D projection.

3. The method according to claim 1 wherein the plurality of concentric rings have non-redundant sequential ring ratios.

4. The method according to claim 1 wherein the matching further comprises the steps of determining a captured logarithmic angular projection of the located at least one ring in the obtained image and correlating the captured logarithmic angular projection with a logarithmic angular projection of the calibration pattern to determine the scale of the obtained image.

5. The method according to claim 1 wherein the matching further comprises the steps of determining an angular slice of the located at least one ring in the obtained image and correlating the angular slice with an angular slice of the calibration pattern to determine the scale of the obtained image.

6. The method according to claim 1 further comprising the step of determining a rotation of the obtained image according to the match of the located at least one ring.

7. The method according to claim 1 further comprising the step of transforming the concentric circular rings in the calibration pattern into concentric elliptical rings and matching at least one of the concentric elliptical rings with an elliptical ring in the obtained image.

8. The method according to claim 1, wherein the image of the calibration pattern is obtained from an output image and the decoding of the calibration pattern using the determined scale of the obtained image is used to adjust the output image.

9. An electronic device or computer system arranged to decode a calibration pattern, the device or system arranged to: obtain an image of the calibration pattern, the calibration pattern having a plurality of concentric rings in a frequency domain, wherein at least two of the plurality of concentric rings have a different radius; locate, in the obtained image, at least one ring of the plurality of concentric rings in the frequency domain; match the located at least one ring in the obtained image to a corresponding ring of the plurality of concentric rings in the calibration pattern; determine a scale of the obtained image according to the match of the located at least one ring, and decode the calibration pattern using the determined scale of the obtained image.

10. The electronic device or computer system according to claim 9, wherein the device or system is arranged to form a 1D projection of the plurality of concentric rings in the obtained image, and determine a ratio of the radii between at least two of the plurality of concentric rings using the 1D projection.

11. The electronic device or computer system according to claim 9, wherein the plurality of concentric rings have non-redundant sequential ring ratios.

12. The electronic device or computer system according to claim 9, wherein the device or system is arranged to determine a captured logarithmic angular projection of the located at least one ring in the obtained image and correlate the captured logarithmic angular projection with a logarithmic angular projection of the calibration pattern to determine the scale of the obtained image.

13. The electronic device or computer system according to claim 9, wherein the device or system is arranged to determine an angular slice of the located at least one ring in the obtained image and correlate the angular slice with an angular slice of the calibration pattern to determine the scale of the obtained image.

14. The electronic device or computer system according to claim 9, wherein the device or system is arranged to determine a rotation of the obtained image according to the match of the located at least one ring.

15. The electronic device or computer system according to claim 9, wherein the device or system is arranged to transform the concentric circular rings in the calibration pattern into concentric elliptical rings and match at least one of the concentric elliptical rings with an elliptical ring in the obtained image.

16. The electronic device or computer system according to claim 9, wherein the device or system is arranged to obtain an image of the calibration pattern from an output image and decode the calibration pattern using the determined scale of the obtained image to adjust the output image.

17. A computer readable storage medium having a computer program recorded therein, the program being executable by a computer apparatus to make the computer perform a method of decoding a calibration pattern, said program comprising: code for obtaining an image of the calibration pattern, the calibration pattern having a plurality of concentric rings in a frequency domain, wherein at least two of the plurality of concentric rings have a different radius; code for locating, in the obtained image, at least one ring of the plurality of concentric rings in the frequency domain; code for matching the located at least one ring in the obtained image to a corresponding ring of the plurality of concentric rings in the calibration pattern; code for determining a scale of the obtained image according to the match of the located at least one ring, and code for decoding the calibration pattern using the determined scale of the obtained image.
AU2016202168A 2016-04-07 2016-04-07 Image geometry calibration using multiscale alignment pattern Abandoned AU2016202168A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2016202168A AU2016202168A1 (en) 2016-04-07 2016-04-07 Image geometry calibration using multiscale alignment pattern


Publications (1)

Publication Number Publication Date
AU2016202168A1 (en) 2017-10-26

Family

ID=60118990




Legal Events

Date Code Title Description
MK5 Application lapsed section 142(2)(e) - patent request and compl. specification not accepted