US20230319246A1 - Systems and methods for calibrating display systems - Google Patents
Systems and methods for calibrating display systems
- Publication number
- US20230319246A1 (Application No. US 18/206,219)
- Authority
- US
- United States
- Prior art keywords
- patch
- blob
- test pattern
- address
- blobs
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3191—Testing thereof
- H04N9/3194—Testing thereof including sensor feedback
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
- G09G3/20—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
- G09G3/2003—Display of colours
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/00002—Diagnosis, testing or measuring; Detecting, analysing or monitoring not otherwise provided for
- H04N1/00071—Diagnosis, testing or measuring; Detecting, analysing or monitoring not otherwise provided for characterised by the action taken
- H04N1/00082—Adjusting or controlling
- H04N1/00087—Setting or calibrating
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/46—Colour picture communication systems
- H04N1/56—Processing of colour picture signals
- H04N1/60—Colour correction or control
- H04N1/603—Colour correction or control controlled by characteristics of the picture signal generator or the picture reproducer
- H04N1/6033—Colour correction or control controlled by characteristics of the picture signal generator or the picture reproducer using test pattern analysis
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2320/00—Control of display operating conditions
- G09G2320/02—Improving the quality of display appearance
- G09G2320/0233—Improving the luminance or brightness uniformity across the screen
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2320/00—Control of display operating conditions
- G09G2320/02—Improving the quality of display appearance
- G09G2320/0238—Improving the black level
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2320/00—Control of display operating conditions
- G09G2320/02—Improving the quality of display appearance
- G09G2320/0242—Compensation of deficiencies in the appearance of colours
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2320/00—Control of display operating conditions
- G09G2320/06—Adjustment of display parameters
- G09G2320/0666—Adjustment of display parameters for control of colour parameters, e.g. colour temperature
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2320/00—Control of display operating conditions
- G09G2320/06—Adjustment of display parameters
- G09G2320/0693—Calibration of display systems
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
- G09G3/006—Electronic inspection or testing of displays and display drivers, e.g. of LED or LCD displays
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N17/002—Diagnosis, testing or measuring for television systems or their details for television cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N17/02—Diagnosis, testing or measuring for television systems or their details for colour television signals
Definitions
- The specification relates generally to display systems, and more particularly to systems and methods for calibrating display systems.
- Display systems, such as systems with one or more projectors, cameras, or display devices, may be employed to project videos and images on a variety of different surfaces.
- The surfaces may be uneven or have their own coloring and/or imperfections, or the display devices may be misaligned and/or otherwise introduce imperfections and distortions, which cause the projected image to appear distorted or otherwise unfaithful to the desired image.
- An example method of calibrating a display system includes: displaying a test pattern including a plurality of blobs; detecting one or more base blobs in the displayed test pattern; identifying, based on the detected base blobs, patches of the test pattern, wherein each patch comprises one of the base blobs and a subset of additional blobs detected in the displayed test pattern; determining a patch location for at least one patch within the test pattern based on the subset of the additional blobs in the patch to determine a blob location for at least one detected blob; determining a calibration parameter for the display system based on the blob location and a detected attribute of the at least one detected blob; and calibrating the display system using the calibration parameter.
- An example system includes: a display system configured to display a test pattern onto a surface, the test pattern including a plurality of blobs; a camera configured to capture an image of at least a portion of the displayed test pattern; and a processor configured to: detect one or more base blobs in the test pattern; identify, based on the detected base blobs, a patch of the test pattern, wherein the patch comprises one of the base blobs and a subset of additional blobs detected in the test pattern; determine a patch location for the patch within the test pattern based on the subset of the additional blobs in the patch to determine a blob location for at least one detected blob; determine a calibration parameter for the display system based on the blob location and a detected attribute of the at least one detected blob; and calibrate the display system using the calibration parameter.
- Another example method of calibrating a display system includes: displaying a test pattern including a plurality of blobs; identifying patches of the test pattern, wherein each patch comprises a subset of blobs detected in the displayed test pattern; determining a patch location for at least one patch within the test pattern based on the blobs in the patch; determining a blob location for at least one detected blob in the patch based on the patch location; determining a calibration parameter for the display system based on the blob location and a detected attribute of the at least one detected blob; and calibrating the display system using the calibration parameter.
- FIG. 1 depicts a block diagram of an example system for calibrating a display system;
- FIG. 2 depicts a schematic diagram of an example patch of a test pattern for calibrating a display system;
- FIG. 3 depicts a block diagram of certain internal components of the projector of FIG. 1;
- FIG. 4 depicts a flowchart of an example method of calibrating a display system;
- FIG. 5 depicts a flowchart of an example method of detecting base blobs at block 415 of the method of FIG. 4;
- FIG. 6 depicts a flowchart of an example method of determining a patch location at block 425 of the method of FIG. 4;
- FIG. 7 depicts a flowchart of an example method of verifying a patch address using hyper-addressing;
- FIG. 8 depicts a schematic diagram of an example macro-patch of a test pattern for calibrating a display system;
- FIG. 9 depicts a flowchart of an example method of determining a calibration parameter at block 430 of the method of FIG. 4;
- FIG. 10 depicts a flowchart of an example method of adjusting a camera parameter; and
- FIG. 11 depicts a flowchart of an example method of adjusting a display system parameter.
- The input data may be adjusted by the display system to calibrate the output to better approximate the target.
- The display system may display a test pattern onto the surface. An image of the displayed test pattern, or at least a portion of it, may be captured by a camera, and the image analyzed to determine how to calibrate the display system.
- Conventionally, multiple test patterns are required, which may make the calibration process time-consuming and inconvenient when initially setting up a projector.
- An example test pattern in accordance with the present specification includes a plurality of blobs arranged in patches, with each patch having a white base blob defining the patch, and red, green, and blue reference blobs.
- The arrangement of the blobs in the patch and the inclusion of white, red, green, and blue blobs allow the test pattern to be used for color compensation, geometric alignment, and luminance correction with a single-shot test pattern.
- A processor may detect the white base blobs in the projected pattern, identify patches of the test pattern based on the base blobs, identify the reference blobs in the patches, and use the reference blobs to decode the colors of the additional blobs in each patch.
- The test pattern may be arranged such that the colors of the additional blobs in each patch define a patch address that allows the processor to locate the patch within the test pattern.
- The processor may use the location of the patch to accurately compare a target attribute (i.e., the input to the test pattern) and a detected attribute (i.e., as displayed on the surface) of a given blob, and compensate or apply a calibration parameter as appropriate.
- FIG. 1 depicts a system 100 for calibrating a display system, such as a projector 104.
- The present example will be described in conjunction with the projector 104; however, it will be understood that calibration of other suitable display systems and devices is also contemplated.
- The projector 104 is configured to project a test pattern 108 onto a surface 112.
- The system 100 may further include a camera 116 (e.g., an optical camera) to capture an image of the projected test pattern 108.
- The camera 116 may be a discrete component of the system 100, as shown, or the camera 116 may be integrated into the projector 104.
- The image of the projected test pattern 108 captured by the camera 116 may then be analyzed to identify calibration parameters for the projector 104 in order to calibrate the projector 104 with respect to the surface 112.
- The calibration parameters may adjust the color, luminance, geometric alignment, distortion, color convergence, focus, or the like, to allow the projector 104 to subsequently project other images or videos with high clarity, contrast, and appropriate color onto the surface 112.
- The test pattern 108 includes features to facilitate the calibration of the projector 104 with respect to color, luminance, geometric alignment, distortion, focus, color convergence, and the like.
- The test pattern 108 may further allow the projector 104 to be calibrated for focus and exposure.
- The test pattern 108 is formed of a plurality of blobs 120, each of which is a region of a given color.
- The blobs 120 may be squares, circles, other geometric shapes, or other suitable forms. Further, each of the blobs 120 may have the same form as the other blobs 120, or the blobs 120 may have different forms.
- The blobs 120 of the test pattern 108 may be organized to form patches 124.
- Each patch 124 includes a subset of the blobs 120 and has certain properties for use in the calibration of the projector 104, as will be further described below.
- The patch 124 includes nine blobs 120, arranged in a three-by-three grid.
- The nine blobs 120 include a base blob 200, three reference blobs 204-1, 204-2, 204-3 (referred to herein generically as a reference blob 204 or collectively as reference blobs 204; this nomenclature is also used elsewhere herein), and five additional blobs 208-1, 208-2, 208-3, 208-4, 208-5.
- The base blob 200 is a blob which may be used to identify the patch 124 from the blobs detected in the projected test pattern 108.
- The blobs 120 forming the patch 124 have a certain predefined spatial relationship to the base blob 200.
- The patch 124 may be defined as the base blob 200 and the eight nearest neighbor blobs to the base blob 200 (i.e., the four blobs directly adjacent to the base blob 200 and the four blobs diagonally adjacent to the base blob 200, such that the base blob 200 is in the center of the three-by-three array of blobs). That is, each patch 124 may include a base blob 200 at the center of the patch 124. In other examples, other spatial relationships of the base blob 200 and the patch 124 are contemplated.
- The base blob 200 may be selected to have a distinctive color or other feature detectable in the projected test pattern 108, and consistently distinguishable from the other blobs 120 in the test pattern 108.
- In the present example, the base blob 200 is white in color, and hence will be the brightest or most intense detected blob, in particular amongst its eight nearest neighbors.
- In other examples, the base blob 200 may have a distinct shape, or may be additionally distinguished based on the surrounding blobs.
- The reference blobs 204 are blobs in the patch which may be used as points of reference to orient the patch 124 and/or as color references to enable color calibration of the projector 104, particularly on adverse surfaces, or for other reference purposes for further calibrating the projector 104.
- The first reference blob 204-1 is located in the top left corner of the three-by-three array of blobs in the patch 124,
- the second reference blob 204-2 is located in the top right corner, and
- the third reference blob 204-3 is located at the bottom center.
- In the present example, the first reference blob 204-1 is a red blob,
- the second reference blob 204-2 is a green blob, and
- the third reference blob 204-3 is a blue blob.
- In other examples, the reference blobs 204 may be selected to have other distinguishable colors or features.
- The designated locations of the reference blobs 204 cause the patch 124 to be rotationally and reflectively asymmetric, and hence the reference blobs 204 may be used, for example, to determine the orientation of the test pattern (i.e., since the red reference blob 204-1 is in the top left corner, relative to the white base blob 200, etc.), as well as whether the projector 104 is a front projector or a rear projector.
- Since the reference blobs 204 in the present example cover the three primary colors of red, green, and blue,
- the reference blobs 204 may be used as references for color identification and correction.
- The red, green, and blue reference blobs 204 may be assumed to be the closest in hue to the original red, green, and blue colors, and to suffer only from intensity issues. Accordingly, their appearance on the surface 112 may be used as a reference for the appearance of other colors with red, green, and blue hues on the surface 112.
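As a concrete illustration of this idea, the attenuation that the surface imposes on each primary channel can be estimated from the detected reference blobs and then inverted for other colors. This is a minimal sketch under an assumed linear per-channel model; the function names and the 255-based scale are assumptions for illustration, not the specification's method.

```python
def channel_gains(detected_red, detected_green, detected_blue, full=255.0):
    """Per-channel gain implied by how the pure red, green, and blue
    reference blobs appear on the surface (assumed linear model)."""
    return (detected_red[0] / full,
            detected_green[1] / full,
            detected_blue[2] / full)

def compensate(color, gains):
    """Scale a detected (R, G, B) color back up by the inverse of the
    estimated surface gains, clamping to the 8-bit range."""
    return tuple(min(255, round(c / g)) for c, g in zip(color, gains))
```

For example, if the red reference blob is detected at 80% of full intensity, every red component measured on that surface region can be divided by 0.8 before comparison against the target.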
- The five additional blobs 208 are the remaining blobs, which define a patch address 212 for the patch 124.
- The five additional blobs 208 may be colored or greyscale blobs.
- The colors of the additional blobs 208 may be selected from a predefined list of blob colors.
- The blob colors may be, for example, the secondary and tertiary colors. Based on the spatial relationships of the additional blobs 208 to the reference blobs 204, the additional blobs 208 may be ordered to form an ordered list.
- The additional blob 208 adjacent to the blue reference blob 204-3 and in the same column as the red reference blob 204-1 is designated as the first additional blob 208-1.
- The remaining additional blobs 208 may be sequentially ordered by moving clockwise (i.e., towards the red reference blob 204-1 and away from the blue reference blob 204-3) through the additional blobs 208 to produce the ordered list.
- The colors C1, C2, C3, C4, and C5 of the additional blobs 208, in their given order, define the patch address 212.
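One way to view this scheme is that the ordered colors C1 through C5 act as digits of a number whose base is the size of the blob-color list. The sketch below illustrates that interpretation; the palette and function names are assumptions for illustration and are not the specification's actual color list.

```python
# Hypothetical palette of secondary/tertiary blob colors (illustrative only).
PALETTE = ["cyan", "magenta", "yellow", "orange", "chartreuse", "violet"]

def encode_address(colors):
    """Treat an ordered list of blob colors as base-len(PALETTE) digits
    and fold them into a single integer patch address."""
    address = 0
    for c in colors:
        address = address * len(PALETTE) + PALETTE.index(c)
    return address

def decode_address(address, length=5):
    """Recover the ordered color list from an integer patch address."""
    colors = []
    for _ in range(length):
        address, idx = divmod(address, len(PALETTE))
        colors.append(PALETTE[idx])
    return list(reversed(colors))
```

With six colors and five ordered positions, such a scheme distinguishes up to 6^5 = 7776 patch addresses, which is why the fixed ordering convention matters.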
- In other examples, the patch 124 may include a different number of blobs 120, different configurations of the blobs 120, different colors or properties for the base blob 200 and the reference blobs 204, and the like.
- For example, the patch 124 could be an array of a different size, hexagonally tiled, or use an arrangement other than the base blob 200 and the three reference blobs 204.
- That is, the patch 124 need not include the base blob 200 and/or the three reference blobs 204 and may instead be identifiable based on another arrangement and/or relationship between the blobs 120 forming the patch 124.
- The projector 104 includes a controller 300 and a memory 304.
- The projector 104 may further include a communications interface 308 and, optionally, an input/output device (not shown).
- The controller 300 may be a processor such as a central processing unit (CPU), a microcontroller, a processing core, or similar.
- The controller 300 may include multiple cooperating processors.
- The functionality implemented by the controller 300 may be implemented by one or more specially designated hardware and firmware components, such as a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), or the like.
- The controller 300 may be a special-purpose processor implemented via the dedicated logic circuitry of an ASIC or FPGA to enhance the processing speed of the calibration operation discussed herein.
- The controller 300 is interconnected with a non-transitory computer-readable storage medium, such as the memory 304.
- The memory 304 may include a combination of volatile memory (e.g., random access memory or RAM) and non-volatile memory (e.g., read-only memory or ROM, electrically erasable programmable read-only memory or EEPROM, flash memory).
- The controller 300 and the memory 304 may comprise one or more integrated circuits. Some or all of the memory 304 may be integrated with the controller 300.
- The memory 304 stores a calibration application 316 which, when executed by the controller 300, configures the controller 300 and/or the projector 104 to perform the various functions discussed below in greater detail and related to the calibration operation of the projector 104.
- The application 316 may be implemented as a suite of distinct applications.
- The memory 304 also stores a repository 320 configured to store calibration data for the calibration operation, including a list of blob colors used in the test pattern, a list of valid addresses used in the test pattern, a list of hyper-addresses used in the test pattern, and other rules and data for use in the calibration operation of the projector 104.
- The communications interface 308 is interconnected with the controller 300 and includes suitable hardware (e.g., transmitters, receivers, network interface controllers, and the like) allowing the projector 104 to communicate with other computing devices.
- The specific components of the communications interface 308 are selected based on the type of network or other links over which the projector 104 is to communicate.
- The communications interface 308 may allow the projector 104 to receive images of the projected test pattern from the camera 116, in examples where the camera 116 is not integrated with the projector 104.
- FIG. 4 depicts a flowchart of an example method 400 of calibrating a projector.
- The method 400 will be described in conjunction with its performance in the system 100, and in particular via execution of the application 316 by the processor 300, with reference to the components illustrated in FIGS. 1-3.
- In other examples, some or all of the method 400 may be performed by other suitable devices, such as a media server or the like, or in other suitable systems.
- The projector 104 projects the test pattern 108 onto the surface 112, and the camera 116 captures an image of the test pattern 108 as projected onto the surface 112.
- The image captured by the camera 116 represents the appearance of the test pattern 108 on the surface 112, including any geometric deformation, color distortion, and the like, which appear as a result of the properties of the surface 112.
- The camera 116 may transmit the captured image to the projector 104 for further processing, and in particular to allow the processor 300 to compute calibration parameters for the projector 104.
- In other examples, the calibration parameters may be computed at a separate computing device, such as a connected laptop or desktop computer, a server, or the like. Accordingly, in such examples, the camera 116 may transmit the captured image to the given computing device to compute the calibration parameters for the projector 104.
- At block 410, the processor 300 analyzes the captured image to detect the blobs 120 of the test pattern 108.
- The blobs 120 may be detected using standard computer vision techniques, such as convolution, differential methods, local extrema detection, or the like.
- At block 415, the processor 300 detects one or more base blobs in the projected test pattern 108.
- That is, the processor 300 may identify, from the blobs 120 detected at block 410, which blobs 120 satisfy the criteria for base blobs and designate a subset of the blobs 120 as base blobs.
- Referring to FIG. 5, an example method 500 of identifying base blobs in the projected test pattern 108 is depicted.
- The method 500 will be described in conjunction with identifying base blobs in the test pattern 108 having patches 124 organized in the manner described in conjunction with FIG. 2. It will be understood that in other examples, where the base blobs have other identifying characteristics (e.g., shape), other methods of identifying the base blobs are contemplated.
- At block 505, the processor 300 selects a blob 120 to analyze to determine whether or not it is a base blob. Accordingly, the processor 300 may select a blob 120 detected at block 410 which has not yet been validated or invalidated as a base blob.
- At block 510, the processor 300 identifies neighboring blobs 120 of the blob 120 selected at block 505. For example, when the base blobs are located in the center of a patch 124, the processor 300 may retrieve the eight nearest neighbors of the selected blob 120.
- The test pattern 108 may be arranged such that adjacent blobs 120 are spaced apart by a predefined amount. For example, the space between adjacent blobs 120 may be about half the width of a blob 120. Accordingly, the processor 300 may look for blobs 120 detected at block 410 which are within 2.5 widths of the selected blob 120 to identify its neighbors.
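The neighbor search described above amounts to a radius query over the detected blob centers. The sketch below is an illustrative implementation under the stated spacing assumption (adjacent blobs about half a blob width apart, so neighbors fall within 2.5 widths); the function and parameter names are assumptions.

```python
import math

def find_neighbors(selected, blobs, blob_width, max_neighbors=8):
    """Return up to max_neighbors blob centers within 2.5 blob widths of
    the selected blob center, nearest first."""
    radius = 2.5 * blob_width
    # Distance from the selected blob to every other detected blob.
    candidates = [(math.dist(selected, b), b) for b in blobs if b != selected]
    # Keep only blobs inside the search radius, sorted by distance.
    candidates = [(d, b) for d, b in candidates if d <= radius]
    candidates.sort(key=lambda pair: pair[0])
    return [b for _, b in candidates[:max_neighbors]]
```

With blobs one width wide and half a width apart, adjacent centers sit 1.5 widths away and diagonal centers about 2.12 widths away, so the 2.5-width radius captures exactly the eight nearest neighbors while excluding the next ring.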
- At block 515, the processor 300 selects one of the neighbors identified at block 510 for comparison against the blob 120 selected at block 505.
- In particular, the processor 300 may select a neighboring blob 120 which has not yet been compared to the selected blob 120.
- The processor 300 then compares the intensity of the selected neighboring blob 120 to that of the selected blob 120.
- For example, the processor 300 may sum the red, green, and blue (RGB) components of the selected neighboring blob 120 and the selected blob 120 and compare the two sums.
- The processor 300 may sample the RGB components of a given blob 120 at its center or at a predefined set of coordinates within the blob 120, or the processor 300 may average the RGB components over the blob 120, or use other suitable methods of obtaining RGB component values for the blob 120.
- That is, the processor 300 may determine whether the intensity (i.e., the sum of the RGB components) of the selected blob 120 is greater than the intensity of the selected neighboring blob 120.
- If the intensity of the selected blob 120 is not greater than that of the selected neighboring blob 120, the processor 300 proceeds to block 535.
- At block 535, the processor 300 invalidates the blob 120 selected at block 505 as a potential base blob. That is, since there is at least one neighboring blob 120 which is more intense than the selected blob 120, the processor 300 may deduce that the selected blob 120 is not a white blob, since neighboring blobs 120 are likely to suffer from similar color distortions, and hence the white blobs would remain more intense than their neighbors.
- Accordingly, the processor 300 may conclude that the selected blob 120 is not a base blob in the test pattern 108.
- The processor 300 may subsequently return to block 505 to continue selecting blobs 120 to identify the base blobs in the test pattern 108.
- If the intensity of the selected blob 120 is greater than that of the selected neighboring blob 120, the processor 300 proceeds to block 525.
- At block 525, the processor 300 may invalidate the neighboring blob 120 selected at block 515 as a base blob, since it has at least one neighboring blob 120 (namely, the selected blob 120) which is more intense than it. Further, the processor 300 determines whether or not the selected blob 120 has more neighboring blobs 120.
- If the selected blob 120 has more neighboring blobs 120, the processor 300 returns to block 515 to select a further neighboring blob 120 to compare intensities.
- If the selected blob 120 has no further neighboring blobs 120, the processor 300 proceeds to block 530.
- At block 530, the processor 300 validates the blob 120 selected at block 505 as a base blob. That is, having determined that the selected blob 120 has a higher intensity than each of its neighbors, the processor 300 may determine that the selected blob 120 is white in color and therefore a base blob 200. The processor 300 may then return to block 505 to continue assessing blobs 120 to find the base blobs 200 in the test pattern 108.
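The core test of method 500 reduces to comparing RGB sums between a candidate blob and its neighbors. A minimal sketch, assuming each blob's color has already been sampled as an (R, G, B) tuple; the helper names are assumptions:

```python
def intensity(rgb):
    """Intensity of a blob, taken as the sum of its R, G, and B components."""
    return sum(rgb)

def is_base_blob(selected_rgb, neighbor_rgbs):
    """A blob is validated as a base blob only if it is strictly more
    intense than every one of its neighboring blobs."""
    return all(intensity(selected_rgb) > intensity(n) for n in neighbor_rgbs)
```

Because the white blob has the largest component sum among any realistic set of colored neighbors, and neighboring blobs suffer similar surface distortion, this local comparison survives global intensity shifts.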
- Returning to FIG. 4, at block 420, the processor 300 uses the base blobs 200 to identify the patches 124.
- For example, the processor 300 may define a patch 124 as a base blob 200 and the eight nearest neighboring blobs 120 of the base blob 200, for each base blob 200 identified at block 415.
- The processor 300 may additionally verify a candidate blob as a base blob 200 based on the arrangement of the other blobs 120 within the patch 124 defined by the candidate blob. For example, the processor 300 may identify, within the patch 124, a red blob, a green blob, and a blue blob. In other examples, the processor 300 may select different color reference blobs. The processor 300 may additionally verify that the red, green, and blue blobs are located in the patch 124 relative to the base blob 200 and to one another according to the predefined configuration of the patch 124. In some examples, if the red, green, and blue blobs identified in the patch 124 do not satisfy the predefined configuration of the patch 124, the processor 300 may determine that the candidate white blob is not in fact a valid base blob 200.
- In other examples, the processor 300 may omit block 415 entirely, and hence at block 420, may identify the patches solely on the basis of the blobs in the patch.
- For example, the processor may select a group of blobs in a sliding window (e.g., a 3×3 array, a 2×2 array, or a window otherwise selected based on the size and shape of an expected patch).
- The group of blobs may be compared against a list of valid patches, and groups whose colors and positions match a valid patch may be identified as patches.
- Additionally, the processor 300 may reject groups of blobs which are formed from partial groups of multiple patches. In particular, such an identification mechanism is suitable when each patch in the test pattern is unique.
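The sliding-window alternative can be sketched as matching 3×3 windows of detected blob colors against a list of valid patches. The data layout below (a 2D grid of color labels, with valid patches given as 3×3 tuples) is an assumption for illustration:

```python
def find_patches(grid, valid_patches):
    """Slide a 3x3 window over a 2D grid of blob color labels and return
    the (row, col) of the top-left corner of each window whose layout
    matches an entry in valid_patches (a set of 3x3 tuples of labels)."""
    matches = []
    for r in range(len(grid) - 2):
        for c in range(len(grid[0]) - 2):
            window = tuple(tuple(grid[r + i][c + j] for j in range(3))
                           for i in range(3))
            if window in valid_patches:
                matches.append((r, c))
    return matches
```

Because only full matches are accepted, windows straddling two adjacent patches fall through, which implements the rejection of partial groups noted above, provided each patch layout in the pattern is unique.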
- At block 425, the processor 300 determines the patch location for at least one of the patches 124 identified at block 420.
- In some examples, the processor 300 may identify the patch location for all the patches 124 identified at block 420, while in other examples, the processor 300 may select a subset of patches 124 for which to identify the patch location. The subset may be selected, for example, based on the spatial arrangement of the patches 124 (e.g., the location of each patch in an alternating or checkerboard pattern or the like) or other suitable selection criteria.
- The processor 300 may use the eight nearest neighboring blobs 120 in the patch 124 to determine the patch location based on predefined configurations and properties of each patch 124.
- More generally, the processor 300 may use a suitable subset of blobs 120 in the patch 124 which uniquely identifies the patch 124 and allows the patch 124 to be located in the test pattern. For example, referring to FIG. 6, a flowchart of an example method 600 of determining a patch location is depicted.
- The processor 300 selects a patch 124 to locate.
- For example, the patch 124 may be selected based on its base blob 200.
- The processor 300 may then identify the reference blobs 204 in the patch 124.
- the processor 300 may identify the reference blobs 204 by selecting the blobs 124 which have RGB components which are closest to a red hue, a green hue, and a blue hue, respectively.
- the processor 300 may use a least-squares method, a cosine distance, or other suitable method to determine the distance of the color (i.e., based on its RGB components) of a given blob 120 to the RGB component values of a red blob.
- the blob 120 in the patch 124 which is closest to a red color may be determined by the processor 300 to be the red reference blob 204 - 1 .
- the processor 300 may identify the blobs 120 in the patch 124 which are closest to a green color and a blue color as the green reference blob 204 - 2 and the blue reference blob 204 - 3 , respectively.
- the processor 300 may also identify the remaining blobs 120 of the patch 124 as additional blobs 208 .
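As a sketch of this identification step, the reference blobs can be found by minimizing a least-squares color distance to pure red, green, and blue, as described above. This is an illustrative example rather than the patent's implementation; the blob identifiers and detected RGB values are hypothetical.

```python
def squared_distance(c1, c2):
    """Least-squares (squared Euclidean) distance between two RGB colors."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

def identify_reference_blobs(blobs):
    """Pick the blobs closest to pure red, green, and blue as reference blobs.

    `blobs` maps a blob id to its detected (R, G, B); returns the reference
    assignments and the remaining blobs, which become the additional blobs.
    """
    targets = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}
    references = {}
    remaining = dict(blobs)
    for name, target in targets.items():
        closest = min(remaining, key=lambda b: squared_distance(remaining[b], target))
        references[name] = closest
        del remaining[closest]  # a blob cannot serve as two references
    return references, remaining

refs, additional = identify_reference_blobs({
    "b1": (240, 30, 25),   # reddish under ambient light
    "b2": (20, 235, 40),   # greenish
    "b3": (10, 25, 250),   # bluish
    "b4": (200, 200, 40),  # a yellowish additional blob
})
# refs == {"red": "b1", "green": "b2", "blue": "b3"}
```

A cosine distance could be substituted for the squared distance without changing the structure of the selection.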
- the processor 300 orders the additional blobs 208 into an ordered list. To do so, the processor 300 may first orient the patch 124 using the reference blobs 204. For example, the designated locations of the reference blobs 204 may cause the patch 124 to be rotationally and reflectively asymmetrical, and hence the processor 300 may use the red reference blob 204 - 1 to define the top left corner of the patch 124, and the green reference blob 204 - 2 to define the top right corner of the patch 124. The processor 300 may additionally confirm the orientation of the patch 124 by verifying that the blue reference blob 204 - 3 is in the bottom center.
- the processor 300 may sort the additional blobs 208 based on their location in the patch 124 to identify their position in the ordered list.
- the additional blob 208 in the bottom left corner may be designated as the first additional blob 208 - 1 .
- the additional blobs 208 may then be added to the ordered list proceeding in a clockwise direction, from the first additional blob 208 - 1 .
- the additional blob 208 immediately above the first additional blob 208 - 1 may be designated as the second additional blob 208 - 2 .
- the ordered list as generated from a specific, predefined orientation of the patch 124 allows the additional blobs 208 to encode a patch address, without risk of duplicates based on using the same additional blobs 208 in a different order for a different patch 124 .
- the ordered list of additional blobs 208 in the example patch 124 depicted in FIG. 2 is [ 208 - 1 , 208 - 2 , 208 - 3 , 208 - 4 , 208 - 5 ].
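A minimal sketch of this ordering, assuming a hypothetical 3×3 patch layout (white base blob at the center; red, green, and blue reference blobs at the top left, top right, and bottom center after orientation) so that five additional blobs remain:

```python
# (col, row) grid positions after orientation, row 0 = top of the patch.
# Clockwise from the bottom-left corner: up the left edge, across the top
# center, down the right edge.
CLOCKWISE_PATH = [(0, 2), (0, 1), (1, 0), (2, 1), (2, 2)]

def order_additional_blobs(additional):
    """Return additional blob ids in clockwise order from the bottom-left.

    `additional` maps (col, row) grid positions (after the patch has been
    oriented by its reference blobs) to blob ids.
    """
    return [additional[pos] for pos in CLOCKWISE_PATH]

ordered = order_additional_blobs({
    (0, 2): "208-1",  # bottom-left: first in the ordered list
    (0, 1): "208-2",  # immediately above the first additional blob
    (1, 0): "208-3",
    (2, 1): "208-4",
    (2, 2): "208-5",
})
# ordered == ["208-1", "208-2", "208-3", "208-4", "208-5"]
```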
- the processor 300 may then determine the patch address 212 for the patch 124 .
- the processor 300 selects an additional blob 208 from the ordered list.
- the additional blob 208 may be the next additional blob 208 which has not yet been processed to generate the patch address 212 .
- the processor 300 may begin with the first additional blob 208 - 1 at the first iteration of block 620 .
- the processor 300 determines the color of the additional blob 208 selected at block 620 .
- the processor 300 predicts the intended target color (i.e., the input color) of the selected additional blob 208 . That is, rather than simply taking the color of the additional blob 208 as projected onto the surface 112 , the processor 300 may use the RGB component values of the white base blob 200 and the red, green and blue reference blobs 204 to predict the input color for the selected additional blob 208 .
- the processor 300 may predict that the input blue component value of the additional blob 208 is similar to the input blue component value of the blue reference blob 204 - 3 , that is, 255.
- the processor 300 may scale the other detected RGB component values of the additional blob 208 according to the detected RGB component values of the reference blobs 204 and the base blob 200 to predict the other input RGB component values of the additional blob and hence decode the input blob color of the additional blob. More specifically, the prediction may include scaling and/or adjusting the values of the detected blobs 208 to adjust for variations in background or ambient light to allow decoding of the input blob color to be more accurate.
- the processor 300 may additionally verify the predicted input color against a predefined list of blob colors used in the test pattern 108 stored in the memory 304 . That is, rather than using combinations of any and/or all colors (i.e., all RGB component values), the test pattern 108 may contain blobs 120 with colors selected from the predefined list of blob colors.
- the predefined list of blob colors may include the secondary and tertiary colors.
- the processor 300 may verify the predicted input color and/or correct the blob color by selecting a new predicted input color based on the closest blob color on the predefined list of blob colors.
- the processor 300 may use a least-squares computation to determine the blob color on the predefined list of blob colors which is closest to the predicted input color and designate the closest blob color as the new predicted input color.
- the processor 300 may only designate the closest blob color as the new predicted input color if the distance to the new predicted input color is below a threshold distance.
- the processor 300 may defer prediction of the blob color for a more holistic verification of the patch 124 , as described below.
- the processor 300 adds the predicted input color for the blob 208 selected at block 620 to the patch address to build the patch address 212 .
- the patch address 212 is similarly ordered by the associated colors of the additional blobs 208 in the ordered list.
- the patch address 212 of the example patch 124 depicted in FIG. 2 is [C 1 , C 2 , C 3 , C 4 , C 5 ].
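The decoding and address-building steps above might be sketched as follows. The per-channel scaling model (normalizing each detected channel by the detected white base blob), the color list, and the snap threshold are assumptions of this sketch, not the patent's exact formulas:

```python
BLOB_COLOURS = [  # hypothetical predefined list of secondary/tertiary input colors
    (255, 255, 0), (255, 0, 255), (0, 255, 255),
    (255, 128, 0), (128, 0, 255), (0, 255, 128),
]

def predict_input_colour(detected, white):
    """Scale detected RGB by the detected white base blob to undo ambient tint."""
    return tuple(min(255, round(255 * d / max(w, 1))) for d, w in zip(detected, white))

def snap_to_palette(colour, palette=BLOB_COLOURS, max_distance=30000):
    """Replace the predicted color with the closest palette entry, if close enough."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(c, colour))
    best = min(palette, key=dist)
    return best if dist(best) <= max_distance else colour

def build_patch_address(ordered_detected, white):
    """Decode each additional blob in order; the address is the ordered color list."""
    return [snap_to_palette(predict_input_colour(d, white)) for d in ordered_detected]

# Dim, slightly tinted capture of a yellow and a magenta blob, where the white
# base blob was detected as (230, 220, 210):
address = build_patch_address([(231, 219, 12), (228, 8, 205)], white=(230, 220, 210))
# address == [(255, 255, 0), (255, 0, 255)]
```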
- the processor 300 determines whether there are any more additional blobs 208 in the ordered list.
- the processor 300 returns to block 620 to select the next additional blob 208 in the ordered list and add its associated color to the patch address 212 .
- the processor 300 proceeds to block 640 .
- the processor 300 uses the patch address 212 to determine the patch location of the patch 124 .
- the processor 300 may retrieve, from the memory 304 , a predefined look-up table or other suitable data structure which defines a patch location associated with each patch address 212 .
- the patch location may be the coordinates of the patch 124 within the test pattern 108 .
- the patch location may be expressed, for example, in terms of pixel coordinates of a given corner of the patch 124 (e.g., the top left corner), pixel coordinates of a center of the patch 124 , coordinates relative to other patches 124 (e.g., designating the top left patch as 0,0), or other suitable means.
- the processor 300 may directly compute the patch location of the patch 124 based on the patch address 212 and a predefined set of rules for computing the patch location.
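A minimal sketch of the look-up-table approach described above; the addresses and coordinates below are hypothetical:

```python
# Predefined table mapping each patch address (the ordered tuple of
# additional-blob colors) to the patch coordinates within the test pattern,
# here expressed relative to other patches (top left patch as (0, 0)).
PATCH_LOCATIONS = {
    ((255, 255, 0), (255, 0, 255)): (0, 0),
    ((255, 0, 255), (0, 255, 255)): (1, 0),
    ((0, 255, 255), (255, 255, 0)): (0, 1),
}

def locate_patch(address):
    """Return (col, row) of the patch in the test pattern, or None if unknown."""
    return PATCH_LOCATIONS.get(tuple(address))

location = locate_patch([(255, 0, 255), (0, 255, 255)])
# location == (1, 0)
```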
- the processor 300 may additionally verify the patch address against a predefined list of valid patch addresses stored in the memory 304 .
- the predefined list of valid patch addresses includes patch addresses 212 actually employed in the test pattern 108 . That is, the list of valid patch addresses is generated based on the input colors to the test pattern 108 . Accordingly, the test pattern 108 is preferably arranged such that each valid patch address appears only once on the list of valid patch addresses. The patch addresses may thus be uniquely verified, as well as used to uniquely locate the patch 124 within the test pattern 108 .
- the processor 300 may perform verification of the patch address against the valid patch addresses, for example based on a full matching, a partial matching, a distance computation, or other suitable means. When the determined patch address is not a valid patch address, the processor 300 may correct the patch address based on the list of valid patch addresses and, for example, the closest partial matching.
- the processor 300 may verify the patch address 212 against the predefined list of valid patch addresses stored in the memory 304 based, in part or in whole, on the predicted input colors of each of the additional blobs (i.e., as opposed to the blob colors as selected from the predefined list of blob colors).
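The verification and correction against valid patch addresses might look like the following sketch, where the match score is simply the number of agreeing positions (one possible form of the partial matching mentioned above):

```python
# Hypothetical list of the patch addresses actually employed in the test pattern.
VALID_ADDRESSES = [
    ((255, 255, 0), (255, 0, 255), (0, 255, 255)),
    ((0, 255, 255), (255, 255, 0), (255, 0, 255)),
]

def verify_or_correct(address, valid=VALID_ADDRESSES):
    """Return the input address if valid, else the valid address sharing the
    most positions with it (closest partial match)."""
    address = tuple(address)
    if address in valid:
        return address  # full match: address is verified as-is
    matches = lambda v: sum(a == b for a, b in zip(v, address))
    return max(valid, key=matches)

# One mis-decoded blob color still resolves to the intended address:
corrected = verify_or_correct([(255, 255, 0), (255, 0, 255), (255, 255, 0)])
# corrected == ((255, 255, 0), (255, 0, 255), (0, 255, 255))
```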
- the processor 300 may return to block 605 to determine the patch address for another patch 124 , until each patch 124 associated with each base blob 200 has been assigned a patch address.
- the processor 300 may additionally validate the patch addresses by forming macro-patches and validating the hyper-addresses of each macro-patch. For example, FIG. 7 depicts a flowchart of an example method 700 of validating patch addresses.
- the processor 300 defines a macro-patch.
- the macro-patch may be an array or subset of the patches 124 in the test pattern 108 .
- the macro-patch has a predefined configuration, such as a two-by-two array, or other configuration in which the spatial relationship between patches 124 in the macro-patch is predetermined.
- the processor 300 determines a hyper-address for the macro-patch.
- the hyper-address includes respective patch addresses of the patches forming the macro-patch.
- the hyper-address for the macro-patch may be an ordered list of the patch addresses of the patches forming the macro-patch.
- the processor 300 may first order the patches into an ordered list. Since the patches themselves each have an orientation, the macro-patch may be oriented according to the orientations of the patches forming the macro-patch.
- the processor 300 may then select one of the patches as the first patch, according to a predefined criterion, and proceed to add patches to the list sequentially according to a predefined path between the patches of the macro-patch.
- the processor 300 may then define the ordered list of corresponding patch addresses of the patches to be the hyper-address.
- an example macro-patch 800 is depicted.
- the macro-patch 800 includes four patches, 804 - 1 , 804 - 2 , 804 - 3 , and 804 - 4 , arranged in a two-by-two array.
- other arrangements of patches 804 in the macro-patch 800 are contemplated.
- the macro-patch 800 may include a larger array of patches 804 , a line of patches 804 , or the like. Further, in some examples, different macro-patches may share one or more patches contained therein.
- each of the four patches 804 has a corresponding patch address, A 1 , A 2 , A 3 , and A 4 , respectively, defined by the blobs in the patch 804 .
- To generate a hyper-address 808 for the macro-patch 800, the processor 300 generates an ordered list of the patches 804. In the present example, the processor 300 begins at the top left patch, 804 - 1, and proceeds clockwise through the patches 804 in the two-by-two array. Accordingly, the ordered list of patches is [804 - 1, 804 - 2, 804 - 3, 804 - 4].
- the processor 300 may then generate a hyper-address 808 from the ordered list of patches 804 using the corresponding patch address for each patch 804 in the ordered list. Accordingly, the hyper-address 808 is [A 1 , A 2 , A 3 , A 4 ].
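A sketch of the hyper-address construction for a two-by-two macro-patch, with patch addresses abbreviated as stand-in strings rather than decoded color lists:

```python
def hyper_address(macro_patch):
    """Build the hyper-address from a dict of (col, row) -> patch address.

    The ordered list starts at the top-left patch and proceeds clockwise
    through the two-by-two array: (0,0) -> (1,0) -> (1,1) -> (0,1).
    """
    clockwise = [(0, 0), (1, 0), (1, 1), (0, 1)]
    return [macro_patch[pos] for pos in clockwise]

h = hyper_address({(0, 0): "A1", (1, 0): "A2", (1, 1): "A3", (0, 1): "A4"})
# h == ["A1", "A2", "A3", "A4"]
```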
- the processor 300 determines whether the hyper-address generated at block 710 is a valid hyper-address. For example, the processor 300 may compare the hyper-address generated at block 710 to a predefined list of valid hyper-addresses stored in the memory 304 .
- the predefined list of valid hyper-addresses includes hyper-addresses actually employed in the test pattern 108 , based on the input colors and arrangement of blobs (and therefore patches) in the test pattern 108 .
- the valid hyper-addresses are defined based on the predefined path through the patches in the macro-patch.
- the processor 300 may perform the verification of the hyper-addresses against the valid hyper-addresses based on a full matching, a partial matching, distance computation, or the like.
- the hyper-addresses for each macro-patch will similarly be unique.
- the test pattern 108 is preferably arranged such that the hyper-address for each macro-patch is unique. Uniqueness of the hyper-addresses would therefore still allow the patch addresses to be uniquely verified and located (i.e., based on their relationships to adjacent patches in a macro-patch) within the test pattern 108.
- the processor 300 proceeds to block 720 .
- the processor 300 validates each of the patch addresses which formed the hyper-address. That is, the processor 300 confirms that the patch address defined by the blobs in each of the patches is in fact the correct patch address for that patch.
- the processor 300 proceeds to block 725 .
- the processor 300 may make a prediction as to which hyper-address is the correct hyper-address for the macro-patch and may correct the patch addresses for the patches of the macro-patch, as appropriate.
- the processor 300 may determine that the fourth patch address should be the patch address defined in the valid hyper-address and may correct the fourth patch address accordingly.
- recursively grouping macro-patches and obtaining addresses for the grouped macro-patches may also allow for repetition of patch addresses and hyper-addresses and/or provide further confirmation or verification of the correct patch addresses and hyper-addresses.
- the processor 300 may subsequently use the patch location to determine a blob location for at least one detected blob 120 detected at block 410 .
- the processor 300 may determine a blob location for all the blobs 120 detected at block 410 , while in other examples, the processor 300 may determine a blob location for a subset of the blobs 120 detected at block 410 . The selection of the subset may be based, for example, on a spatial arrangement of the blobs 120 within the test pattern. That is, since each patch location is known, and since the blobs 120 are located at predetermined positions within its corresponding patch, the processor 300 may determine the blob location for each blob 120 .
- the processor 300 determines a calibration parameter for the projector 104 .
- the processor 300 uses the blob location and a detected attribute of at least one blob 120 which was detected at block 410 and located at block 425 .
- the processor 300 may determine the calibration parameter based on all of the blobs 120 in order to allow the calibration parameters to be better localized and more accurate across the test pattern and the projection area for the projector 104 .
- the calibration parameter may be a color or luminance of the projector 104 .
- the processor 300 may use the blob location to determine the input parameters for a given blob 120 and compare the input parameters to the corresponding detected attributes (e.g., color, luminance, geometric alignment) and compute a correction to allow the projector 104 to project the given blob 120 such that the detected attribute better approximates the desired target parameter.
- FIG. 9 depicts a flowchart of an example method 900 of determining calibration parameters for the projector 104 .
- the processor 300 selects a blob 120 of the test pattern 108 .
- the processor 300 may select a blob 120 for which a calibration parameter or compensation has not yet been computed.
- the processor 300 obtains the target attribute for the blob 120 selected at block 905 .
- the target attribute may be a target color or luminance, as defined by the input color or luminance of the blob 120 in the test pattern 108 , or a geometric alignment, as defined by the geometric properties of the test pattern 108 . That is, the processor 300 may use the blob location of the blob 120 within the test pattern 108 to identify the input attribute as the target attribute.
- the processor 300 obtains the detected attribute for the blob 120 selected at block 905 . That is, the processor 300 identifies the corresponding color or luminance of the blob 120 as detected by the camera 116 in the captured image representing the test pattern 108 as projected onto the surface 112 .
- the detected attribute may be sampled at the center of the blob 120 , or at a predefined point within the blob 120 (e.g., a predefined corner, etc.), while in other examples, the detected attribute may be an average of the detected attribute at each point, or a selected subset of points, across the blob 120 .
- the processor 300 computes calibration parameters for the selected blob 120 based on the target attribute(s) determined at block 910 and the detected attribute(s) determined at block 915 . That is, based on the differences between the input to the projector 104 and the detected output of the projection onto the surface 112 , the processor 300 may determine a compensation to adjust the input to the projector 104 to allow the detected output attribute (i.e., as projected onto the surface 112 ) to better approximate the target attribute. For example, the processor 300 may use standard radiometric or luminance compensation computations and/or geometric alignment computations, as will be understood by those of skill in the art, to define the calibration parameters for the blob 120 .
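As a simplified stand-in for the standard radiometric compensation computations referenced above, a per-channel gain mapping the detected output toward the target attribute can be sketched as follows (values are hypothetical):

```python
def channel_gains(target, detected):
    """Per-channel gain such that detected * gain approximates target."""
    return tuple(t / max(d, 1) for t, d in zip(target, detected))

def apply_gains(colour, gains):
    """Apply the compensation gains to an input color, clamped to 0-255."""
    return tuple(min(255, round(c * g)) for c, g in zip(colour, gains))

# A surface that absorbs red: a target white blob came back as (204, 250, 248),
# so subsequent inputs are boosted per channel to compensate.
gains = channel_gains((255, 255, 255), (204, 250, 248))
compensated = apply_gains((200, 100, 50), gains)
# compensated == (250, 102, 51)
```

A real implementation would also handle geometric alignment and luminance, but the target-versus-detected comparison follows the same pattern.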
- the processor 300 determines whether there are any further blobs 120 in the test pattern 108 for which the calibration parameters have not yet been computed. If the determination at block 925 is affirmative, the processor 300 returns to block 905 to select a subsequent blob 120 for which the calibration parameters have not yet been computed.
- the processor 300 proceeds to block 930 .
- the processor 300 smooths the calibration parameters of each of the blobs 120 over the projection area (i.e., over the area of the test pattern 108 ).
- the calibration parameters computed at block 920 are individually computed per blob 120 . Accordingly, adjacent blobs 120 may have different calibration parameters, which may cause abrupt and jarring changes between blobs 120 in the projection if applied per blob 120 .
- the test pattern 108 may not produce a calibration parameter for the negative spaces between blobs 120 . Accordingly, rather than simply directly applying the calibration parameter over the blob area of the given blob 120 , the processor 300 may designate the calibration parameter at a given point of the blob 120 (e.g., the calibration parameter applies at the center of the blob 120 ) for each of the blobs 120 in the test pattern 108 and apply a smoothing function to generate calibration parameters for the intermediary points between the given points of the blobs 120 .
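The smoothing step can be sketched in one dimension: each calibration parameter is pinned at its blob center, and intermediary pixels are linearly interpolated between neighboring centers. A real implementation would interpolate over the full two-dimensional pattern; the linear interpolant here is one simple choice of smoothing function.

```python
def smooth_scanline(centres, width):
    """Interpolate (x, value) samples at blob centres across `width` pixels."""
    centres = sorted(centres)
    out = []
    for x in range(width):
        if x <= centres[0][0]:
            out.append(centres[0][1])      # extend the first value leftward
        elif x >= centres[-1][0]:
            out.append(centres[-1][1])     # extend the last value rightward
        else:
            # find the bracketing pair of blob centres and interpolate
            for (x0, v0), (x1, v1) in zip(centres, centres[1:]):
                if x0 <= x <= x1:
                    t = (x - x0) / (x1 - x0)
                    out.append(v0 + t * (v1 - v0))
                    break
    return out

# Two blob centres at x=2 and x=6 with gains 1.0 and 2.0:
line = smooth_scanline([(2, 1.0), (6, 2.0)], width=9)
# line[4] == 1.5, midway between the two blob centres
```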
- the processor 300 applies the calibration parameters to calibrate the projector 104 . That is, during a subsequent projection operation, the processor 300 may receive input data representing an image or video to be projected by the projector 104 , apply the calibration parameters to the input data to generate calibrated input data, and control the light sources of the projector 104 to project the image or video in accordance with the calibrated input data. In other examples, the application of the calibration parameters may be applied to the input data to generate calibrated input data prior to being received at the projector 104 and/or the processor 300 . Thus, the projector 104 will project the image or video with the color, luminance, geometric alignment and/or other attributes adjusted to compensate for variations and imperfections in the surface 112 to allow the projection to better approximate the original input data.
- the patch addresses may be encoded simply based on the colors of the additional blobs in the patch, rather than based on an ordered list of the colors of the additional blobs in the patch.
- the test pattern may include greyscale blobs including a predefined number of grey levels (e.g., 3 grey levels).
- the grey blobs surrounding the white base blob may still encode the patch address, based on unique (unordered) combinations of the eight greyscale blobs.
- Such a test pattern may be advantageous, for example for applying only a luminance correction when a radiometric color compensation has already been performed against another projector.
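Under this greyscale variant, the number of distinct unordered patch addresses is the number of multisets of grey levels over the surrounding blobs. With the 3 grey levels and eight surrounding blobs mentioned above:

```python
import math

def unordered_addresses(levels, blobs):
    """Count multisets of size `blobs` drawn from `levels` grey values:
    combinations with repetition, C(levels + blobs - 1, blobs)."""
    return math.comb(levels + blobs - 1, blobs)

count = unordered_addresses(3, 8)
# count == 45 distinct unordered combinations
```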
- the camera 116 may additionally include automatic exposure and/or focus adjustment capabilities.
- For example, referring to FIG. 10 , a flowchart of an example method 1000 of automatically adjusting camera parameters is depicted.
- the projector 104 projects a test pattern, such as the test pattern 108 .
- the camera 116 captures an image of the test pattern, at a first camera parameter.
- the camera 116 may select a first exposure and/or a first focus at which to capture the image.
- the camera 116 selects a new camera parameter.
- the camera 116 may select a different exposure and/or focus at which to capture a subsequent image.
- the camera parameter may be selected for example from a predefined list of camera parameters to test.
- the camera 116 may only adjust one camera parameter at a time to better control the variables (i.e., only changing exposure or focus, but not both).
- the camera 116 may then return to block 1010 to capture a subsequent image of the test pattern 108 at the new camera parameter.
- the method 1000 proceeds to block 1020 .
- the camera 116 and/or the processor 300 and/or another suitable computing device selects an optimal camera parameter.
- the focus may be computed by using a mean-squared gradient (MSG) technique to compute the strength of the edges within the test pattern.
- the test pattern 108 may include blobs 120 with high contrast at all edges with the negative space between the blobs 120 , as based on the selection of primary, secondary, and tertiary colors of the blobs 120 . Accordingly, the focus of the camera 116 may be automatically selected based on the focus with the highest MSG.
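A sketch of the mean-squared gradient (MSG) sharpness metric: for each candidate focus, compute the mean of squared horizontal and vertical differences over the captured image, and keep the focus whose capture scores highest. The exact gradient formulation is an assumption of this sketch; the images below are small hypothetical luminance grids.

```python
def mean_squared_gradient(img):
    """Mean of squared forward differences; larger means sharper edges."""
    h, w = len(img), len(img[0])
    total, count = 0, 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                total += (img[y][x + 1] - img[y][x]) ** 2
                count += 1
            if y + 1 < h:
                total += (img[y + 1][x] - img[y][x]) ** 2
                count += 1
    return total / count

def best_focus(captures):
    """Pick the focus setting whose capture has the highest MSG."""
    return max(captures, key=lambda f: mean_squared_gradient(captures[f]))

sharp = [[0, 0, 255, 255]] * 4    # hard edge: one large gradient
blurry = [[0, 85, 170, 255]] * 4  # smeared edge: several small gradients
focus = best_focus({"f1": blurry, "f2": sharp})
# focus == "f2"
```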
- the exposure of the camera 116 may be computed based on the RGB component values.
- the optimal exposure of the camera 116 may be selected based on the RGB component values of the white blobs.
- the target or optimal exposure may result in RGB component values of the white blobs within a range of 245 to 255. In other examples, other ranges of acceptable RGB component values for the white blobs may be used.
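A sketch of exposure selection using the 245 to 255 target range above. The scoring rule (fraction of white-blob channel values falling inside the range) is an assumption of this sketch, and the detected white-blob values are hypothetical.

```python
TARGET_RANGE = (245, 255)  # acceptable RGB component values for white blobs

def white_score(white_rgbs):
    """Fraction of white-blob channel values inside the target range."""
    values = [v for rgb in white_rgbs for v in rgb]
    lo, hi = TARGET_RANGE
    return sum(lo <= v <= hi for v in values) / len(values)

def best_exposure(captures):
    """`captures` maps an exposure setting to detected white-blob RGBs."""
    return max(captures, key=lambda e: white_score(captures[e]))

exposure = best_exposure({
    "short": [(180, 182, 179), (176, 181, 178)],   # underexposed whites
    "medium": [(250, 248, 252), (247, 251, 249)],  # whites in the target range
})
# exposure == "medium"
```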
- the camera 116 and/or the processor 300 may obtain an image under the selected optimal camera parameters.
- said image may simply be retrieved.
- the camera 116 may capture a new image of the test pattern under the selected optimal camera parameters.
- the camera 116 may therefore capture at least one image with optimized focus, exposure and/or other camera parameters.
- the method 1000 may be performed during block 405 of the method 400 to allow the image with optimized focus and exposure to be used for the remainder of the calibration procedure.
- the test pattern may include other features to optimize camera exposure.
- the test pattern may include varying intensities of white (e.g., within a single blob, the outer edge may have 100% intensity while the center of the blob has 10% intensity, different blobs may have different intensities, the test pattern may include a 0 to 100% ramp-shaded region, or the like).
- the exposure of the camera may then be computed based on the relative number of 100% intense pixels, and/or the actual intensity of a designated exposure value (e.g., if the middle of the ramp-shaded region is supposed to be 50% intensity, and it is either higher or lower intensity, the corresponding exposure of the camera may be computed), or similar.
- the system 100 may additionally optimize the focus and/or other parameters of the projector 104 .
- In FIG. 11 , a flowchart of an example method 1100 of automatically adjusting the focus of a projector or display system is depicted.
- the projector 104 projects a test pattern, such as the test pattern 108 , at a first focus.
- the projector 104 may select a first focus at which to project the test pattern.
- the camera 116 captures an image of the test pattern.
- the projector 104 selects a new focus and/or other projector parameter.
- the projector 104 may select a different focus at which to project the test pattern.
- the focus and/or other projector parameter may be selected from a predefined list of projector parameters to test.
- the projector 104 may only adjust one projector parameter at a time, if multiple projector parameters are being tested.
- the projector 104 may then return to block 1105 to project the test pattern at the new projector parameter.
- the method 1100 proceeds to block 1120 .
- the projector 104 and/or another suitable computing device selects an optimal focus and/or other projector parameter.
- the focus of the projector may similarly be computed using the MSG to determine the strength of the edges within the test pattern.
- the test pattern 108 provides high contrast edges to allow the focus of the projector 104 to be similarly optimized.
- the camera 116 and/or the processor 300 may obtain an image with the selected optimal focus and/or projector parameter, for example, by retrieving such an image if it has already been captured, or by projecting, using the projector 104 , the test pattern with the optimal focus and/or other projected parameter, and capturing another image.
- the method 1100 may similarly be performed during block 405 of the method 400 to allow the image used for the remainder of the calibration procedure to be optimized for projector focus. In other examples, the method 1100 may be performed after performance of the method 400 , since the projector focus may not materially affect the determination of the calibration parameters as much.
- an example system and method of calibrating projectors employs a test pattern which is organized into patches including a white blob, and red, green and blue blobs to allow calibration parameters, including radiometric or color compensation, luminance correction, and spatial alignment to be computed by projecting a single test pattern.
- the colors of the additional blobs in each patch define a patch address that allows the patch to be located within the test pattern.
- the location of each patch may then be used to compare the target attribute based on the input at the given location, with the detected attribute, to compute calibration parameters to calibrate the projector and allow the projector to compensate the projected image according to the target surface on which an image or video is projected.
Abstract
Description
- The specification relates generally to display systems, and more particularly to systems and methods for calibrating display systems.
- Display systems, such as systems with one or more projectors, cameras, or display devices, may be employed to project videos and images on a variety of different surfaces. However, the surfaces may be uneven or have their own coloring and/or imperfections, or the display devices may be misaligned and/or otherwise introduce imperfections and distortions, which cause the projected image to appear distorted or otherwise inaccurate relative to the desired image.
- According to an aspect of the present specification, an example method of calibrating a display system includes: displaying a test pattern including a plurality of blobs; detecting one or more base blobs in the displayed test pattern; identifying, based on the detected base blobs, patches of the test pattern, wherein each patch comprises one of the base blobs and a subset of additional blobs detected in the displayed test pattern; determining a patch location for at least one patch within the test pattern based on the subset of the additional blobs in the patch to determine a blob location for at least one detected blob; determining a calibration parameter for the display system based on the blob location and a detected attribute of the at least one detected blob; and calibrating the display system using the calibration parameter.
- According to another aspect of the present specification, an example system includes: a display system configured to display a test pattern onto a surface, the test pattern including a plurality of blobs; a camera configured to capture an image of at least a portion of the displayed test pattern; and a processor configured to: detect one or more base blobs in the test pattern; identify, based on the detected base blobs, a patch of the test pattern, wherein the patch comprises one of the base blobs and a subset of additional blobs detected in the test pattern; determine a patch location for the patch within the test pattern based on the subset of the additional blobs in the patch to determine a blob location for at least one detected blob; determine a calibration parameter for the display system based on the blob location and a detected attribute of at least one detected blob; and calibrate the display system using the calibration parameters.
- According to another aspect of the present specification, another example method of calibrating a display system includes: displaying a test pattern including a plurality of blobs; identifying patches of the test pattern, wherein each patch comprises a subset of blobs detected in the displayed test pattern; determining a patch location for at least one patch within the test pattern based on the blobs in the patch; determining a blob location for the at least one detected blob in the patch based on the patch location; determining a calibration parameter for the display system based on the blob location and a detected attribute of the at least one detected blob; and calibrating the display system using the calibration parameter.
- Implementations are described with reference to the following figures, in which:
- FIG. 1 depicts a block diagram of an example system for calibrating a display system;
- FIG. 2 depicts a schematic diagram of an example patch of a test pattern for calibrating a display system;
- FIG. 3 depicts a block diagram of certain internal components of the projector of FIG. 1 ;
- FIG. 4 depicts a flowchart of an example method of calibrating a display system;
- FIG. 5 depicts a flowchart of an example method of detecting base blobs at block 415 of the method of FIG. 4 ;
- FIG. 6 depicts a flowchart of an example method of determining a patch location at block 425 of the method of FIG. 4 ;
- FIG. 7 depicts a flowchart of an example method of verifying a patch address using hyper-addressing;
- FIG. 8 depicts a schematic diagram of an example macro-patch of a test pattern for calibrating a display system;
- FIG. 9 depicts a flowchart of an example method of determining a calibration parameter at block 430 of the method of FIG. 4 ;
- FIG. 10 depicts a flowchart of an example method of adjusting a camera parameter; and
- FIG. 11 depicts a flowchart of an example method of adjusting a display system parameter.
- To compensate for the effects of the surface onto which projectors and/or other display devices display images and videos, the input data may be adjusted by the display system to calibrate the output to better approximate the target. To calibrate the display system and/or to align one or more projectors or display devices relative to each other, the display system may display a test pattern onto the surface. An image of the displayed test pattern, or at least a portion of the displayed test pattern, may be captured by a camera and the image analyzed to determine how to calibrate the display system. Often, in order to better differentiate between different shades and hues of various colors and/or to gather data required for modelling, multiple test patterns are required, which may result in the calibration process being time-consuming and inconvenient when initially setting up a projector.
- An example test pattern in accordance with the present specification includes a plurality of blobs arranged in patches, with each patch having a white base blob defining the patch, and red, green, and blue reference blobs. The arrangement of the blobs in the patch and the inclusion of white, red, green, and blue blobs allows a single-shot test pattern to be used for color compensation, geometric alignment, and luminance correction. In particular, to calibrate the display system, a processor may detect the white base blobs in the projected pattern, identify patches of the test pattern based on the base blobs, identify the reference blobs in the patches, and use the reference blobs to decode the colors of the additional blobs in each patch. Further, the test pattern may be arranged such that the colors of the additional blobs in a patch define a patch address that allows the processor to locate the patch within the test pattern. Thus, the processor may use the location of the patch to accurately compare a target attribute (i.e., the input to the test pattern) and a detected attribute (i.e., as displayed on the surface) of a given blob, and compensate or apply a calibration parameter as appropriate.
-
FIG. 1 depicts a system 100 for calibrating a display system, such as a projector 104. The present example will be described in conjunction with the projector 104; however, it will be understood that calibration of other suitable display systems and devices is also contemplated. The projector 104 is configured to project a test pattern 108 onto a surface 112. The system 100 may further include a camera 116 (e.g., an optical camera) to capture an image of the projected test pattern 108. The camera 116 may be a discrete component of the system 100, as shown, or the camera 116 may be integrated into the projector 104. The image of the projected test pattern 108 captured by the camera 116 may then be analyzed to identify calibration parameters for the projector 104 in order to calibrate the projector 104 with respect to the surface 112. For example, the calibration parameters may adjust the color, luminance, geometric alignment, distortion, color convergence, focus, or the like, to allow the projector 104 to subsequently project other images or videos with high clarity, contrast, and appropriate color onto the surface 112. - Accordingly, the
test pattern 108 includes features to facilitate the calibration of the projector 104 with respect to color, luminance, geometric alignment, distortion, focus, color convergence, and the like. The test pattern 108 may further allow the projector 104 to be calibrated for focus and exposure. In particular, the test pattern 108 is formed of a plurality of blobs 120, each of which is a region of a given color. The blobs 120 may be squares, circles, other geometric shapes, or other suitable forms. Further, each of the blobs 120 may have the same form as the other blobs 120, or the blobs 120 may have different forms. The blobs 120 of the test pattern 108 may be organized to form patches 124. Each patch 124 includes a subset of the blobs 120 and has certain properties for use in the calibration of the projector 104, as will be further described below. - For example, referring to
FIG. 2, an example patch 124 is depicted. In the present example, the patch 124 includes nine blobs 120, arranged in a three-by-three grid. In particular, the nine blobs 120 include a base blob 200, three reference blobs 204-1, 204-2, 204-3 (referred to herein generically as a reference blob 204 or collectively as reference blobs 204; this nomenclature is also used elsewhere herein), and five additional blobs 208-1, 208-2, 208-3, 208-4, 208-5. - The
base blob 200 is a blob which may be used to identify the patch 124 from the blobs detected in the projected test pattern 108. In particular, the blobs 120 forming the patch 124 have a certain predefined spatial relationship to the base blob 200. For example, given the base blob 200, the patch 124 may be defined as the base blob 200 and the eight nearest neighbor blobs to the base blob 200 (i.e., the four blobs directly adjacent to the base blob 200 and the four blobs diagonally adjacent to the base blob 200, such that the base blob 200 is in the center of the three-by-three array of blobs). That is, each patch 124 may include a base blob 200 at the center of the patch 124. In other examples, other spatial relationships of the base blob 200 and the patch 124 are contemplated. - Accordingly, since the
base blob 200 is used to identify the patch 124, the base blob 200 may be selected to have a distinctive color or other feature detectable in the projected test pattern 108, and consistently distinguishable from the other blobs 120 in the test pattern 108. In the present example, the base blob 200 is white in color, and hence will be the brightest or most intense detected blob, in particular amongst its eight nearest neighbors. In other examples, the base blob 200 may have a distinct shape, or may be additionally distinguished based on the surrounding blobs. - The reference blobs 204 are blobs in the patch which may be used as points of reference to orient the
patch 124 and/or as color references to enable color calibration of the projector 104, particularly on adverse surfaces, or for other reference purposes for further calibrating the projector 104. In the present example, the first reference blob 204-1 is located in the top left corner of the three-by-three array of blobs in the patch 124, the second reference blob 204-2 is located in the top right corner of the three-by-three array of blobs in the patch 124, and the third reference blob 204-3 is located at the bottom center of the three-by-three array of blobs in the patch 124. Further, in the present example, the first reference blob 204-1 is a red blob, the second reference blob 204-2 is a green blob, and the third reference blob 204-3 is a blue blob. In other examples, the reference blobs 204 may be selected to have other distinguishable colors or features. The combination of the designated locations and colors of each of the reference blobs 204 causes the patch 124 to be rotationally and reflectively asymmetric, and hence the reference blobs 204 may be used, for example, to determine the orientation of the test pattern (i.e., since the red reference blob 204-1 is in the top left corner, relative to the white blob 200, etc.), as well as whether the projector 104 is a front projector or a rear projector. - Further, since the reference blobs 204 in the present example cover the three primary colors of red, green, and blue, the reference blobs 204 may be used as references for color identification and correction. In particular, the red, green, and blue reference blobs 204 may be assumed to be the closest in hue to the original red, green, and blue colors, and only suffer from intensity issues. Accordingly, their appearance on the
surface 112 may be used as a reference for the appearance of other colors with red, green, and blue hues on the surface 112. - The five additional blobs 208 are other blobs which define a
patch address 212 for the patch 124. For example, the five additional blobs 208 may be colored or greyscale blobs. Preferably, the colors of the additional blobs 208 may be selected from a predefined list of blob colors. The blob colors may be, for example, the secondary and tertiary colors. Based on the spatial relationships of the additional blobs 208 to the reference blobs 204, the additional blobs 208 may be ordered to form an ordered list. For example, in the present example, the additional blob 208-1 adjacent the blue reference blob 204-3 and in the same column as the red reference blob 204-1 is designated as the first additional blob 208-1. The remaining additional blobs 208 may be sequentially ordered by moving clockwise (i.e., towards the red reference blob 204-1 and away from the blue reference blob 204-3) through the additional blobs 208 to achieve an ordered list. The colors C1, C2, C3, C4, and C5 of the additional blobs 208 in their given order define the patch address 212. In other examples, other predefined orders of the color blobs, as defined relative to the base blob 200 and the reference blobs 204, including other sufficiently large subsets of the color blobs to uniquely identify the patch 124, are also contemplated. - It will be appreciated that in other examples, the
patch 124 may include a different number of blobs 120, different configurations of the blobs 120, different colors or properties for the base blob 200 and the reference blobs 204, and the like. For example, the patch 124 could be an array of a different size, hexagonally tiled, or use an arrangement other than the base blob 200 and the three reference blobs 204. Additionally, in some examples, the patch 124 need not include the base blob 200 and/or the three reference blobs 204 and may instead be identifiable based on another arrangement and/or relationship between the blobs 120 forming the patch 124. - Referring to
FIG. 3, certain internal components of the projector 104 are depicted in greater detail. The projector 104 includes a controller 300 and a memory 304. The projector 104 may further include a communications interface 308 and, optionally, an input/output device (not shown). - The
controller 300 may be a processor such as a central processing unit (CPU), a microcontroller, a processing core, or similar. The controller 300 may include multiple cooperating processors. In some examples, the functionality implemented by the controller 300 may be implemented by one or more specially designated hardware and firmware components, such as a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), or the like. In some examples, the controller 300 may be a special purpose processor which may be implemented via the dedicated logic circuitry of an ASIC or FPGA to enhance the processing speed of the calibration operations discussed herein. - The
controller 300 is interconnected with a non-transitory computer-readable storage medium, such as the memory 304. The memory 304 may include a combination of volatile memory (e.g., random access memory or RAM) and non-volatile memory (e.g., read only memory or ROM, electrically erasable programmable read only memory or EEPROM, flash memory). The controller 300 and the memory 304 may comprise one or more integrated circuits. Some or all of the memory 304 may be integrated with the controller 300. - The
memory 304 stores a calibration application 316 which, when executed by the controller 300, configures the controller 300 and/or the projector 104 to perform the various functions discussed below in greater detail and related to the calibration operation of the projector 104. In other examples, the application 316 may be implemented as a suite of distinct applications. The memory 304 also stores a repository 320 configured to store calibration data for the calibration operation, including a list of blob colors used in the test pattern, a list of valid addresses used in the test pattern, a list of hyper-addresses used in the test pattern, and other rules and data for use in the calibration operation of the projector 104. - The
communications interface 308 is interconnected with the controller 300 and includes suitable hardware (e.g., transmitters, receivers, network interface controllers, and the like) allowing the projector 104 to communicate with other computing devices. The specific components of the communications interface 308 are selected based on the type of network or other links that the projector 104 is to communicate over. For example, the communications interface 308 may allow the projector 104 to receive images of the projected test pattern from the camera 116, in examples where the camera 116 is not integrated with the projector 104. - The operation of the
system 100 will now be described in greater detail with reference to FIG. 4. FIG. 4 depicts a flowchart of an example method 400 of calibrating a projector. The method 400 will be described in conjunction with its performance in the system 100, and in particular via execution of the application 316 by the processor 300, with reference to the components illustrated in FIGS. 1-3. In other examples, some or all of the method 400 may be performed by other suitable devices, such as a media server, or the like, or in other suitable systems. - At
block 405, the projector 104 projects the test pattern 108 onto the surface 112 and the camera 116 captures an image of the test pattern 108 as projected onto the surface 112. In particular, the image captured by the camera 116 represents the appearance of the test pattern 108 on the surface 112, including any geometric deformation, color distortion, and the like, which appear as a result of the properties of the surface 112. Additionally, at block 405, in examples where the camera 116 is distinct from the projector 104, the camera 116 may transmit the captured image to the projector 104 for further processing, and in particular to allow the processor 300 to compute calibration parameters for the projector 104. In further examples, rather than computing the calibration parameters at the projector 104, the calibration parameters may be computed at a separate computing device, such as a connected laptop or desktop computer, a server, or the like. Accordingly, in such examples, the camera 116 may transmit the captured image to the given computing device to compute the calibration parameters for the projector 104. - At
block 410, the processor 300 analyzes the captured image to detect the blobs 120 of the test pattern 108. The blobs 120 may be detected using standard computer vision techniques, using convolution, differential methods, local extrema, or the like. - At
block 415, the processor 300 detects one or more base blobs in the projected test pattern 108. In particular, the processor 300 may identify, from the blobs 120 detected at block 410, which blobs 120 satisfy the criteria of the base blobs and designate a subset of the blobs 120 as base blobs. For example, referring to FIG. 5, an example method 500 of identifying base blobs from the projected test pattern 108 is depicted. In particular, the method 500 will be described in conjunction with identifying base blobs in the test pattern 108 having patches 124, in particular, organized in the manner described in conjunction with FIG. 2. It will be understood that in other examples where the base blobs have other identifying characteristics (e.g., shape), other methods of identifying the base blobs are contemplated. - At
block 505, the processor 300 selects a blob 120 to analyze to determine whether or not it is a base blob. Accordingly, the processor 300 may select a blob 120 detected at block 410 which has not yet been validated or invalidated as a base blob. - At
block 510, the processor 300 identifies neighboring blobs 120 of the blob 120 selected at block 505. For example, when the base blobs are located in the center of a patch 124, the processor 300 may retrieve the eight nearest neighbors of the selected blob 120. Preferably, the test pattern 108 may be arranged such that adjacent blobs 120 are spaced apart by a predefined amount. For example, the space between adjacent blobs 120 may be about half the width of a blob 120. Accordingly, the processor 300 may look for blobs 120 detected at block 410 which are within 2.5 blob widths of the selected blob 120 to identify the neighbors of the selected blob 120.
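As a concrete illustration of this neighbor search, the sketch below (hypothetical helper names; the specification prescribes only the 2.5-width search radius, not an implementation) collects the detected blob centers within 2.5 blob widths of a selected blob:

```python
import math

def find_neighbors(blob_centers, selected, blob_width, radius_factor=2.5):
    """Return the centers of detected blobs within radius_factor blob
    widths of the selected blob's center (its candidate neighbors)."""
    sx, sy = selected
    radius = radius_factor * blob_width
    return [
        (x, y) for (x, y) in blob_centers
        if (x, y) != (sx, sy) and math.hypot(x - sx, y - sy) <= radius
    ]
```

With blobs spaced apart by half a blob width, adjacent centers sit 1.5 widths apart, so the eight nearest neighbors (including diagonal neighbors at about 2.12 widths) fall inside the radius while the next ring of blobs does not.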
- At block 515, the processor 300 selects one of the neighbors identified at block 510 for comparison against the blob 120 selected at block 505. In particular, the processor 300 may select a neighboring blob 120 which has not yet been compared to the selected blob. - At
block 520, the processor 300 compares the intensity of the selected neighboring blob 120 to the selected blob 120. For example, the processor 300 may sum the red, green, and blue (RGB) components of the selected neighboring blob 120 and the selected blob 120 and compare the two sums. To obtain the RGB component values, the processor 300 may sample the RGB components of the given blob 120 at its center, at a predefined set of coordinates within the blob 120, or the processor 300 may average the RGB components over the blob 120, or use other suitable methods of obtaining RGB component values over the blob 120. In particular, since the base blobs in the present example are white in color, the processor 300 may determine whether the intensity (i.e., the sum of the RGB components) of the selected blob 120 is greater than the intensity of the selected neighboring blob 120. - If the decision at
block 520 is negative, that is, the intensity of the selected blob 120 is not greater than the intensity of the selected neighboring blob 120, then the processor 300 proceeds to block 535. At block 535, the processor 300 invalidates the blob 120 selected at block 505 as a potential base blob. That is, since there is at least one neighboring blob 120 which is more intense than the selected blob 120, the processor 300 may deduce that the selected blob 120 is not a white blob, since neighboring blobs 120 are likely to suffer from similar color distortions, and hence the white blobs would remain more intense than their neighbors. Accordingly, the processor 300 may conclude that the selected blob 120 is not a base blob in the test pattern 108. The processor 300 may subsequently return to block 505 to continue selecting blobs 120 to identify the base blobs in the test pattern 108. - If the decision at
block 520 is affirmative, that is, the intensity of the selected blob 120 is greater than the intensity of the selected neighboring blob 120, then the processor 300 proceeds to block 525. At block 525, the processor 300 may invalidate the neighboring blob 120 selected at block 515 as a base blob, since it has at least one neighboring blob 120 (namely, the selected blob 120) which is more intense than it. Further, the processor 300 determines whether or not the selected blob 120 has more neighboring blobs 120. - If the decision at
block 525 is affirmative, that is, the selected blob 120 has more neighboring blobs 120 against which its intensity has not yet been compared, the processor 300 returns to block 515 to select a further neighboring blob 120 to compare intensities. - If the decision at
block 525 is negative, that is, the selected blob 120 has no more neighboring blobs 120 against which its intensity has not yet been compared, the processor 300 proceeds to block 530. At block 530, the processor 300 validates the blob 120 selected at block 505 as a base blob. That is, having determined that the selected blob 120 has a higher intensity than each of its neighbors, the processor 300 may therefore determine that the selected blob 120 is white in color and therefore a base blob 200. The processor 300 may then return to block 505 to continue assessing blobs 120 to find the base blobs 200 in the test pattern 108.
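The intensity comparisons of blocks 505-535 can be condensed into a short sketch (hypothetical data structures: a mapping of blob identifiers to sampled RGB triples, and a mapping of each blob to its identified neighbors), validating as base blobs only the blobs brighter than every one of their neighbors:

```python
def intensity(rgb):
    # Intensity as the sum of the R, G, and B components.
    return sum(rgb)

def find_base_blobs(blob_colors, neighbors_of):
    """Validate a blob as a base blob when its intensity exceeds the
    intensity of each of its neighbors; all other blobs are invalidated."""
    return [
        blob for blob, rgb in blob_colors.items()
        if all(intensity(rgb) > intensity(blob_colors[n])
               for n in neighbors_of[blob])
    ]
```

Because a base blob candidate is rejected as soon as any neighbor is brighter, a white blob surrounded by colored blobs survives the test even under uniform color distortion, which is the rationale given above.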
- Returning to FIG. 4, at block 420, after detecting the base blobs 200 in the test pattern 108, the processor 300 uses the base blobs 200 to identify the patches 124. For example, the processor 300 may define a patch 124 as a base blob 200 and the eight nearest neighboring blobs 120 of the base blob 200, for each base blob 200 identified at block 415. - In some examples, in addition to identifying
base blobs 200 as being the most intense blobs amongst their eight nearest neighbors, the processor 300 may additionally verify a candidate blob as a base blob 200 based on the arrangement of the other blobs 120 within the patch 124 defined by the candidate blob. For example, the processor 300 may identify, within the patch 124, a red blob, a green blob, and a blue blob. In other examples, the processor 300 may select different color reference blobs. The processor 300 may additionally verify that the red, green, and blue blobs are located in the patch 124 relative to the base blob 200 and to one another based on the predefined configurations of the patch 124. In some examples, if the red, green, and blue blobs identified in the patch 124 do not satisfy the predefined configuration of the patch 124, the processor 300 may determine that the candidate white blob is not in fact a valid base blob 200. - In further examples, the
processor 300 may omit block 415 entirely, and hence at block 420, may identify the patches solely on the basis of the blobs in the patch. Thus, for example, the processor may select a group of blobs in a sliding window (e.g., a 3×3 array, a 2×2 array, or otherwise selected based on the size and shape of an expected patch). The group of blobs may be compared against a list of valid patches, and the groups whose colors and positions match a valid patch may be identified as a patch. The processor 300 may reject groups of blobs which are made from partial groups of multiple patches. In particular, such an identification mechanism is possible when each patch in the test pattern is unique. - At
block 425, the processor 300 determines the patch location for at least one of the patches 124 identified at block 420. In some examples, the processor 300 may identify the patch location for all the patches 124 identified at block 420, while in other examples, the processor 300 may select a subset of patches 124 for which to identify the patch location. The subset may be selected, for example, based on the spatial arrangement of the patches 124 (e.g., the location of each patch in an alternating or checkerboard pattern or the like) or other suitable selection criteria. The processor 300 may use the eight nearest neighboring blobs 120 in the patch 124 to determine the patch location based on predefined configurations and properties of each patch 124. Alternately, the processor 300 may use a suitable subset of blobs 120 in the patch 124 which uniquely identify the patch 124 and allow the patch 124 to be located in the test pattern. For example, referring to FIG. 6, a flowchart of an example method 600 of determining a patch location is depicted. - At
block 605, the processor 300 selects a patch 124 to locate. In particular, the patch 124 may be selected based on its base blob 200. - At
block 610, the processor 300 may identify the reference blobs 204 in the patch 124. For example, when the reference blobs 204 are red, green, and blue blobs, the processor 300 may identify the reference blobs 204 by selecting the blobs 120 which have RGB components closest to a red hue, a green hue, and a blue hue, respectively. For example, the processor 300 may use a least-squares method, a cosine distance, or other suitable method to determine the distance of the color (i.e., based on its RGB components) of a given blob 120 to the RGB component values of a red blob. The blob 120 in the patch 124 which is closest to a red color may be determined by the processor 300 to be the red reference blob 204-1. Similarly, the processor 300 may identify the blobs 120 in the patch 124 which are closest to a green color and a blue color as the green reference blob 204-2 and the blue reference blob 204-3, respectively. Additionally, having identified the reference blobs 204, the processor 300 may also identify the remaining blobs 120 of the patch 124 as additional blobs 208.
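One way to realize this selection (a sketch using a least-squares distance; the cosine distance mentioned above would serve equally well) is to pick, for each of pure red, green, and blue, the patch blob whose sampled RGB triple is nearest:

```python
def color_distance(c1, c2):
    # Squared Euclidean (least-squares) distance between two RGB triples.
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

def identify_reference_blobs(patch_blobs):
    """Map each reference role to the blob in the patch (a dict of blob
    id to sampled RGB) closest to the corresponding primary color."""
    targets = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}
    return {
        role: min(patch_blobs,
                  key=lambda b: color_distance(patch_blobs[b], target))
        for role, target in targets.items()
    }
```

The remaining blobs of the patch, not assigned a reference role, would then be treated as the additional blobs.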
- At block 615, the processor 300 orders the additional blobs 208 into an ordered list. To do so, the processor 300 may first orient the patch 124 using the reference blobs 204. For example, the designated locations of the reference blobs 204 may cause the patch 124 to be rotationally and reflectively asymmetrical, and hence the processor 300 may use the red reference blob 204-1 to define the top left corner of the patch 124, and the green reference blob 204-2 to define the top right corner of the patch 124. The processor 300 may additionally confirm the orientation of the patch 124 by verifying that the blue reference blob 204-3 is in the bottom center. - Having oriented the
patch 124, the processor 300 may sort the additional blobs 208 based on their location in the patch 124 to identify their position in the ordered list. In particular, since the patch 124 is oriented, the additional blob 208 in the bottom left corner may be designated as the first additional blob 208-1. The additional blobs 208 may then be added to the ordered list proceeding in a clockwise direction, from the first additional blob 208-1. Thus, the additional blob 208 immediately above the first additional blob 208-1 may be designated as the second additional blob 208-2. In particular, the ordered list as generated from a specific, predefined orientation of the patch 124 allows the additional blobs 208 to encode a patch address, without risk of duplicates based on using the same additional blobs 208 in a different order for a different patch 124. Thus, the ordered list of additional blobs 208 in the example patch 124 depicted in FIG. 2 is [208-1, 208-2, 208-3, 208-4, 208-5].
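Once the patch is oriented, the clockwise ordering reduces to reading fixed grid positions. A minimal sketch, assuming the patch has been normalized into a 3×3 list of rows (row 0 on top, matching the layout of FIG. 2 with the red reference top left, green top right, blue bottom center, and the white base blob in the middle):

```python
# (row, column) positions of the five additional blobs in an oriented
# patch: bottom left first, then proceeding clockwise around the base blob.
ADDITIONAL_BLOB_ORDER = [(2, 0), (1, 0), (0, 1), (1, 2), (2, 2)]

def ordered_additional_colors(patch_grid):
    """Return the additional-blob colors of an oriented 3x3 patch in
    their predefined clockwise order (the basis of the patch address)."""
    return [patch_grid[r][c] for r, c in ADDITIONAL_BLOB_ORDER]
```

For the patch of FIG. 2, this yields the colors of blobs 208-1 through 208-5 in order, independent of how the patch happened to be rotated or mirrored in the captured image.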
- Having generated an ordered list of the additional blobs 208, the processor 300 may then determine the patch address 212 for the patch 124. In particular, at block 620, the processor 300 selects an additional blob 208 from the ordered list. The additional blob 208 may be the next additional blob 208 which has not yet been processed to generate the patch address 212. Thus, the processor 300 may begin with the first additional blob 208-1 at the first iteration of block 620. - At
block 625, the processor 300 determines the color of the additional blob 208 selected at block 620. In particular, the processor 300 predicts the intended target color (i.e., the input color) of the selected additional blob 208. That is, rather than simply taking the color of the additional blob 208 as projected onto the surface 112, the processor 300 may use the RGB component values of the white base blob 200 and the red, green, and blue reference blobs 204 to predict the input color for the selected additional blob 208. For example, if the blue component value of the additional blob 208 is similar to the blue component value of the blue reference blob 204-3, the processor 300 may predict that the input blue component value of the additional blob 208 is similar to the input blue component value of the blue reference blob 204-3, that is, 255. Similarly, the processor 300 may scale the other detected RGB component values of the additional blob 208 according to the detected RGB component values of the reference blobs 204 and the base blob 200 to predict the other input RGB component values of the additional blob 208 and hence decode the input blob color of the additional blob 208. More specifically, the prediction may include scaling and/or adjusting the values of the detected blobs 208 to adjust for variations in background or ambient light to allow decoding of the input blob color to be more accurate. - In some examples, the
processor 300 may additionally verify the predicted input color against a predefined list of blob colors used in the test pattern 108 stored in the memory 304. That is, rather than using combinations of any and/or all colors (i.e., all RGB component values), the test pattern 108 may contain blobs 120 with colors selected from the predefined list of blob colors. For example, the predefined list of blob colors may include the secondary and tertiary colors. In such examples, the processor 300 may verify the predicted input color and/or correct the blob color by selecting a new predicted input color based on the closest blob color on the predefined list of blob colors. For example, the processor 300 may use a least-squares computation to determine the blob color on the predefined list of blob colors which is closest to the predicted input color and designate the closest blob color as the new predicted input color. In some examples, the processor 300 may only designate the closest blob color as the new predicted input color if the distance to the new predicted input color is below a threshold distance. Thus, if the predicted input color is mid-way between two possible valid blob colors, the processor 300 may defer prediction of the blob color for a more holistic verification of the patch 124, as described below.
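This verification step amounts to snapping the predicted input color to the nearest entry on the predefined list of blob colors, and deferring the decision when no entry is clearly closest. A sketch (the threshold value and palette contents are assumptions for illustration, not values taken from the specification):

```python
def color_distance(c1, c2):
    # Squared Euclidean (least-squares) distance between two RGB triples.
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

def snap_to_blob_color(predicted, blob_colors, max_distance=3000):
    """Return the closest color on the predefined blob-color list, or
    None to defer the decision when even the best match is farther than
    max_distance."""
    best = min(blob_colors, key=lambda c: color_distance(predicted, c))
    return best if color_distance(predicted, best) <= max_distance else None
```

Returning None here corresponds to deferring the blob color to the holistic patch-address verification described below.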
- At block 630, the processor 300 adds the predicted input color for the blob 208 selected at block 620 to the patch address to build the patch address 212. In particular, since the additional blobs 208 are processed in the ordered list, the patch address 212 is similarly ordered by the associated colors of the additional blobs 208 in the ordered list. Thus, the patch address 212 of the example patch 124 depicted in FIG. 2 is [C1, C2, C3, C4, C5]. - At
block 635, the processor 300 determines whether there are any more additional blobs 208 in the ordered list. - If the decision at
block 635 is affirmative, the processor 300 returns to block 620 to select the next additional blob 208 in the ordered list and add its associated color to the patch address 212. - If the decision at
block 635 is negative, that is, all the additional blobs 208 in the ordered list have been processed and their corresponding associated colors added to the patch address, then the processor 300 proceeds to block 640. At block 640, the processor 300 uses the patch address 212 to determine the patch location of the patch 124. For example, the processor 300 may retrieve, from the memory 304, a predefined look-up table or other suitable data structure which defines a patch location associated with each patch address 212. For example, the patch location may be the coordinates of the patch 124 within the test pattern 108. The patch location may be expressed, for example, in terms of pixel coordinates of a given corner of the patch 124 (e.g., the top left corner), pixel coordinates of the center of the patch 124, coordinates relative to other patches 124 (e.g., designating the top left patch as 0,0), or other suitable means. In other examples, the processor 300 may directly compute the patch location of the patch 124 based on the patch address 212 and a predefined set of rules for computing the patch location.
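Resolving the decoded address to a location can then be a single table lookup. In the sketch below, the table contents and color names are hypothetical; an actual test pattern would store one entry per patch, with locations here expressed relative to other patches and the top left patch designated (0, 0):

```python
# Hypothetical look-up table: patch address (ordered tuple of additional
# blob colors) -> patch coordinates within the test pattern.
PATCH_LOCATIONS = {
    ("cyan", "magenta", "yellow", "orange", "purple"): (0, 0),
    ("magenta", "yellow", "cyan", "purple", "orange"): (0, 1),
}

def locate_patch(patch_address):
    """Return the patch location for a decoded patch address, or None
    when the address is not in the table."""
    return PATCH_LOCATIONS.get(tuple(patch_address))
```

A rule-based alternative, computing the coordinates directly from the address, would replace the dictionary with a function of the address.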
- In some examples, prior to comparing the patch address 212 to the look-up table to determine the patch location, the processor 300 may additionally verify the patch address against a predefined list of valid patch addresses stored in the memory 304. The predefined list of valid patch addresses includes the patch addresses 212 actually employed in the test pattern 108. That is, the list of valid patch addresses is generated based on the input colors to the test pattern 108. Accordingly, the test pattern 108 is preferably arranged such that each valid patch address appears only once on the list of valid patch addresses. The patch addresses may thus be uniquely verified, as well as used to uniquely locate the patch 124 within the test pattern 108. The processor 300 may perform verification of the patch address against the valid patch addresses, for example, based on a full matching, a partial matching, a distance computation, or other suitable means. When the determined patch address is not a valid patch address, the processor 300 may correct the patch address based on the list of valid patch addresses and, for example, the closest partial matching.
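The full- and partial-matching verification described above can be sketched as follows, correcting an invalid decoded address to the valid address that agrees with it in the most positions (ties are broken arbitrarily in this simplified version):

```python
def verify_patch_address(decoded, valid_addresses):
    """Return the decoded address if it appears on the list of valid
    patch addresses; otherwise correct it to the valid address with the
    closest partial match (most positions agreeing)."""
    if decoded in valid_addresses:
        return decoded
    return max(valid_addresses,
               key=lambda valid: sum(d == v for d, v in zip(decoded, valid)))
```

Because each valid address appears only once in the test pattern, a successful match both verifies the decoded colors and uniquely locates the patch.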
- Further, in some examples, additionally or alternately to verifying the blob color of each of the additional blobs 208 individually, the processor 300 may verify the patch address 212 against the predefined list of valid patch addresses stored in the memory 304 based, in part or in whole, on the predicted input colors of each of the additional blobs (i.e., as opposed to the blob colors as selected from the predefined list of blob colors). Thus, if a predicted input color of one of the additional blobs 208 is in between two (or more) possible blob colors on the predefined list, verification of the patch address 212 as a whole may allow the processor 300 to more accurately predict the correct blob color, particularly if one blob color corresponds with a valid patch address, while the other does not. - After determining the patch address for a given
patch 124, the processor 300 may return to block 605 to determine the patch address for another patch 124, until each patch 124 associated with each base blob 200 has been assigned a patch address. In some examples, after determining the patch addresses for each patch 124, the processor 300 may additionally validate the patch addresses by forming macro-patches and validating the hyper-addresses of each macro-patch. For example, FIG. 7 depicts a flowchart of an example method 700 of validating patch addresses. - At
block 705, the processor 300 defines a macro-patch. The macro-patch may be an array or subset of the patches 124 in the test pattern 108. Preferably, the macro-patch has a predefined configuration, such as a two-by-two array, or other configuration in which the spatial relationship between patches 124 in the macro-patch is predetermined. - At
block 710, the processor 300 determines a hyper-address for the macro-patch. The hyper-address includes respective patch addresses of the patches forming the macro-patch. In particular, the hyper-address for the macro-patch may be an ordered list of the patch addresses of the patches forming the macro-patch. Thus, to determine the hyper-address for the macro-patch, the processor 300 may first order the patches into an ordered list. Since each of the patches has an orientation, the macro-patch may be oriented according to the orientations of the patches forming the macro-patch. The processor 300 may then select one of the patches as the first patch, according to a predefined criterion, and proceed to add patches to the list sequentially according to a predefined path between the patches of the macro-patch. The processor 300 may then define the ordered list of corresponding patch addresses of the patches to be the hyper-address. - For example, referring to
FIG. 8, an example macro-patch 800 is depicted. The macro-patch 800 includes four patches, 804-1, 804-2, 804-3, and 804-4, arranged in a two-by-two array. In other examples, other arrangements of patches 804 in the macro-patch 800 are contemplated. For example, the macro-patch 800 may include a larger array of patches 804, a line of patches 804, or the like. Further, in some examples, different macro-patches may share one or more patches contained therein. - In the present example, each of the four patches 804 has a corresponding patch address, A1, A2, A3, and A4, respectively, defined by the blobs in the patch 804. To generate a hyper-
address 808 for the macro-patch 800, the processor 300 generates an ordered list of the patches 804. In the present example, the processor 300 begins at the top left patch, 804-1, and proceeds clockwise through the patches 804 in the two-by-two array. Accordingly, the ordered list of patches is [804-1, 804-2, 804-3, 804-4]. The processor 300 may then generate a hyper-address 808 from the ordered list of patches 804 using the corresponding patch address for each patch 804 in the ordered list. Accordingly, the hyper-address 808 is [A1, A2, A3, A4]. - Returning to
FIG. 7, at block 715, the processor 300 determines whether the hyper-address generated at block 710 is a valid hyper-address. For example, the processor 300 may compare the hyper-address generated at block 710 to a predefined list of valid hyper-addresses stored in the memory 304. The predefined list of valid hyper-addresses includes hyper-addresses actually employed in the test pattern 108, based on the input colors and arrangement of blobs (and therefore patches) in the test pattern 108. In particular, the valid hyper-addresses are defined based on the predefined path through the patches in the macro-patch. The processor 300 may perform the verification of the hyper-addresses against the valid hyper-addresses based on a full matching, a partial matching, a distance computation, or the like. - In examples where the
test pattern 108 has unique patch addresses for each patch, the hyper-addresses for each macro-patch will similarly be unique. However, in examples where patch addresses are re-used for different patches at different locations in the test pattern 108, the test pattern 108 is preferably arranged such that the hyper-address for each macro-patch is unique. Uniqueness of the hyper-addresses would therefore still allow each patch address to be uniquely verified and located (i.e., based on its relationship to adjacent patches in a macro-patch) within the test pattern 108. - If the determination at
block 715 is affirmative, the processor 300 proceeds to block 720. At block 720, the processor 300 validates each of the patch addresses which formed the hyper-address. That is, the processor 300 confirms that the patch address defined by the blobs in each of the patches is in fact the correct patch address for that patch. - If the determination at
block 715 is negative, that is, that the hyper-address is not a valid hyper-address, the processor 300 proceeds to block 725. At block 725, the processor 300 may make a prediction as to which hyper-address is the correct hyper-address for the macro-patch and may correct the patch addresses for the patches of the macro-patch, as appropriate. For example, if three patch addresses of the hyper-address match a valid hyper-address, and the fourth patch address is off by less than a threshold distance (e.g., as computed based on the differences in RGB components of the colors defining the patch address) from the fourth patch address of the valid hyper-address, then the processor 300 may determine that the fourth patch address should be the patch address defined in the valid hyper-address and may correct the fourth patch address accordingly. - As will be appreciated, other verification and matching scenarios and distance computations are also contemplated. Further, recursively grouping macro-patches and obtaining addresses for the grouped macro-patches may also allow for repetition of patch addresses and hyper-addresses and/or provide further confirmation or verification of the correct patch addresses and hyper-addresses.
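The hyper-address construction (block 710) and the validation with single-patch correction (blocks 715 through 725) described above may be sketched as follows. This is an illustrative sketch only: the Python data layout, the RGB-triple patch addresses, and the distance threshold are assumptions, not details from the description.

```python
# Sketch of hyper-address construction and validation for a two-by-two
# macro-patch. Patch addresses are modeled as RGB triples purely for
# illustration; the threshold and data layout are assumptions.

def hyper_address(macro_patch):
    """Order the four patches clockwise from the top left (block 710)."""
    clockwise = [(0, 0), (0, 1), (1, 1), (1, 0)]  # (row, col) positions
    return tuple(macro_patch[pos] for pos in clockwise)

def color_distance(a, b):
    """Distance based on differences in RGB components."""
    return sum(abs(x - y) for x, y in zip(a, b))

def validate(hyper, valid_list, threshold=100):
    """Blocks 715-725: accept a full match, or correct a single patch
    address that is within the distance threshold of a valid one."""
    if hyper in valid_list:
        return hyper
    for valid in valid_list:
        off = [i for i in range(4) if hyper[i] != valid[i]]
        if len(off) == 1 and color_distance(hyper[off[0]], valid[off[0]]) < threshold:
            return valid  # adopt the valid hyper-address as the correction
    return None  # no confident match found

A1, A2, A3, A4 = (255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 0)
patches = {(0, 0): A1, (0, 1): A2, (1, 1): A3, (1, 0): A4}
hyper = hyper_address(patches)
print(hyper == (A1, A2, A3, A4))                                # True
noisy = (A1, A2, A3, (255, 200, 0))                             # fourth address misread
print(validate(noisy, [(A1, A2, A3, A4)]) == (A1, A2, A3, A4))  # True
```

Returning None when no confident match exists leaves room for the other recovery strategies discussed above, such as recursive grouping of macro-patches.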
- Returning to
FIG. 4, after determining the patch location at block 425, and optionally verifying the patch location, the processor 300 may subsequently use the patch location to determine a blob location for at least one blob 120 detected at block 410. In some examples, the processor 300 may determine a blob location for all the blobs 120 detected at block 410, while in other examples, the processor 300 may determine a blob location for a subset of the blobs 120 detected at block 410. The selection of the subset may be based, for example, on a spatial arrangement of the blobs 120 within the test pattern. That is, since each patch location is known, and since the blobs 120 are located at predetermined positions within their corresponding patches, the processor 300 may determine the blob location for each blob 120. - At
block 430, the processor 300 determines a calibration parameter for the projector 104. In particular, the processor 300 uses the blob location and a detected attribute of at least one blob 120 which was detected at block 410 and located at block 425. Preferably, the processor 300 may determine the calibration parameter based on all of the blobs 120 in order to allow the calibration parameters to be better localized and more accurate across the test pattern and the projection area for the projector 104. For example, the calibration parameter may be a color or luminance of the projector 104. Generally, the processor 300 may use the blob location to determine the input parameters for a given blob 120 and compare the input parameters to the corresponding detected attributes (e.g., color, luminance, geometric alignment) and compute a correction to allow the projector 104 to project the given blob 120 such that the detected attribute better approximates the desired target parameter. -
FIG. 9 depicts a flowchart of an example method 900 of determining calibration parameters for the projector 104. - At
block 905, the processor 300 selects a blob 120 of the test pattern 108. In particular, the processor 300 may select a blob 120 for which a calibration parameter or compensation has not yet been computed. - At
block 910, the processor 300 obtains the target attribute for the blob 120 selected at block 905. For example, the target attribute may be a target color or luminance, as defined by the input color or luminance of the blob 120 in the test pattern 108, or a geometric alignment, as defined by the geometric properties of the test pattern 108. That is, the processor 300 may use the blob location of the blob 120 within the test pattern 108 to identify the input attribute as the target attribute. - At
block 915, the processor 300 obtains the detected attribute for the blob 120 selected at block 905. That is, the processor 300 identifies the corresponding color or luminance of the blob 120 as detected by the camera 116 in the captured image representing the test pattern 108 as projected onto the surface 112. In some examples, the detected attribute may be sampled at the center of the blob 120, or at a predefined point within the blob 120 (e.g., a predefined corner, etc.), while in other examples, the detected attribute may be an average of the detected attribute at each point, or a selected subset of points, across the blob 120. - At
block 920, the processor 300 computes calibration parameters for the selected blob 120 based on the target attribute(s) determined at block 910 and the detected attribute(s) determined at block 915. That is, based on the differences between the input to the projector 104 and the detected output of the projection onto the surface 112, the processor 300 may determine a compensation to adjust the input to the projector 104 to allow the detected output attribute (i.e., as projected onto the surface 112) to better approximate the target attribute. For example, the processor 300 may use standard radiometric or luminance compensation computations and/or geometric alignment computations, as will be understood by those of skill in the art, to define the calibration parameters for the blob 120. - At
block 925, the processor 300 determines whether there are any further blobs 120 in the test pattern 108 for which the calibration parameters have not yet been computed. If the determination at block 925 is affirmative, the processor 300 returns to block 905 to select a subsequent blob 120 for which the calibration parameters have not yet been computed. - If the determination at
block 925 is negative, that is, that the calibration parameters have been computed for each blob 120 in the test pattern 108, then the processor 300 proceeds to block 930. At block 930, the processor 300 smooths the calibration parameters of each of the blobs 120 over the projection area (i.e., over the area of the test pattern 108). In particular, the calibration parameters computed at block 920 are individually computed per blob 120. Accordingly, adjacent blobs 120 may have different calibration parameters, which may cause abrupt and jarring changes between blobs 120 in the projection if applied per blob 120. Further, since the blobs 120 may be spaced apart from one another, the test pattern 108 may not produce a calibration parameter for the negative spaces between blobs 120. Accordingly, rather than simply directly applying the calibration parameter over the blob area of the given blob 120, the processor 300 may designate the calibration parameter at a given point of the blob 120 (e.g., the calibration parameter applies at the center of the blob 120) for each of the blobs 120 in the test pattern 108 and apply a smoothing function to generate calibration parameters for the intermediary points between the given points of the blobs 120. - Returning to
FIG. 4, after determining the calibration parameters for the projector 104, the processor 300 proceeds to block 435. At block 435, the processor 300 applies the calibration parameters to calibrate the projector 104. That is, during a subsequent projection operation, the processor 300 may receive input data representing an image or video to be projected by the projector 104, apply the calibration parameters to the input data to generate calibrated input data, and control the light sources of the projector 104 to project the image or video in accordance with the calibrated input data. In other examples, the calibration parameters may be applied to the input data to generate calibrated input data prior to the input data being received at the projector 104 and/or the processor 300. Thus, the projector 104 will project the image or video with the color, luminance, geometric alignment and/or other attributes adjusted to compensate for variations and imperfections in the surface 112 to allow the projection to better approximate the original input data. - It will be appreciated that variations on the above method are also contemplated. For example, if a sufficient number of colors are employed, and/or if the test pattern includes sufficiently few blobs and/or patches, the patch addresses may be encoded simply based on the colors of the additional blobs in the patch, rather than based on an ordered list of the colors of the additional blobs in the patch.
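The computation at block 920 and the application at block 435 may be illustrated with a minimal sketch, assuming a simple per-channel gain model; actual radiometric or luminance compensation, as noted above, is more involved, and the values here are illustrative only.

```python
# A minimal sketch of blocks 920 and 435 under an assumed per-channel gain
# model: compute gains from target vs. detected RGB, then apply them to
# input data during a subsequent projection.

def channel_gains(target_rgb, detected_rgb):
    """Per-channel gains so the detected output approaches the target."""
    return tuple(t / d if d else 1.0 for t, d in zip(target_rgb, detected_rgb))

def calibrate_pixel(input_rgb, gains):
    """Apply gains to one input pixel, clamped to the displayable range."""
    return tuple(min(255, round(v * g)) for v, g in zip(input_rgb, gains))

# Toy surface that reflects red weakly and blue strongly.
gains = channel_gains((200, 200, 200), (160, 200, 250))
print(calibrate_pixel((160, 200, 250), gains))  # (200, 200, 200)
```

In this toy example the gains boost the red channel and attenuate the blue channel so that the projection onto the imperfect surface better approximates the target attribute.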
- In some examples, rather than employing color blobs, the test pattern may include greyscale blobs including a predefined number of grey levels (e.g., 3 grey levels). In such examples, the grey blobs surrounding the white base blob may still encode the patch address, based on unique (unordered) combinations of the eight greyscale blobs. Such a test pattern may be advantageous, for example for applying only a luminance correction when a radiometric color compensation has already been performed against another projector.
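The greyscale variant may be sketched by treating the patch address as an unordered combination (a multiset) of the eight blob grey levels. The Python enumeration below is illustrative; with three grey levels and eight blobs, forty-five distinct unordered addresses are available.

```python
# Sketch of the greyscale variant: the patch address is the unordered
# combination of the eight grey blobs' levels. Three levels are used, as in
# the example above; the encoding itself is an illustrative assumption.
from itertools import combinations_with_replacement

GREY_LEVELS = 3   # e.g., 3 grey levels
BLOBS = 8         # eight greyscale blobs surrounding the white base blob

def unordered_address(blob_levels):
    """Sorting the detected levels discards blob order, as described."""
    return tuple(sorted(blob_levels))

# Number of distinct unordered addresses available in this scheme.
addresses = set(combinations_with_replacement(range(GREY_LEVELS), BLOBS))
print(len(addresses))                               # 45
print(unordered_address([2, 0, 1, 1, 2, 0, 0, 1]))  # (0, 0, 0, 1, 1, 1, 2, 2)
```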
- In some examples, in order to best detect the
blobs 120 and obtain a most accurate representation of the projected test pattern, the camera 116 may additionally include automatic exposure and/or focus adjustment capabilities. For example, referring to FIG. 10, a flowchart of an example method 1000 of automatically adjusting camera parameters is depicted. - At
block 1005, the projector 104 projects a test pattern, such as the test pattern 108. - At
block 1010, the camera 116 captures an image of the test pattern, at a first camera parameter. For example, the camera 116 may select a first exposure and/or a first focus at which to capture the image. - At
block 1015, the camera 116 selects a new camera parameter. For example, the camera 116 may select a different exposure and/or focus at which to capture a subsequent image. The camera parameter may be selected, for example, from a predefined list of camera parameters to test. Preferably, the camera 116 may only adjust one camera parameter at a time to better control the variables (i.e., only changing exposure or focus, but not both). - The
camera 116 may then return to block 1010 to capture a subsequent image of the test pattern 108 at the new camera parameter. - If each of the camera parameters in the predefined list has been tested, the
method 1000 proceeds to block 1020. At block 1020, the camera 116 and/or the processor 300 and/or another suitable computing device selects an optimal camera parameter. - For example, the focus may be computed by using a mean-squared gradient (MSG) technique to compute the strength of the edges within the test pattern. Advantageously, the
test pattern 108 may include blobs 120 with high contrast at all edges with the negative space between the blobs 120, as based on the selection of primary, secondary, and tertiary colors of the blobs 120. Accordingly, the focus of the camera 116 may be automatically selected based on the focus with the highest MSG. - The exposure of the
camera 116 may be computed based on the RGB component values. In particular, since the test pattern 108 includes white blobs, the optimal exposure of the camera 116 may be selected based on the RGB component values of the white blobs. For example, the target or optimal exposure may result in RGB component values of the white blobs within a range of 245 to 255. In other examples, other ranges of acceptable RGB component values for the white blobs may be used. - At
block 1025, after having selected the optimal camera parameters, the camera 116 and/or the processor 300 may obtain an image under the selected optimal camera parameters. In some examples, when the camera 116 has already captured an image of the test pattern under the selected optimal camera parameters, said image may simply be retrieved. In other examples, the camera 116 may capture a new image of the test pattern under the selected optimal camera parameters. The camera 116 may therefore capture at least one image with optimized focus, exposure and/or other camera parameters. For example, the method 1000 may be performed during block 405 of the method 400 to allow the image with optimized focus and exposure to be used for the remainder of the calibration procedure. - In other examples, the test pattern may include other features to optimize camera exposure. For example, the test pattern may include varying intensities of white (e.g., within a single blob, the outer edge may have 100% intensity while the center of the blob has 10% intensity, different blobs may have different intensities, the test pattern may include a 0 to 100% ramp-shaded region, or the like). The exposure of the camera may then be computed based on the relative number of 100%-intensity pixels, and/or the actual intensity of a designated exposure value (e.g., if the middle of the ramp-shaded region is supposed to be 50% intensity, and it is either higher or lower intensity, the corresponding exposure of the camera may be computed), or similar.
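The exposure selection described above, capturing at candidate parameters and checking the white blobs' RGB components against the 245-to-255 range, may be sketched as follows. The capture() stand-in and the candidate exposure values are assumptions for illustration, not part of the described system.

```python
# Hedged sketch of the exposure search in method 1000: capture an image at
# each candidate exposure and keep the first whose white blobs land in the
# target RGB range (245 to 255 in the example above).

def white_in_range(white_rgbs, lo=245, hi=255):
    """True if every RGB component of every sampled white blob is in [lo, hi]."""
    return all(lo <= c <= hi for rgb in white_rgbs for c in rgb)

def pick_exposure(candidates, capture):
    """Return the first candidate exposure whose white blobs are in range."""
    for exposure in candidates:
        if white_in_range(capture(exposure)):
            return exposure
    return None  # no candidate satisfied the range

# Toy camera model: white blob brightness scales with exposure, capped at 255.
capture = lambda e: [tuple(min(255, round(c * e)) for c in (200, 205, 198))]
print(pick_exposure([1.0, 1.1, 1.25], capture))  # 1.25
```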
- In some examples, in addition to optimizing the camera parameter(s), the
system 100 may additionally optimize the focus and/or other parameters of the projector 104. For example, FIG. 11 depicts a flowchart of an example method 1100 of automatically adjusting the focus of a projector or display system. - At
block 1105, the projector 104 projects a test pattern, such as the test pattern 108, at a first focus. For example, the projector 104 may select a first focus at which to project the test pattern. - At
block 1110, the camera 116 captures an image of the test pattern. - At
block 1115, the projector 104 selects a new focus and/or other projector parameter. For example, the projector 104 may select a different focus at which to project the test pattern. The focus and/or other projector parameter may be selected from a predefined list of projector parameters to test. Preferably, the projector 104 may only adjust one projector parameter at a time, if multiple projector parameters are being tested. - The
projector 104 may then return to block 1105 to project the test pattern at the new projector parameter. - If each of the projector parameters or focus levels in the predefined list has been tested, the
method 1100 proceeds to block 1120. At block 1120, the projector 104 and/or another suitable computing device selects an optimal focus and/or other projector parameter. For example, the focus of the projector may similarly be computed using the MSG to determine the strength of the edges within the test pattern. As will be appreciated, the test pattern 108 provides high contrast edges to allow the focus of the projector 104 to be similarly optimized. - At
block 1125, after having selected the optimal focus and/or other projector parameter, the camera 116 and/or the processor 300 may obtain an image with the selected optimal focus and/or projector parameter, for example, by retrieving such an image if it has already been captured, or by projecting, using the projector 104, the test pattern with the optimal focus and/or other projector parameter, and capturing another image. The method 1100 may similarly be performed during block 405 of the method 400 to allow the image used for the remainder of the calibration procedure to be optimized for projector focus. In other examples, the method 1100 may be performed after performance of the method 400, since the projector focus may not materially affect the determination of the calibration parameters. - As described above, an example system and method of calibrating projectors employs a test pattern which is organized into patches including a white blob, and red, green and blue blobs to allow calibration parameters, including radiometric or color compensation, luminance correction, and spatial alignment to be computed by projecting a single test pattern. In particular, in order to do so, the colors of the additional blobs in each patch define a patch address that allows the patch to be located within the test pattern. The location of each patch may then be used to compare the target attribute based on the input at the given location, with the detected attribute, to compute calibration parameters to calibrate the projector and allow the projector to compensate the projected image according to the target surface on which an image or video is projected.
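The mean-squared gradient (MSG) measure used for both camera focus (block 1020) and projector focus (block 1120) may be sketched as follows. The description does not give an exact expression, so this is one common formulation, shown on a toy greyscale image; a sharper image yields a larger score, and the focus setting with the highest score is selected.

```python
# Illustrative mean-squared-gradient (MSG) sharpness score. The image is a
# list of rows of grey values; the exact formulation is an assumption.

def msg(image):
    """Mean of squared horizontal and vertical gradients of a grey image."""
    h, w = len(image), len(image[0])
    total = 0
    for y in range(h - 1):
        for x in range(w - 1):
            gx = image[y][x + 1] - image[y][x]  # horizontal gradient
            gy = image[y + 1][x] - image[y][x]  # vertical gradient
            total += gx * gx + gy * gy
    return total / ((h - 1) * (w - 1))

sharp = [[0, 255], [0, 255]]     # a hard, in-focus edge
blurry = [[64, 192], [64, 192]]  # the same edge, defocused
print(msg(sharp) > msg(blurry))  # True
```

The high-contrast edges of the test pattern 108, as noted above, make this score a strong discriminator between focus settings.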
- The scope of the claims should not be limited by the embodiments set forth in the above examples, but should be given the broadest interpretation consistent with the description as a whole.
Claims (21)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/206,219 US20230319246A1 (en) | 2022-01-31 | 2023-06-06 | Systems and methods for calibrating display systems |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/589,122 US11711500B1 (en) | 2022-01-31 | 2022-01-31 | Systems and methods for calibrating display systems |
US18/206,219 US20230319246A1 (en) | 2022-01-31 | 2023-06-06 | Systems and methods for calibrating display systems |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/589,122 Continuation US11711500B1 (en) | 2022-01-31 | 2022-01-31 | Systems and methods for calibrating display systems |
Publications (1)
Publication Number | Publication Date |
---|---|
US20230319246A1 true US20230319246A1 (en) | 2023-10-05 |
Family
ID=85150248
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/589,122 Active US11711500B1 (en) | 2022-01-31 | 2022-01-31 | Systems and methods for calibrating display systems |
US18/206,219 Pending US20230319246A1 (en) | 2022-01-31 | 2023-06-06 | Systems and methods for calibrating display systems |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/589,122 Active US11711500B1 (en) | 2022-01-31 | 2022-01-31 | Systems and methods for calibrating display systems |
Country Status (3)
Country | Link |
---|---|
US (2) | US11711500B1 (en) |
EP (1) | EP4220616A1 (en) |
CN (1) | CN116524832A (en) |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7942530B2 (en) * | 2006-10-31 | 2011-05-17 | The Regents Of The University Of California | Apparatus and method for self-calibrating multi-projector displays via plug and play projectors |
US8687068B2 (en) | 2010-09-19 | 2014-04-01 | Hewlett-Packard Development Company, L.P. | Pattern of color codes |
US8620026B2 (en) * | 2011-04-13 | 2013-12-31 | International Business Machines Corporation | Video-based detection of multiple object types under varying poses |
US10057556B2 (en) * | 2016-01-28 | 2018-08-21 | Disney Enterprises, Inc. | Projector optimization method and system |
US11303864B2 (en) * | 2020-09-09 | 2022-04-12 | Christie Digital Systems Usa, Inc. | System and method for projector alignment using detected image features |
US11482007B2 (en) * | 2021-02-10 | 2022-10-25 | Ford Global Technologies, Llc | Event-based vehicle pose estimation using monochromatic imaging |
Also Published As
Publication number | Publication date |
---|---|
US11711500B1 (en) | 2023-07-25 |
US20230247185A1 (en) | 2023-08-03 |
CN116524832A (en) | 2023-08-01 |
EP4220616A1 (en) | 2023-08-02 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: CHRISTIE DIGITAL SYSTEMS USA, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:POST, MATTHEW;VAN EERD, PETER ANTHONY;SIGNING DATES FROM 20220426 TO 20220429;REEL/FRAME:063865/0736 |