WO2008122145A1 - Adaptive image acquisition system and method - Google Patents

Adaptive image acquisition system and method

Info

Publication number
WO2008122145A1
Authority
WO
WIPO (PCT)
Prior art keywords
output pixel
output
pixels
content
pixel
Prior art date
Application number
PCT/CN2007/001112
Other languages
English (en)
Inventor
Charles Chia-Ming Chuang
Qing Guo
John Dick Gilbert
Original Assignee
N-Lighten Technologies
Priority date
Filing date
Publication date
Application filed by N-Lighten Technologies filed Critical N-Lighten Technologies
Priority to PCT/CN2007/001112 priority Critical patent/WO2008122145A1/fr
Publication of WO2008122145A1 publication Critical patent/WO2008122145A1/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/81 Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/61 Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"

Definitions

  • the present invention relates to image acquisition systems and, in particular, but not exclusively, provides a system and method for adapting the output image of a high resolution still camera or a video camera.
  • the precision of the optical components and the precision of the optical system assembly must be improved, and the optical distortions minimized.
  • optical technology does not evolve as fast as semiconductor technology. Precision optical parts with tight tolerances, especially aspheric lenses, are expensive to make. The optical surface requirement is now at 10 micrometers or better. As the optical components are assembled to form the optical system, the tolerances stack up.
  • An object of the present invention is, therefore, to provide an image acquisition system with adaptive means to correct for optical distortions, including geometry and brightness and contrast variations in real time.
  • Another object of the present invention is to provide an image acquisition system with adaptive methods to correct for optical distortion in real time.
  • A further object of this invention is to provide a method of video content authentication based on the video geometry and brightness and contrast correction data secured in the adaptive process.
  • Embodiments of the invention provide a system and method that enable the inexpensive altering of video content to correct for optical distortions in real time.
  • Embodiments do not require a frame buffer and there is no frame delay. Embodiments operate at the pixel clock rate and can be described as a pipeline for that reason. For every pixel in, there is a pixel out.
  • Embodiments of the invention work uniformly well for up-sampling or down-sampling. They do not assume a uniform spatial distribution of output pixels. Further, embodiments use only one significant mathematical operation, a divide. They do not use complex and expensive floating point calculations as conventional image adaptation systems do.
  • the method comprises: placing a test target in front of the camera; acquiring output pixel centroids for a plurality of output pixels; determining adjacent output pixels of a first output pixel from the plurality; determining an overlay of the first output pixel over virtual pixels corresponding to an input video, based on the acquired output pixel centroids and the adjacent output pixels; determining content of the first output pixel based on content of the overlaid virtual pixels; and outputting the determined content to a display device.
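As a rough illustration of that claimed sequence, the sketch below walks the steps on a toy frame in Python. All names are hypothetical, and the overlay/content step is collapsed to nearest-virtual-pixel sampling as a deliberate placeholder for the engines elaborated later in this description.

```python
# Hypothetical sketch of the claimed method on a toy frame. The "overlay"
# here is simplified to nearest-virtual-pixel sampling; the real overlay
# and content computation is sketched later in this document.

def adapt_frame(centroids, input_frame):
    """centroids[y][x] is the calibrated centroid of output pixel (x, y),
    expressed in virtual-pixel (input-frame) coordinates."""
    height = len(centroids)
    width = len(centroids[0])
    output = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            cx, cy = centroids[y][x]  # acquired during calibration
            vx = min(max(int(round(cx)), 0), len(input_frame[0]) - 1)
            vy = min(max(int(round(cy)), 0), len(input_frame) - 1)
            output[y][x] = input_frame[vy][vx]  # one pixel in, one pixel out
    return output

# Toy example: a slightly "barrel distorted" 2x2 centroid map.
centroids = [[(0.1, 0.0), (0.9, 0.1)],
             [(0.0, 0.9), (1.1, 1.0)]]
frame = [[10, 20], [30, 40]]
print(adapt_frame(centroids, frame))  # [[10, 20], [30, 40]]
```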
  • the system comprises an output pixel centroids engine, an adjacent output pixel engine communicatively coupled to the output pixel centroids engine, an output pixel overlay engine communicatively coupled to the adjacent output pixel engine, and an output pixel content engine communicatively coupled to the output pixel overlay engine.
  • the adjacent output pixel engine determines adjacent output pixels of a first output pixel from the plurality.
  • the output pixel overlay engine determines an overlay of the first output pixel over virtual pixels corresponding to an input video based on the acquired output pixel centroids and the adjacent output pixels.
  • the output pixel content engine determines content of the first output pixel based on content of the overlaid virtual pixels and outputs the determined content to a video display device.
  • the method comprises: placing a test target in front of the camera; acquiring output pixel centroids for a plurality of output pixels; embedding the output pixel centroid data and the brightness and contrast uniformity data within the video stream; and transmitting the stream to a video display device. The pixel correction process is then executed at the video display device end.
  • the pixel centroid data and brightness uniformity data of the camera can be merged with the pixel centroid data and brightness uniformity data of the display output device, using only one set of hardware to perform the operation.
  • FIG. 1 is a block diagram of a prior art video image acquisition system.
  • FIG. 2A is a block diagram illustrating an adaptive image acquisition system according to an embodiment of the invention.
  • FIG. 2B is a block diagram illustrating an adaptive image acquisition system according to another embodiment of the invention.
  • FIG. 3A is an image taken from a prior art image acquisition system.
  • FIG. 3B is an image taken with a wide angle adaptive image acquisition system.
  • FIG. 4A shows the checker board pattern in front of a light box used for geometry and brightness correction.
  • FIG. 4B shows the relative position of the camera in the calibration process.
  • FIG. 4C shows a typical calibration setting where the checker board pattern positioning is not exactly perpendicular to the camera.
  • FIG. 5A shows the barrel effect exhibited by a typical camera/lens system.
  • FIG. 5B shows the brightness fall-off exhibited by a typical camera/lens system.
  • FIG. 6 shows a representation of 4-pixel video data having red, green and blue contents, each having 8 bits.
  • FIG. 7 shows a representation of 4-pixel video data having red, green and blue contents, each having 8 bits, and two additional bit planes for the storage of brightness and contrast correction and geometry correction data.
  • FIG. 8 shows a block diagram illustrating an image processor.
  • FIG. 9 shows a greatly defocused image of the checker board pattern and a graphical method of determining the intersection between two diagonally disposed black squares.
  • FIG. 10 is a diagram illustrating the distorted image area and the corrected, no-distortion display area.
  • FIG. 11 is a diagram illustrating mapping of output pixels onto a virtual pixel grid of the image.
  • FIG. 12 is a diagram illustrating centroid input from the calibration process.
  • FIG. 13 is a diagram illustrating an output pixel corner calculation.
  • FIG. 14 is a diagram illustrating pixel sub-division overlay approximation.
  • FIG. 15 is a flowchart illustrating a method of adapting for optical distortions.
  • FIG. 16 is a diagram illustrating mapping of display output pixels onto a virtual pixel grid of the display, then remapped to the virtual pixel grid of the image capture device.
  • FIG. 1 is a block diagram of a conventional camera.
  • FIG. 2A is a block diagram illustrating an adaptive image acquisition system 100 according to an embodiment of the invention. Every image acquisition system has a sensor 130 for capturing images.
  • Typical sensors are CCD or CMOS 2-dimensional sensor arrays found in digital cameras.
  • Line scan cameras and image scanners use a one-dimensional sensor array with an optical lens, and are also subject to optical distortions.
  • Other image sensors, such as infrared, ultra-violet, or X-ray sensors, capture radiation that is not visible to the naked eye, but they have their own optical lens systems and optical distortions that can benefit from embodiments of the current invention.
  • the output of the image sensors typically requires white balance correction, gamma correction, color processing, and various other manipulations to shape it into a fair representation of the images captured.
  • the image processing 140 is typically done with an ASIC, but can also be performed by a microprocessor or a microcontroller that has image processing capabilities.
  • an adaptive image processor 110 is then used to apply optical distortion correction and brightness and contrast correction to the images before sending them out.
  • This image adaptation invention is fast enough for real time continuous image processing, or video processing. Therefore, in this patent application, image processing and video processing are used interchangeably, and image output and video output are also used interchangeably.
  • a memory block 120 communicatively coupled to the adaptive image processor 110 is used to store the adaptive parameters for geometry and brightness corrections.
  • these parameters can be compressed first, with the adaptive image processor performing the decompression before applying them.
  • the processed image is packaged by the output formatter 160 into different output formats before being shipped to the outside world.
  • the processed image is encoded into the proper analog format first.
  • the processed images are first compressed via MPEG-2, MPEG-4, JPEG-2000, or various other commercially available compression algorithms before being formatted into Ethernet packets. The Ethernet packets are then further packaged to fit transmission protocols such as wireless 802.11a, 802.11b, 802.11g, or wired 100M Ethernet.
  • the processed images can also be packaged for transmission over USB, Bluetooth, IEEE 1394, IrDA, HomePNA, HDMI, or other commercially available video transfer protocol standards.
  • Video output from the image acquisition system is fed into a typical display device 190, where the image is further formatted for specific display output device, such as CRT, LCD, or projection before it is physically shown on the screen.
  • a typical captured image may exhibit barrel distortions as shown in FIG. 10.
  • the centroids of the checker board intersections of the white and black blocks can be computed across the entire image space, the brightness of each block can be measured, and the resulting geometry and brightness/contrast distortion map is essentially a "finger print" of a specific image acquisition system, taking into account the distortions from lens imperfections, assembly tolerances, coating differences on the substrate, passivation differences on the sensors, and other fabrication/assembly induced errors.
  • the distortion centroids can be collected three times, once for red, once for green, and once for blue, in order to properly adjust for lateral color distortion, since light wavelength does affect the degree of distortion through an optical system.
  • a checker board pattern test target with a width of 25 inches, shown in FIG. 4A, can be fabricated with photolithography to good precision. Accuracy of 10 micro-inches over a total width of 25 inches is commercially available, which gives a dimensional accuracy of 0.00004%. For a 10 megapixel camera with a linear dimension of 2500 pixels, the checker board accuracy can be expressed as 0.1% of a pixel. As shown in FIG. 4C, the checker board test pattern does not have to be positioned exactly perpendicular to the camera. Offset angles can be calculated directly from the two sides a/b with great accuracy, and the camera offset angle removed from the calibration error. There is no requirement for precision mechanical alignment in the calibration process.
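The quoted figures follow directly from the ratios involved; a quick check, using only the values stated above:

```python
# Checking the tolerance arithmetic quoted above.
target_accuracy_in = 10e-6   # 10 micro-inches
target_width_in = 25.0       # 25-inch test target
pixels_across = 2500         # ~10-megapixel sensor, linear dimension

dimensional_accuracy = target_accuracy_in / target_width_in
print(f"{dimensional_accuracy:.7%}")  # 0.0000400%

pixel_pitch_in = target_width_in / pixels_across  # 0.01 inch per pixel
print(f"{target_accuracy_in / pixel_pitch_in:.1%} of a pixel")  # 0.1% of a pixel
```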
  • FIG. 9 shows a greatly defocused picture of the checker board pattern as captured by a camera under calibration and a graphical method of determining the intersection between two diagonally disposed black squares 905, and 906.
  • the sensor array 900 is superimposed on the image collected.
  • Line 901 is the right side edge of block 905. This edge can be determined either by calculating the inflection point of the white-to-black transition, or by calculating the midpoint of the white-to-black transition using linear extrapolation.
  • Line 902 is the left side edge of Block 906.
  • line 901 and line 902 should coincide.
  • the key feature of the checker board pattern is that, even with an imperfect optical system, imperfect iris or focus optimization, and an optical axis imperfectly aligned perpendicular to the calibration plate, the vertical transition line can be precisely calculated as the line equidistant from and parallel to line 901 and line 902.
  • line 903 is the lower side edge of the block 905, and line 904 is the upper side edge of the block 906.
  • the intersection of these two black blocks, 905 and 906, can be computed very precisely as the centroid of the square formed by lines 901, 902, 903, and 904. Camera calibration accuracy of 0.025 pixel or better can be achieved.
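A minimal sketch of the midpoint-based edge localization described above, with made-up sample values; the corner centroid then follows from the four recovered edge lines.

```python
# Illustrative sub-pixel edge localization on a defocused white-to-black
# transition, using the midpoint method described above. Sample values
# are invented for the example.

def edge_midpoint(samples):
    """Given sensor samples across a white-to-black transition, return
    the sub-pixel position where intensity crosses halfway between the
    white and black plateaus, using linear interpolation."""
    white, black = max(samples), min(samples)
    half = (white + black) / 2.0
    for i in range(len(samples) - 1):
        a, b = samples[i], samples[i + 1]
        if (a - half) * (b - half) <= 0 and a != b:
            return i + (a - half) / (a - b)  # fractional pixel position
    raise ValueError("no transition found")

# Defocused edge: a gradual ramp from white (240) down to black (16).
row = [240, 238, 220, 160, 90, 30, 18, 16]
print(edge_midpoint(row))  # ~3.46: the edge lies between samples 3 and 4

# For axis-aligned edges, the corner centroid of blocks 905/906 is then
# x = (x901 + x902) / 2 and y = (y903 + y904) / 2, i.e. the centroid of
# the square formed by the four recovered lines.
```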
  • This is the level of precision needed to characterize the optical distortion of the entire image capture system.
  • optical distortion is a smoothly varying function, so a checker board pattern of 40 to 100 blocks in one linear dimension is good enough to characterize the distortion of a 10 megapixel camera with 2500 pixels in one dimension. Test patterns similar in shape to a checker board have a similar effect; for example, a diamond shaped checker board pattern can also be used.
  • the checker board pattern test target can be fabricated on a Mylar film with black and transparent blocks using the same process for printed circuit boards. This test target can be mounted in front of a calibrated illumination source as shown in FIG. 4B.
  • colorimetry on each black and white square on the checker board test pattern can be measured using precision instruments.
  • An example of such an instrument is the CS-100A colorimeter made by Konica Minolta Corporation of Japan. Typical commercial instruments can measure brightness tolerances down to 0.2%.
  • a typical captured image may exhibit brightness gradients as shown in Fig. 5B. When compared with the luminance readings from an instrument, the brightness and contrast distortion map across the sensors can be recorded. This is a "finger print" or signature of a specific image acquisition system in a different dimension than the geometry.
  • a preferred embodiment of the present invention is to embed signature information in the video stream, and to perform adaptive image correction at the display end.
  • Fig. 2B is a block diagram illustrating this preferred embodiment.
  • the adaptive image processor 111 in the image acquisition device will embed signatures in the video stream, and an adaptive image processor 181 within a display 191 will perform the optical distortion correction.
  • Fig. 6 shows a representation of 4-pixel video data having red, green and blue contents, each having 8 bits.
  • One preferred embodiment for embedding optical distortion signatures for both geometry and/or brightness is shown in Fig. 7. Both signatures can be represented by their distortion differences with neighbors; this method cuts down on the storage requirement.
  • By inserting optical distortion signatures as brightness information in the bottom two bits, a target display device that is not capable of performing optical distortion correction will interpret them as video data of very low intensity, and the embedded signature will not be very visible on the display device.
  • A display device capable of performing optical distortion correction will transform the video back to virtually no distortion in both the geometry and brightness dimensions. For security applications this is significant, since object recognition can be performed more accurately and faster if all video images have no distortion. If the video information is transmitted without correction, it is also very difficult to tamper with, since both geometry and brightness will be changed before display, and any modification of the pre-corrected data will not fit the signature of the original image acquisition device and will stand out.
  • the entire optical signature must be embedded within each picture, or must have been transmitted once before as the signature of that specific camera.
  • the optical signature in its entirety does not have to be transmitted all at once. There are many ways to break up the signature for transmission over several video frames. There are also many methods of encoding the optical signature to make it even more difficult to reverse.
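A minimal sketch of the bottom-two-bits embedding suggested above; the packing order and payload layout are illustrative assumptions, not taken from the specification.

```python
# Carrying signature data in the two least significant bits of each
# 8-bit channel. A display that ignores the payload sees at most an
# error of 3/255 (~1.2%) per channel, which is barely visible.

def embed_two_bits(channel_value, signature_bits):
    """Replace the bottom two bits of an 8-bit channel value with two
    signature bits."""
    return (channel_value & 0xFC) | (signature_bits & 0x03)

def extract_two_bits(channel_value):
    return channel_value & 0x03

pixel = [200, 117, 64]        # R, G, B
payload = [0b10, 0b01, 0b11]  # six signature bits for this pixel
stego = [embed_two_bits(c, b) for c, b in zip(pixel, payload)]
print(stego)                                  # [202, 117, 67]
print([extract_two_bits(c) for c in stego])   # [2, 1, 3]
```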
  • the image processor 110 maps an original input video frame to an output video frame by matching output pixels on a screen to virtual pixels that correspond with pixels of the original input video frame.
  • the image processor 110 uses the memory 120 for storage of pixel centroid information and/or any operations that require temporary storage.
  • the image processor 110 can be implemented as software or circuitry, such as an Application Specific Integrated Circuit (ASIC).
  • the memory 120 can include Flash memory or other memory format.
  • the system 100 can include a plurality of image processors 110, one for each color (red, green, blue) and/or other content (e.g., brightness) that operate in parallel to adapt an image for output.
  • FIG. 8 is a block diagram illustrating the image processor 110 (in FIG. 2A).
  • the image processor 110 comprises an output pixel centroid engine 210, an adjacent output pixel engine 220, an output pixel overlay engine 230, and an output pixel content engine 240.
  • the output pixel centroid engine 210 reads out centroid locations into FIFO memories (e.g., internal to the image processor or elsewhere) corresponding to relevant lines of the input video. Only two lines plus three additional centroids need to be stored at a time, thereby further reducing memory requirements.
  • the adjacent output pixel engine 220 determines which output pixels are diagonally adjacent to the output pixel of interest by looking at diagonal adjacent output pixel memory locations in the FIFOs.
  • the output pixel overlay engine 230 determines which virtual pixels are overlaid by the output pixel.
  • the output pixel content engine 240 determines the content (e.g., color, brightness, etc.) of the output pixel based on the content of the overlaid virtual pixels.
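To illustrate the line-buffered centroid FIFOs described for these engines, the sketch below keeps three rows of centroids resident so the diagonal neighbors needed for corner calculation sit at fixed offsets; the buffer sizing and names are simplified assumptions, and the boundary "corner holder" insertions mentioned later are omitted.

```python
# Illustrative centroid line buffering: with the previous, current, and
# next centroid rows resident, the four diagonal neighbors of any
# centroid are always found at the same relative FIFO locations.

def diagonal_neighbors(lines, x, y):
    """lines maps row index -> list of centroids for the buffered rows
    y-1, y, y+1. Returns the four diagonal neighbors of (x, y), which is
    all the adjacent output pixel engine needs for corner calculation.
    Boundary handling (the 'corner holder' elements) is omitted here."""
    up, down = lines[y - 1], lines[y + 1]
    return (up[x - 1], up[x + 1], down[x - 1], down[x + 1])

# Three buffered rows of centroids (virtual-pixel coordinates).
buffered = {
    0: [(0.4, 0.5), (1.5, 0.4), (2.6, 0.5)],
    1: [(0.5, 1.5), (1.5, 1.5), (2.5, 1.5)],
    2: [(0.4, 2.5), (1.5, 2.6), (2.6, 2.5)],
}
print(diagonal_neighbors(buffered, x=1, y=1))
# ((0.4, 0.5), (2.6, 0.5), (0.4, 2.5), (2.6, 2.5))
```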
  • FIG. 10 is a diagram illustrating a corrected display area 730 and the video display of a camera prior to geometry correction 310.
  • the corrected viewing area 730 (also referred to herein as the virtual pixel grid) comprises an x by y array of virtual pixels that correspond to an input video frame (e.g., each line has x virtual pixels and there are y lines per frame).
  • the virtual pixels of the corrected viewing area 730 correspond exactly with the input video frame.
  • the viewing area can have a 16:9 aspect ratio with 1280 by 720 pixels or a 4:3 ratio with 640 by 480 pixels.
  • the number of actual output pixels matches that of the output resolution.
  • the number of virtual pixels matches the input resolution, i.e., the resolution of the input video frame: there is a 1:1 correspondence of virtual pixels to pixels of the input video frame.
  • at the corner of the viewing area 730 there may be several virtual pixels for every output pixel, and at the center of the viewing area 730 there may be a 1:1 correspondence (or less) of virtual pixels to output pixels.
  • the spatial location and size of output pixels differs from virtual pixels in a non-linear fashion.
  • Embodiments of the invention make the virtual pixels look like the input video by mapping the actual output pixels to the virtual pixels. This mapping is then used to resample the input video such that the display of the output pixels causes the virtual pixels to look identical to the input video pixels, i.e., the output video frame matches the input video frame so that the same image is viewed.
  • FIG. 11 is a diagram illustrating mapping of output pixels onto a virtual pixel grid 730 of the image 310.
  • the output pixel mapping is expressed in terms (or units) of virtual pixels.
  • the virtual pixel array 730 can be considered a conceptual grid.
  • the location of any output pixel within this grid 730 can be expressed in terms of horizontal and vertical grid coordinates.
  • the mapping description is independent of relative size differences, and can be specified to any amount of precision.
  • a first output pixel 410 is about four times as large as a second output pixel 420.
  • the first output pixel 410 mapping description can be x+2.5, y+1.5, which corresponds to the center of the first output pixel 410.
  • the mapping description of the output pixel 420 can be x+12.5, y+2.5.
  • the amount of information needed to locate output pixels within the virtual grid appears large. For example, if the virtual resolution is 1280x720, approximately 24 bits are needed to fully track each output pixel centroid. But the scheme easily lends itself to significant compaction (e.g., one method might be to fully locate the first pixel in each output line, and then locate the rest via incremental change).
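One way such incremental compaction could look, as a hedged sketch; the 1/16-pixel quantization and the delta layout are illustrative assumptions rather than the patent's encoding.

```python
# Possible centroid compaction: store the first centroid of a line in
# full, then the rest as small deltas. The deltas stay small because
# optical distortion varies smoothly across the sensor.

def compact_line(centroids, scale=16):
    """Quantize to 1/16 virtual pixel; emit the first centroid in full,
    then (dx, dy) increments for the rest."""
    q = [(round(x * scale), round(y * scale)) for x, y in centroids]
    first = q[0]
    deltas = [(b[0] - a[0], b[1] - a[1]) for a, b in zip(q, q[1:])]
    return first, deltas

def expand_line(first, deltas, scale=16):
    out = [first]
    for dx, dy in deltas:
        out.append((out[-1][0] + dx, out[-1][1] + dy))
    return [(x / scale, y / scale) for x, y in out]

line = [(0.50, 0.40), (1.55, 0.45), (2.62, 0.52)]
first, deltas = compact_line(line)
print(deltas)  # [(17, 1), (17, 1)] -- small, cheap to store
print(expand_line(first, deltas))  # original line, to 1/16-pixel precision
```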
  • the operation to determine pixel centroids performed by the imaging device can provide a separate guide for each pixel color. This allows for lateral color correction during the image adaptation.
  • FIG. 12 is a diagram illustrating centroid input from the calibration process.
  • Centroid acquisition is performed in real time, with each centroid retrieved in a pre-calculated format from external storage, e.g., from the memory 120.
  • the engine 210 stores the centroids in a set of line buffers.
  • These line buffers also represent a continuous FIFO (with special insertions for boundary conditions), with each incoming centroid entering at the start of the first FIFO, and looping from the end of each FIFO to the start of the subsequent one.
  • the purpose of the line buffer oriented centroid FIFOs is to facilitate simple location of adjacent centroids for corner determination by the adjacent output pixel engine 220. With the addition of an extra 'corner holder' element off the end of line buffers preceding and succeeding the line being operated on, corner centroids are always found in the same FIFO locations relative to the centroid being acted upon.
  • FIG. 13 is a diagram illustrating an output pixel corner calculation. Embodiments of the image adaptation system and method are dependent on a few assumptions:
  • the corner points for any output pixel quadrilateral approximation can be calculated by the adjacent output pixel engine 220 on the fly as each output pixel is prepared for content. This is accomplished by locating the halfway point 610 to the centers of all diagonal output pixels, e.g., the output pixel 620.
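A minimal sketch of that corner rule: each corner of the output pixel's quadrilateral approximation is the halfway point between its centroid and a diagonally adjacent centroid. Coordinates are in virtual-pixel units; the function names are illustrative.

```python
# Corner approximation as described above: each corner of an output
# pixel's quadrilateral is the halfway point between its own centroid
# and a diagonal neighbor's centroid.

def halfway(p, q):
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def pixel_corners(center, up_left, up_right, down_left, down_right):
    """Quadrilateral approximation of the output pixel around `center`,
    returned in winding order."""
    return (halfway(center, up_left), halfway(center, up_right),
            halfway(center, down_right), halfway(center, down_left))

c = (1.5, 1.5)
print(pixel_corners(c, (0.4, 0.5), (2.6, 0.5), (0.4, 2.5), (2.6, 2.5)))
# ((0.95, 1.0), (2.05, 1.0), (2.05, 2.0), (0.95, 2.0))
```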
  • the overlap with virtual pixels is established by the output pixel overlay engine 230. This in turn creates a direct (identical) overlap with the video input.
  • each upcoming output pixel's approximation corners could be calculated one or more pixel clocks ahead by the adjacent output pixel engine 220.
  • content determination can be calculated by the output pixel content engine 240 using well-established re-sampling techniques.
  • Variations in output pixel size/density across the viewing area 310 mean some regions will be up-sampled, and others down-sampled. This may require addition of filtering functions (e.g. smoothing, etc.). The filtering needed is dependent on the degree of optical distortion. The optical distortions introduced also provide some unique opportunities for improving the re-sampling. For example, in some regions of the screen 730, the output pixels will be sparse relative to the virtual pixels, while in others the relationship will be the other way around. This means that variations on the re-sampling algorithm(s) chosen are possible. The information is also present to easily calculate the actual area an output pixel covers within each virtual pixel (since the corners are known). Variations of the re-sampling algorithm(s) used could include weightings by 'virtual' pixel partial area coverage, as will be discussed further below.
  • FIG. 14 is a diagram illustrating pixel sub-division overlay approximation.
  • one possible algorithm for determining content is to approximate the area covered by an output pixel across applicable virtual pixels, calculating the content value of the output pixel based on weighted values associated with each virtual pixel overlap.
  • In order to simplify hardware implementation, the output pixel overlay engine 230 determines overlap through finite sub-division of the virtual pixel grid 310 (e.g., into a four by four subgrid, or any other sub-division, for each virtual pixel), and approximates the area covered by an output pixel by the number of sub-divisions overlaid.
  • Overlay calculations by the output pixel overlay engine 230 can be simplified by taking advantage of some sub-sampling properties.
  • the output pixel content engine 240 determines the content of the output pixel by multiplying the content of each virtual pixel by the number of associated sub-divisions overlaid, adding the results together, and then dividing by the total number of overlaid sub-divisions.
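Putting the sub-division overlay and the single divide together, a hedged sketch: the 4x4 subgrid, the subcell-center test points, and the convexity assumption on the quadrilateral are illustrative choices, not the specification's exact hardware scheme.

```python
# Sub-division overlay plus the single-divide content calculation
# described above, on a tiny 2x2 virtual pixel grid.

def inside_convex_quad(px, py, quad):
    """True if (px, py) lies inside the convex quadrilateral `quad`
    (corners in consistent winding order), via cross-product signs."""
    signs = []
    for i in range(4):
        x1, y1 = quad[i]
        x2, y2 = quad[(i + 1) % 4]
        signs.append((x2 - x1) * (py - y1) - (y2 - y1) * (px - x1))
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

def output_pixel_content(quad, virtual, sub=4):
    """Weighted average of the virtual pixels overlaid by `quad`:
    multiply each virtual pixel's content by the number of its subcells
    covered, accumulate, and finish with one divide."""
    total, covered = 0, 0
    for vy in range(len(virtual)):
        for vx in range(len(virtual[0])):
            hits = 0
            for sy in range(sub):
                for sx in range(sub):
                    cx = vx + (sx + 0.5) / sub  # subcell center
                    cy = vy + (sy + 0.5) / sub
                    if inside_convex_quad(cx, cy, quad):
                        hits += 1
            total += virtual[vy][vx] * hits
            covered += hits
    return total // covered if covered else 0  # the single divide

virtual = [[100, 200], [50, 250]]                        # 2x2 virtual grid
quad = ((0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5))  # output pixel quad
print(output_pixel_content(quad, virtual))  # 150, the average of all four
```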
  • the output pixel content engine 240 then outputs the content determination to a light engine for displaying the content determination.
  • FIG. 15 is a flowchart illustrating a method 800 of adapting for optical distortions.
  • the image processor 110 implements the method 800.
  • the image processor 110 or a plurality of image processors 110 implement a plurality of instances of the method 800 (e.g., one for each color of red, green and blue).
  • output pixel centroids are acquired (810) by reading them from memory into FIFOs (e.g., three rows maximum at a time).
  • the diagonally adjacent output pixels to an output pixel of interest are determined (820) by looking at the diagonally adjacent memory locations in the FIFOs. The halfway point between diagonally adjacent pixels and the pixel of interest is then determined (830).
  • An overlay is then determined (840) of the output pixel over virtual pixels and output pixel content determined (850) based on the overlay.
  • the determined output pixel content can then be outputted to a light engine for projection onto a display.
  • the method 800 then repeats for additional output pixels until content for all output pixels is determined (850).
  • the pixel remapping process is a single pass process. Note also that the pixel remapping process does not require information on the location of the optical axis.
  • the remapping illustrated in FIG. 16 can incorporate a display geometry correction of [X+3.5, Y+1.5] on top of the image acquisition geometry correction of [X+2.5, Y+1.5], and concatenate them into [X+6, Y+3].
  • the final centroid is point 430.
  • The concatenated centroid map can be computed ahead of time.
  • The brightness and contrast distortion correction map can also be concatenated.
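A minimal sketch of that concatenation, assuming the correction maps are stored as per-pixel offsets that simply add, per the [X+2.5, Y+1.5] plus [X+3.5, Y+1.5] giving [X+6, Y+3] example above.

```python
# Concatenating the acquisition and display correction maps ahead of
# time, so a single set of hardware applies both corrections in one pass.

def concatenate(camera_offsets, display_offsets):
    """Merge two centroid offset maps into one by adding the per-pixel
    offsets element-wise."""
    return [
        [(cx + dx, cy + dy) for (cx, cy), (dx, dy) in zip(cam_row, disp_row)]
        for cam_row, disp_row in zip(camera_offsets, display_offsets)
    ]

camera = [[(2.5, 1.5)]]   # image acquisition geometry correction
display = [[(3.5, 1.5)]]  # display geometry correction
print(concatenate(camera, display))  # [[(6.0, 3.0)]]
```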

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

A system and method for correcting optical distortions in an image acquisition system by analyzing and mapping the system and adjusting the content of the output pixels. The correction can be performed either at the camera end or at the receiving display end.
PCT/CN2007/001112 2007-04-05 2007-04-05 Adaptive image acquisition system and method WO2008122145A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2007/001112 WO2008122145A1 (fr) 2007-04-05 2007-04-05 Adaptive image acquisition system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2007/001112 WO2008122145A1 (fr) 2007-04-05 2007-04-05 Adaptive image acquisition system and method

Publications (1)

Publication Number Publication Date
WO2008122145A1 (fr) 2008-10-16

Family

ID=39830445

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2007/001112 WO2008122145A1 (fr) 2007-04-05 2007-04-05 Adaptive image acquisition system and method

Country Status (1)

Country Link
WO (1) WO2008122145A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5818527A (en) * 1994-12-21 1998-10-06 Olympus Optical Co., Ltd. Image processor for correcting distortion of central portion of image and preventing marginal portion of the image from protruding
US20050207671A1 (en) * 2003-04-11 2005-09-22 Tadashi Saito Data presentation device
US20070030452A1 (en) * 2005-08-08 2007-02-08 N-Lighten Technologies Image adaptation system and method


Similar Documents

Publication Publication Date Title
US20080002041A1 (en) Adaptive image acquisition system and method
US11570423B2 (en) System and methods for calibration of an array camera
EP2589226B1 (fr) Image capture using luminance and chrominance sensors
JP4699995B2 (ja) Compound-eye imaging device and imaging method
JP5535431B2 (ja) System and method for automatic calibration and correction of display shape and color
JP5241700B2 (ja) Imaging device with improved image quality
US20080062164A1 (en) System and method for automated calibration and correction of display geometry and color
CN102227746A (zh) Stereoscopic image processing device, method, recording medium, and stereoscopic imaging apparatus
KR20100051056A (ko) Image processing apparatus, image processing method, program, and imaging apparatus
TWI599809B (zh) Lens module array, image sensing device and digital zoom image fusion method
WO2022100668A1 (fr) Temperature measurement method, apparatus and system, storage medium, and program product
TW200841702A (en) Adaptive image acquisition system and method
JP2010272957A (ja) Image correction device and program therefor
WO2008122145A1 (fr) Adaptive image acquisition system and method
JP2004007213A (ja) Digital three-dimensional model imaging device
JP4542821B2 (ja) Image processing method, image processing apparatus, and image processing program
KR20070070669A (ko) Image processor, lens shading correction apparatus and method thereof
JP2022042408A (ja) Information processing device
JP2000172846A (ja) Image processing device

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 07720685

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 07720685

Country of ref document: EP

Kind code of ref document: A1