WO2019225255A1 - Image correction device, image correction method, and image correction program - Google Patents

Image correction device, image correction method, and image correction program

Info

Publication number
WO2019225255A1
Authority
WO
WIPO (PCT)
Prior art keywords
imaging
moving image
image data
imaging device
data obtained
Prior art date
Application number
PCT/JP2019/016999
Other languages
French (fr)
Japanese (ja)
Inventor
入江 公祐
田中 康一
内田 亮宏
慎也 藤原
伸一郎 藤木
Original Assignee
富士フイルム株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 富士フイルム株式会社 (FUJIFILM Corporation)
Priority to JP2020521117A (patent JP6833110B2)
Publication of WO2019225255A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/13Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with multiple sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules

Definitions

  • the present disclosure relates to an image correction apparatus, an image correction method, and an image correction program.
  • a technique is disclosed in which, among a plurality of imaging devices that image a common subject in synchronization, the imaging device serving as the master unit for processing that links the imaging conditions of the devices is selected based on the state of each imaging device with respect to imaging of the common subject (see Patent Document 1).
  • in this technique, the imaging conditions of the plurality of imaging devices are determined based on master-unit-related information acquired when the selected master imaging device captures the common subject.
  • an auxiliary imaging device is also disclosed that acquires the imaging timing of a main imaging device, which images a subject in response to an operator's operation to generate a captured image, and that images the subject based on the acquired imaging timing to generate its own captured image (see Patent Document 2).
  • further, a technique is disclosed for determining a shooting condition, specified by shooting parameters, based on a target noise amount (see Patent Document 3).
  • it is common practice to image a common subject with each of a plurality of imaging devices and to combine the moving images obtained by the devices into a single moving image.
  • the plurality of imaging devices may differ in the optical performance of their imaging lenses, for example because the devices were manufactured in different eras. In this case, the difference in image quality between the moving images obtained by the devices becomes relatively large, and the single moving image obtained by combining them lacks a sense of unity.
  • although the techniques described in Patent Documents 1 to 3 consider determining imaging conditions at the time of imaging, there are cases where no imaging conditions can be determined that reduce the difference in image quality across imaging devices whose imaging lenses differ in optical performance. In such cases, the above problem cannot be solved.
  • the present disclosure has been made in view of the above circumstances, and an object thereof is to provide an image correction apparatus, an image correction method, and an image correction program that can reduce the difference in image quality between moving images obtained by imaging by each of a plurality of imaging devices.
  • to achieve the above object, an image correction apparatus of the present disclosure includes: an acquisition unit that acquires moving image data obtained by imaging by each of a plurality of imaging devices; a determination unit that determines whether the moving image data is moving image data obtained by imaging by a first imaging device or moving image data obtained by imaging by a second imaging device; and a correction unit that, when the moving image data is determined to be moving image data obtained by imaging by the second imaging device, corrects that moving image data to match the image quality of the moving image data obtained by imaging by the first imaging device, using lens information data on the optical performance of the imaging lens of the first imaging device and lens information data on the optical performance of the imaging lens of the second imaging device.
  • the lens information data may include at least one of peripheral light reduction, imaging lens resolution, chromatic aberration amount, and color fog amount.
  • in the image correction apparatus, at least one of the peripheral light reduction, imaging lens resolution, chromatic aberration amount, and color fog amount in the lens information data of the first imaging device may indicate lower optical performance than the corresponding value for the second imaging device.
  • the correction by the correction unit on the moving image data obtained by imaging by the second imaging device may be performed while the moving image is being captured.
  • the image correction apparatus may further include a display control unit that performs control to display, on the display unit, the moving image indicated by the moving image data corrected by the correction unit during imaging of the moving image.
  • when trimming processing is performed on the moving image data obtained by imaging by the second imaging device, the correction unit may correct the moving image data after the trimming processing.
  • the image correction method of the present disclosure acquires moving image data obtained by imaging by each of a plurality of imaging devices; determines whether the moving image data is moving image data obtained by imaging by the first imaging device or moving image data obtained by imaging by the second imaging device; and, when the moving image data is determined to be moving image data obtained by imaging by the second imaging device, corrects that moving image data to match the image quality of the moving image data obtained by imaging by the first imaging device, using the lens information data on the optical performance of the imaging lens of the first imaging device and the lens information data on the optical performance of the imaging lens of the second imaging device. A computer executes this processing.
  • the image correction program of the present disclosure is a program for causing a computer to execute processing that: acquires moving image data obtained by imaging by each of a plurality of imaging devices; determines whether the moving image data is moving image data obtained by imaging by the first imaging device or moving image data obtained by imaging by the second imaging device; and, when the moving image data is determined to be moving image data obtained by imaging by the second imaging device, corrects that moving image data to match the image quality of the moving image data obtained by imaging by the first imaging device, using the lens information data on the optical performance of the imaging lens of the first imaging device and the lens information data on the optical performance of the imaging lens of the second imaging device.
  • the imaging system 10 includes a first imaging device 12 and second imaging devices 14A and 14B that image a subject S, and an image correction device 16.
  • the first imaging device 12 and the second imaging device 14A are connected to the image correction device 16 by wire, and the second imaging device 14B is connected to the image correction device 16 wirelessly.
  • the moving image data obtained by imaging by each of the first imaging device 12 and the second imaging devices 14A and 14B is output to the image correction device 16.
  • the imaging lens of the first imaging device 12 is an older lens than the imaging lenses of the second imaging devices 14A and 14B, and is also referred to as an old lens.
  • the imaging lens of the first imaging device 12 is a lens manufactured before 1997.
  • the imaging lenses of the second imaging devices 14A and 14B are relatively new lenses.
  • lens information data L1 related to the optical performance of the imaging lens of the first imaging device 12 is stored in the storage unit of the first imaging device 12.
  • lens information data L2A and L2B relating to the optical performance of the imaging lenses of the second imaging devices 14A and 14B are stored in the storage units of the second imaging devices 14A and 14B.
  • the lens information data L1, L2A, and L2B each include peripheral light reduction as information regarding the optical performance of the corresponding imaging lens.
  • the peripheral light reduction in the lens information data L1 indicates lower optical performance than the peripheral light reduction in the lens information data L2A and L2B.
  • that is, the peripheral light reduction in the lens information data L1 is larger than the peripheral light reduction in the lens information data L2A and L2B.
  • the first imaging device 12 is a main imaging device, and the second imaging devices 14A and 14B are sub imaging devices.
  • the second imaging devices 14A and 14B are collectively referred to as “second imaging device 14”
  • the lens information data L2A and L2B are collectively referred to as “lens information data L2”.
  • the image correction device 16 includes a CPU (Central Processing Unit) 20, a memory 21 as a temporary storage area, and a nonvolatile storage unit 22.
  • the image correction device 16 includes a display unit 23 such as a liquid crystal display and an input unit 24 such as a keyboard and a mouse.
  • the image correction device 16 includes an external I / F (InterFace) 25 to which the first imaging device 12 and the second imaging device 14A are connected, and a communication I / F 26 to which the second imaging device 14B is connected.
  • the CPU 20, the memory 21, the storage unit 22, the display unit 23, the input unit 24, the external I / F 25, and the communication I / F 26 are connected to the bus 27.
  • Examples of the image correction device 16 include a personal computer and a server computer.
  • the storage unit 22 is realized by an HDD (Hard Disk Drive), an SSD (Solid State Drive), a flash memory, or the like.
  • An image correction program 30 is stored in the storage unit 22 as a storage medium.
  • the CPU 20 reads the image correction program 30 from the storage unit 22 and expands it in the memory 21, and executes the expanded image correction program 30.
  • as described above, the optical performance of the imaging lens of the first imaging device 12 is lower than that of the imaging lenses of the second imaging devices 14A and 14B. Accordingly, as shown in FIG. 3 as an example, peripheral dimming is more noticeable in the moving image D1 obtained by imaging by the first imaging device 12 than in the moving images D2 and D3 obtained by imaging by the second imaging devices 14A and 14B. When these moving images are combined into one moving image, the result lacks a sense of unity.
  • the image correction device 16 therefore has a function of correcting the moving image data obtained by imaging by the second imaging device 14 to match the image quality of the moving image data obtained by imaging by the first imaging device 12.
  • the image correction device 16 includes an acquisition unit 40, a determination unit 42, and a correction unit 44.
  • the CPU 20 executes the image correction program 30, the CPU 20 functions as an acquisition unit 40, a determination unit 42, and a correction unit 44.
  • the acquisition unit 40 acquires moving image data obtained by imaging by the first imaging device 12 from the first imaging device 12. In addition, the acquisition unit 40 acquires the moving image data obtained by imaging by the second imaging device 14A from the second imaging device 14A. In addition, the acquisition unit 40 acquires moving image data obtained by imaging by the second imaging device 14B from the second imaging device 14B.
  • the acquisition unit 40 acquires lens information data L1 from the first imaging device 12.
  • the acquisition unit 40 acquires lens information data L2A from the second imaging device 14A.
  • the acquisition unit 40 acquires lens information data L2B from the second imaging device 14B.
  • the determination unit 42 determines whether the moving image data acquired by the acquisition unit 40 is moving image data obtained by imaging by the first imaging device 12 or moving image data obtained by imaging by the second imaging device 14A or 14B.
  • when the determination unit 42 determines that the moving image data is moving image data obtained by imaging by the second imaging device 14A, the correction unit 44 performs the following correction. That is, in this case, the correction unit 44 uses the lens information data L1 and the lens information data L2A to correct the moving image data obtained by imaging by the second imaging device 14A so as to match the image quality of the moving image data obtained by imaging by the first imaging device 12.
  • the correction unit 44 performs the above correction by converting each pixel value of each frame of moving image data obtained by imaging by the second imaging device 14A according to the following equation (1).
  • (x, y) represents the coordinates of the pixel in the image
  • L ′ (x, y) represents the pixel value of the pixel at the coordinate (x, y) after correction by the correction unit 44.
  • L (x, y) represents a pixel value of a pixel at coordinates (x, y) of moving image data (that is, moving image data before correction) obtained by imaging by the second imaging device 14A.
  • SHDmain (x, y) in the expression (1) represents a light reduction amount of the pixel at the coordinates (x, y) in the imaging lens of the first imaging device 12 included in the lens information data L1.
  • SHDsub (x, y) in the expression (1) represents the light reduction amount of the pixel at the coordinates (x, y) in the imaging lens of the second imaging device 14A included in the lens information data L2A.
  • L ′ (x, y) L (x, y) ⁇ SHDmain (x, y) ⁇ SHDsub (x, y) ... (1)
  • similarly, when the moving image data is moving image data obtained by imaging by the second imaging device 14B, the correction unit 44 uses the lens information data L1 and the lens information data L2B to correct it so as to match the image quality of the moving image data obtained by imaging by the first imaging device 12.
  • the correction unit 44 performs the above correction on RAW data; when the correction is to be performed on a non-linear signal that has undergone gamma conversion, it first applies inverse gamma conversion to return the signal to a linear signal.
  • in the latter case, the correction unit 44 applies gamma conversion after the correction to return the signal to a non-linear signal.
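The handling of non-linear signals described above can be sketched as follows. The pure power-law gamma of 2.2 is an assumption for illustration (real cameras use more complex tone curves), and scalar shading values stand in for the per-pixel maps.

```python
import numpy as np

GAMMA = 2.2  # assumed pure power-law encoding; a stand-in for the camera's tone curve

def correct_nonlinear(frame_gamma, shd_main, shd_sub):
    """Inverse gamma -> equation (1) on the linear signal -> gamma back to non-linear."""
    linear = frame_gamma ** GAMMA          # return the signal to linear light
    linear = linear * shd_main / shd_sub   # the correction is valid only in linear light
    return linear ** (1.0 / GAMMA)         # re-apply gamma conversion

frame = np.linspace(0.1, 1.0, 5)           # gamma-encoded sample values
out = correct_nonlinear(frame, shd_main=0.8, shd_sub=1.0)
print(np.round(out, 4))
```

Because the shading factors here are constants, this reduces to scaling the encoded values by `0.8 ** (1 / 2.2)`; with per-pixel maps the same linearize-correct-reencode order applies.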
  • the correction unit 44 stores the moving image data obtained by the above correction in the storage unit 22.
  • a moving image with reduced image quality difference is generated as shown in FIG. 5 as an example.
  • the image correction process shown in FIG. 6 is executed.
  • the image correction process illustrated in FIG. 6 is executed when, for example, moving image data obtained by imaging by the first imaging device 12 and the second imaging devices 14A and 14B is input to the image correction device 16.
  • next, the determination unit 42 determines whether the moving image data acquired by the processing in step S10 is moving image data obtained by imaging by the first imaging device 12 or moving image data obtained by imaging by either of the second imaging devices 14A and 14B.
  • if the determination unit 42 determines that the moving image data acquired by the processing in step S10 is moving image data obtained by imaging by the first imaging device 12, the determination in step S12 is affirmative and the process proceeds to step S14. If the determination unit 42 determines that the moving image data is moving image data obtained by imaging by the second imaging device 14A or 14B, the determination in step S12 is negative and the process proceeds to step S16.
  • in step S14, the acquisition unit 40 stores the moving image data acquired by the processing in step S10 in the storage unit 22.
  • the image correction process ends.
  • in step S16, the acquisition unit 40 acquires the lens information data L1 from the first imaging device 12.
  • in step S18, the acquisition unit 40 acquires the lens information data L2 from the second imaging device 14 corresponding to the moving image data acquired by the processing in step S10.
  • in step S20, the correction unit 44 performs the correction according to the above equation (1) on the moving image data acquired by the processing in step S10, using the lens information data L1 acquired in step S16 and the lens information data L2 acquired in step S18.
  • in step S22, the correction unit 44 stores the moving image data corrected by the processing in step S20 in the storage unit 22.
  • the image correction process ends.
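The overall flow of FIG. 6 (determine the source device, then either store as-is or correct per equation (1) before storing) can be sketched as follows. The device identifiers and the dictionary of shading maps are hypothetical stand-ins for the lens information data L1 and L2.

```python
import numpy as np

def image_correction_process(clip, device_id, lens_info):
    """lens_info maps a device id to its shading map (a stand-in for L1/L2)."""
    if device_id == "main":              # step S12: data from the first (main) imaging device
        return clip                      # step S14: stored without correction
    shd_main = lens_info["main"]         # step S16: lens information data L1
    shd_sub = lens_info[device_id]       # step S18: lens information data L2
    return clip * shd_main / shd_sub     # step S20: correction by equation (1)

lens_info = {"main": np.array([0.6, 1.0]), "subA": np.array([0.9, 1.0])}
clip = np.array([0.9, 0.5])
print(image_correction_process(clip, "main", lens_info))   # stored as-is
print(image_correction_process(clip, "subA", lens_info))   # matched to the main lens
```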
  • as described above, according to the present embodiment, the moving image data obtained by imaging by the second imaging device 14 is corrected, using the lens information data L1 and the lens information data L2, to match the image quality of the moving image data obtained by imaging by the first imaging device 12. Accordingly, it is possible to reduce the difference in image quality between moving images obtained by imaging by each of a plurality of imaging devices.
  • although the above embodiment describes a case where the lens information data L1, L2A, and L2B include the peripheral light reduction amount as the information on the optical performance of the imaging lens, the disclosure is not limited to this.
  • for example, the lens information data L1, L2A, and L2B may include the resolution of the imaging lens as information on the optical performance of the imaging lens.
  • in this case, the correction unit 44 corrects the moving image data obtained by imaging by the second imaging device 14 according to the following equation (2) instead of equation (1).
  • Rmain (x, y) in equation (2) represents the resolution of the pixel at the coordinates (x, y) in the imaging lens of the first imaging device 12 included in the lens information data L1.
  • Rsub (x, y) in the expression (2) represents the resolution of the pixel at the coordinates (x, y) in the imaging lens of the second imaging device 14 included in the lens information data L2.
  • L ′ (x, y) L (x, y) ⁇ Rmain (x, y) ⁇ Rsub (x, y) (2)
  • the lens information data L1, L2A, and L2B may include a color fog amount as information regarding the optical performance of the imaging lens.
  • in this case, the correction unit 44 corrects the moving image data obtained by imaging by the second imaging device 14 according to the following equation (3) instead of equation (1).
  • CFmain (x, y) in the expression (3) represents the color fog amount of the pixel at the coordinates (x, y) in the imaging lens of the first imaging device 12 included in the lens information data L1.
  • CFsub (x, y) in the expression (3) represents the color fog amount of the pixel at the coordinates (x, y) in the imaging lens of the second imaging device 14 included in the lens information data L2.
  • L ′ (x, y) L (x, y) ⁇ CFmain (x, y) ⁇ CFsub (x, y) (3)
  • the lens information data L1, L2A, and L2B may include a chromatic aberration amount as information related to the optical performance of the imaging lens.
  • in this case, the correction unit 44 corrects the moving image data obtained by imaging by the second imaging device 14 according to the following equations (4-1) and (4-2) instead of equation (1).
  • CAmain (x, y) in equation (4-1) represents the amount of chromatic aberration of magnification of the pixel at the coordinates (x, y) in the imaging lens of the first imaging device 12 included in the lens information data L1.
  • CAsub (x, y) in the equation (4-1) represents the amount of chromatic aberration of magnification of the pixel at the coordinates (x, y) in the imaging lens of the second imaging device 14 included in the lens information data L2.
  • CAmain (x, y, l, d) in the equation (4-2) is the axial chromatic aberration of the pixel at the coordinates (x, y) in the imaging lens of the first imaging device 12 included in the lens information data L1. Represents an amount.
  • CAsub (x, y, l, d) in the equation (4-2) is the axial chromatic aberration of the pixel at the coordinates (x, y) in the imaging lens of the second imaging device 14 included in the lens information data L2. Represents an amount.
  • l represents the distance from the second imaging device 14 to the focused surface
  • d represents the distance from the focused surface to the subject.
  • L ′ (x, y) L (x, y) ⁇ CAmain (x, y) ⁇ CAsub (x, y) (4-1)
  • L ′ (x, y) L (x, y) ⁇ CAmain (x, y, l, d) ⁇ CAsub (x, y, l, d) (4-2)
  • the correction unit 44 may also correct the moving image data obtained by imaging by the second imaging device 14 using two or more of the peripheral light reduction, the imaging lens resolution, the chromatic aberration amount, and the color fog amount.
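A combined correction using two of the metrics can be sketched by chaining the per-pixel ratios; the shading and color-fog maps below are illustrative values, not data from real lenses.

```python
import numpy as np

def combined_correction(frame, shd_main, shd_sub, cf_main, cf_sub):
    """Chain equation (1) (peripheral light reduction) and equation (3) (color fog)."""
    return frame * (shd_main / shd_sub) * (cf_main / cf_sub)

frame = np.array([[0.5, 0.5]])
out = combined_correction(
    frame,
    shd_main=np.array([[0.8, 0.6]]), shd_sub=np.array([[1.0, 0.9]]),  # light reduction maps
    cf_main=np.array([[1.1, 1.1]]), cf_sub=np.array([[1.0, 1.0]]),    # color fog maps
)
print(np.round(out, 3))
```

Each factor independently removes the sub lens's characteristic and re-applies the main lens's, so the order of the factors does not matter.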
  • trimming processing may be performed on moving image data obtained by imaging by the second imaging device 14.
  • An example of a flowchart of the image correction process in this embodiment is shown in FIG. 7.
  • steps that perform the same processing as in FIG. 6 are given the same step numbers.
  • in step S30 in FIG. 7, the correction unit 44 determines whether or not to perform trimming processing on the moving image data acquired by the processing in step S10. If this determination is negative, the process proceeds to step S20; if affirmative, the process proceeds to step S32.
  • in step S32, the correction unit 44 performs the trimming processing on the moving image data acquired by the processing in step S10.
  • in step S34, the correction unit 44 performs enlargement processing on the moving image data obtained by the trimming processing so that it has the same size as the original moving image. In this case, in the next step S20, the correction is performed on the moving image data that has undergone the trimming processing and the enlargement processing.
  • in the present embodiment, the correction by the correction unit 44 is performed on the moving image data after the trimming processing. Therefore, as shown on the right side of FIG. 8, the difference in image quality between moving images obtained by imaging by each of the plurality of imaging devices can be further reduced compared with the case where the trimming processing is performed after the correction by the correction unit 44.
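The trim-enlarge-correct order of FIG. 7 can be sketched as follows; nearest-neighbor enlargement and the scalar shading factors are simplifying assumptions for illustration.

```python
import numpy as np

def trim(frame, top, left, h, w):
    """Step S32: trimming processing (crop a region of interest)."""
    return frame[top:top + h, left:left + w]

def enlarge_nearest(frame, out_h, out_w):
    """Step S34: enlarge back to the original size (nearest-neighbor for simplicity)."""
    in_h, in_w = frame.shape
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return frame[np.ix_(rows, cols)]

frame = np.arange(16.0).reshape(4, 4)
trimmed = trim(frame, top=1, left=1, h=2, w=2)
restored = enlarge_nearest(trimmed, 4, 4)   # same size as the original frame
corrected = restored * 0.8 / 1.0            # step S20: equation (1) with scalar stand-ins
print(restored.shape, corrected.shape)
```

Correcting after the enlargement means the per-pixel lens maps line up with the full-frame coordinates, which is why the document applies equation (1) last.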
  • the correction by the correction unit 44 may be performed on the image of each frame in real time during moving image capturing. Further, in this case, a mode in which images of frames that have been corrected by the correction unit 44 are sequentially displayed on the display unit 23 is exemplified.
  • the CPU 20 functions as a display control unit that performs control to display the moving image indicated by the moving image data corrected by the correction unit 44 on the display unit 23.
  • all of the imaging devices included in the imaging system 10 may be provided with comparatively new imaging lenses, such as the latest lenses.
  • in this case, a mode in which the correction unit 44 corrects the moving image data obtained by imaging by all of the imaging devices is exemplified.
  • any of the second imaging devices 14A and 14B may execute the image correction processing executed by the image correction device 16 in the above embodiment.
  • in the above embodiment, the various processes that the CPU executes by running software (programs) may instead be executed by various processors other than the CPU.
  • examples of such processors include a PLD (Programmable Logic Device), such as an FPGA (Field-Programmable Gate Array), whose circuit configuration can be changed after manufacture, and a dedicated electric circuit such as an ASIC (Application Specific Integrated Circuit), which is a processor having a circuit configuration designed exclusively to execute specific processing.
  • the various processes may be executed by one of these various processors, or by a combination of two or more processors of the same or different types (for example, a plurality of FPGAs, or a combination of a CPU and an FPGA).
  • the hardware structure of these various processors is more specifically an electric circuit in which circuit elements such as semiconductor elements are combined.
  • although the image correction program 30 is stored (installed) in advance in the storage unit 22 in the above embodiment, the disclosure is not limited to this.
  • the image correction program 30 may be provided in a form recorded on a recording medium such as a CD-ROM (Compact Disc Read Only Memory), a DVD-ROM (Digital Versatile Disc Read Only Memory), or a USB (Universal Serial Bus) memory.
  • the image correction program 30 may be downloaded from an external device via a network.
  • Reference numerals: 10 imaging system; 12 first imaging device; 14A, 14B second imaging device; 16 image correction device; 20 CPU; 21 memory; 22 storage unit; 23 display unit; 24 input unit; 25 external I/F; 26 communication I/F; 27 bus; 30 image correction program; 40 acquisition unit; 42 determination unit; 44 correction unit; D1, D2, D3 moving image; L1, L2A, L2B lens information data; S subject

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The purpose of the present invention is to obtain an image correction device, an image correction method, and an image correction program with which differences can be reduced in image quality among moving pictures respectively obtained by imaging by multiple imaging devices. The image correction device 16 is provided with: an acquisition unit 40 for acquiring moving picture data obtained by imaging by each of multiple imaging devices; a determination unit 42 for determining whether or not moving picture data is moving picture data obtained by imaging by a second imaging device; and a correction unit 44 for, when the moving picture data is determined as being the moving picture data obtained by imaging by the second imaging device, correcting the moving picture data obtained by imaging by the second imaging device so as to match the image quality of moving picture data obtained by imaging by a first imaging device by using lens information data pertaining to optical performances of respective imaging lenses of the first imaging device and the second imaging device.

Description

Image correction apparatus, image correction method, and image correction program

The present disclosure relates to an image correction apparatus, an image correction method, and an image correction program.

Conventionally, a technique is disclosed in which, among a plurality of imaging devices that image a common subject in synchronization, the imaging device serving as the master unit for processing that links the imaging conditions of the devices is selected based on the state of each imaging device with respect to imaging of the common subject (see Patent Document 1). In this technique, the imaging conditions of the plurality of imaging devices are determined based on master-unit-related information acquired when the selected master imaging device captures the common subject.

In addition, an auxiliary imaging device is disclosed that acquires the imaging timing of a main imaging device, which images a subject in response to an operator's operation to generate a captured image, and that images the subject based on the acquired imaging timing to generate its own captured image (see Patent Document 2).

Further, a technique is disclosed for determining a shooting condition, specified by shooting parameters, based on a target noise amount (see Patent Document 3).

JP 2016-39620 A; JP 2014-90373 A; JP 2015-97382 A

It is common practice to image a common subject with each of a plurality of imaging devices and to combine the moving images obtained by the devices into a single moving image. The plurality of imaging devices may differ in the optical performance of their imaging lenses, for example because the devices were manufactured in different eras. In this case, there is a problem that the difference in image quality between the moving images obtained by the devices becomes relatively large. As a result, the single moving image obtained by combining the moving images lacks a sense of unity.

Although the techniques described in Patent Documents 1 to 3 consider determining imaging conditions at the time of imaging, there are cases where no imaging conditions can be determined that reduce the difference in image quality across imaging devices whose imaging lenses differ in optical performance. In such cases, the above problem cannot be solved.

The present disclosure has been made in view of the above circumstances, and an object thereof is to provide an image correction apparatus, an image correction method, and an image correction program that can reduce the difference in image quality between moving images obtained by imaging by each of a plurality of imaging devices.

To achieve the above object, an image correction device of the present disclosure includes: an acquisition unit that acquires moving image data obtained by imaging by each of a plurality of imaging devices; a determination unit that determines whether the moving image data is moving image data obtained by imaging by a first imaging device or moving image data obtained by imaging by a second imaging device; and a correction unit that, when the moving image data is determined to be moving image data obtained by imaging by the second imaging device, corrects the moving image data obtained by imaging by the second imaging device to match the image quality of the moving image data obtained by imaging by the first imaging device, using lens information data relating to the optical performance of the imaging lens of the first imaging device and lens information data relating to the optical performance of the imaging lens of the second imaging device.

In the image correction device of the present disclosure, the lens information data may include at least one of a peripheral light falloff amount, a resolution of the imaging lens, a chromatic aberration amount, and a color cast amount.

Further, in the image correction device of the present disclosure, the lens information data of the first imaging device may indicate lower optical performance than that of the second imaging device in at least one of the peripheral light falloff amount, the resolution of the imaging lens, the chromatic aberration amount, and the color cast amount.

Further, in the image correction device of the present disclosure, the correction by the correction unit of the moving image data obtained by imaging by the second imaging device may be performed while the moving image is being captured.

Further, the image correction device of the present disclosure may further include a display control unit that performs control to display, on a display unit, the moving image indicated by the moving image data corrected by the correction unit while the moving image is being captured.

Further, in the image correction device of the present disclosure, when trimming processing is performed on the moving image data obtained by imaging by the second imaging device, the correction unit may perform the correction on the moving image data after the trimming processing.

Meanwhile, to achieve the above object, an image correction method of the present disclosure causes a computer to execute processing of: acquiring moving image data obtained by imaging by each of a plurality of imaging devices; determining whether the moving image data is moving image data obtained by imaging by a first imaging device or moving image data obtained by imaging by a second imaging device; and, when the moving image data is determined to be moving image data obtained by imaging by the second imaging device, correcting the moving image data obtained by imaging by the second imaging device to match the image quality of the moving image data obtained by imaging by the first imaging device, using lens information data relating to the optical performance of the imaging lens of the first imaging device and lens information data relating to the optical performance of the imaging lens of the second imaging device.

Further, to achieve the above object, an image correction program of the present disclosure causes a computer to execute processing of: acquiring moving image data obtained by imaging by each of a plurality of imaging devices; determining whether the moving image data is moving image data obtained by imaging by a first imaging device or moving image data obtained by imaging by a second imaging device; and, when the moving image data is determined to be moving image data obtained by imaging by the second imaging device, correcting the moving image data obtained by imaging by the second imaging device to match the image quality of the moving image data obtained by imaging by the first imaging device, using lens information data relating to the optical performance of the imaging lens of the first imaging device and lens information data relating to the optical performance of the imaging lens of the second imaging device.

According to the present disclosure, the difference in image quality between moving images obtained by imaging by each of a plurality of imaging devices can be reduced.

FIG. 1 is a plan view showing an example of the configuration of an imaging system according to an embodiment.
FIG. 2 is a block diagram showing an example of the hardware configuration of an image correction device according to the embodiment.
FIG. 3 is a diagram showing an example of moving images obtained by imaging by each of a plurality of imaging devices according to the embodiment.
FIG. 4 is a block diagram showing an example of the functional configuration of the image correction device according to the embodiment.
FIG. 5 is a diagram showing an example of moving images after correction by the image correction device according to the embodiment.
FIG. 6 is a flowchart showing an example of image correction processing according to the embodiment.
FIG. 7 is a flowchart showing an example of image correction processing according to a modification.
FIG. 8 is a diagram for explaining trimming processing according to the modification.

Hereinafter, exemplary embodiments for carrying out the technology of the present disclosure will be described in detail with reference to the drawings.

First, the configuration of an imaging system 10 according to the present embodiment will be described with reference to FIG. 1. As shown in FIG. 1, the imaging system 10 includes a first imaging device 12 and second imaging devices 14A and 14B that image a subject S, and an image correction device 16. The first imaging device 12 and the second imaging device 14A are connected to the image correction device 16 by wire, and the second imaging device 14B is connected to the image correction device 16 wirelessly. The moving image data obtained by imaging by each of the first imaging device 12 and the second imaging devices 14A and 14B is output to the image correction device 16.

In the present embodiment, the imaging lens of the first imaging device 12 is older than the imaging lenses of the second imaging devices 14A and 14B, and is also referred to as an old lens. Specifically, the imaging lens of the first imaging device 12 is a lens manufactured in or before 1997. In contrast, the imaging lenses of the second imaging devices 14A and 14B are relatively new lenses.

The storage unit of the first imaging device 12 stores lens information data L1 relating to the optical performance of the imaging lens of the first imaging device 12. Likewise, the storage units of the second imaging devices 14A and 14B store lens information data L2A and L2B relating to the optical performance of the imaging lenses of the second imaging devices 14A and 14B. The lens information data L1, L2A, and L2B each include a peripheral light falloff amount as information relating to the optical performance of the corresponding imaging lens. The peripheral light falloff amount in the lens information data L1 indicates lower optical performance than that in the lens information data L2A and L2B; specifically, the peripheral light falloff amount in the lens information data L1 is larger than that in the lens information data L2A and L2B.
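As an illustrative sketch only (the names `LensInfo` and `shading` are assumptions, not part of the disclosure), the lens information data can be pictured as a small record holding per-pixel falloff factors for one imaging lens:

```python
from dataclasses import dataclass

@dataclass
class LensInfo:
    """Lens information data: per-pixel falloff factors of one imaging
    lens, where 1.0 means no falloff and smaller values mean darker."""
    lens_id: str
    shading: list  # shading[y][x], each value in 0..1

# L1 models the old main lens (strong corner falloff); L2A a newer lens.
L1 = LensInfo("main-old-lens", [[1.0, 0.6], [0.6, 0.4]])
L2A = LensInfo("sub-lens-A", [[1.0, 0.95], [0.95, 0.9]])

# The old lens shows lower optical performance: larger falloff at every pixel.
worse_everywhere = all(
    m <= s
    for m_row, s_row in zip(L1.shading, L2A.shading)
    for m, s in zip(m_row, s_row)
)
```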

In the present embodiment, the first imaging device 12 is the main imaging device, and the second imaging devices 14A and 14B are sub imaging devices. Hereinafter, the second imaging devices 14A and 14B are collectively referred to as the "second imaging device 14", and the lens information data L2A and L2B are collectively referred to as the "lens information data L2".

Next, the hardware configuration of the image correction device 16 according to the present embodiment will be described with reference to FIG. 2. As shown in FIG. 2, the image correction device 16 includes a CPU (Central Processing Unit) 20, a memory 21 serving as a temporary storage area, and a nonvolatile storage unit 22. The image correction device 16 also includes a display unit 23 such as a liquid crystal display, and an input unit 24 such as a keyboard and a mouse. The image correction device 16 further includes an external I/F (InterFace) 25 to which the first imaging device 12 and the second imaging device 14A are connected, and a communication I/F 26 to which the second imaging device 14B is connected. The CPU 20, the memory 21, the storage unit 22, the display unit 23, the input unit 24, the external I/F 25, and the communication I/F 26 are connected to a bus 27. Examples of the image correction device 16 include a personal computer and a server computer.

The storage unit 22 is realized by an HDD (Hard Disk Drive), an SSD (Solid State Drive), a flash memory, or the like. An image correction program 30 is stored in the storage unit 22 serving as a storage medium. The CPU 20 reads the image correction program 30 from the storage unit 22, loads it into the memory 21, and executes the loaded image correction program 30.

As described above, the optical performance of the imaging lens of the first imaging device 12 is lower than that of the imaging lenses of the second imaging devices 14A and 14B. Accordingly, as shown in FIG. 3 as an example, peripheral light falloff is more noticeable in a moving image D1 obtained by imaging by the first imaging device 12 than in moving images D2 and D3 obtained by imaging by the second imaging devices 14A and 14B. When these moving images are combined into a single moving image, the resulting moving image lacks a sense of visual unity.

On the other hand, in order to obtain a moving image with a distinctive character, a user may wish to generate moving images that match the image quality of the moving image obtained by imaging by the first imaging device 12. Therefore, the image correction device 16 according to the present embodiment has a function of correcting the moving image data obtained by imaging by the second imaging device 14 to match the image quality of the moving image data obtained by imaging by the first imaging device 12.

Next, the functional configuration of the image correction device 16 according to the present embodiment will be described with reference to FIG. 4. As shown in FIG. 4, the image correction device 16 includes an acquisition unit 40, a determination unit 42, and a correction unit 44. By executing the image correction program 30, the CPU 20 functions as the acquisition unit 40, the determination unit 42, and the correction unit 44.

The acquisition unit 40 acquires, from the first imaging device 12, the moving image data obtained by imaging by the first imaging device 12. The acquisition unit 40 likewise acquires, from the second imaging device 14A, the moving image data obtained by imaging by the second imaging device 14A, and acquires, from the second imaging device 14B, the moving image data obtained by imaging by the second imaging device 14B.

The acquisition unit 40 also acquires the lens information data L1 from the first imaging device 12, the lens information data L2A from the second imaging device 14A, and the lens information data L2B from the second imaging device 14B.

The determination unit 42 determines whether the moving image data acquired by the acquisition unit 40 is moving image data obtained by imaging by the first imaging device 12 or moving image data obtained by imaging by either of the second imaging devices 14A and 14B.

When the determination unit 42 determines that the moving image data is moving image data obtained by imaging by the second imaging device 14A, the correction unit 44 performs the following correction using the lens information data L1 and the lens information data L2A. That is, in this case, the correction unit 44 uses the lens information data L1 and the lens information data L2A to correct the moving image data obtained by imaging by the second imaging device 14A so as to match the image quality of the moving image data obtained by imaging by the first imaging device 12.

Specifically, the correction unit 44 performs the above correction by converting each pixel value of each frame of the moving image data obtained by imaging by the second imaging device 14A according to the following equation (1). In equation (1), (x, y) represents the coordinates of a pixel in the image, and L′(x, y) represents the pixel value at coordinates (x, y) after correction by the correction unit 44. L(x, y) represents the pixel value at coordinates (x, y) of the moving image data obtained by imaging by the second imaging device 14A (that is, the moving image data before correction). SHDmain(x, y) represents the light falloff amount at coordinates (x, y) of the imaging lens of the first imaging device 12, included in the lens information data L1, and SHDsub(x, y) represents the light falloff amount at coordinates (x, y) of the imaging lens of the second imaging device 14A, included in the lens information data L2A.

  L′(x, y) = L(x, y) × SHDmain(x, y) ÷ SHDsub(x, y) … (1)
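As a minimal sketch of equation (1) (an illustration only, not the disclosed implementation; the function and array names are assumptions), the per-pixel shading match can be written as:

```python
def match_shading(frame, shd_main, shd_sub):
    """Apply equation (1) pixel by pixel:
    L'(x, y) = L(x, y) * SHDmain(x, y) / SHDsub(x, y).

    `frame` holds the sub camera's linear pixel values; `shd_main` and
    `shd_sub` hold the per-pixel falloff factors of the main and sub
    lenses (1.0 = no falloff). Dividing by the sub lens's falloff removes
    its own vignetting, and multiplying by the main lens's falloff
    re-imposes the main lens's vignetting on the sub footage.
    """
    return [
        [l * m / s for l, m, s in zip(row, m_row, s_row)]
        for row, m_row, s_row in zip(frame, shd_main, shd_sub)
    ]

# Toy 2x2 frame: the main lens darkens the bottom-right pixel to 50%,
# the sub lens only to 90%, so that pixel is darkened further.
frame = [[100.0, 100.0], [100.0, 100.0]]
shd_main = [[1.0, 1.0], [1.0, 0.5]]
shd_sub = [[1.0, 1.0], [1.0, 0.9]]
corrected = match_shading(frame, shd_main, shd_sub)
```

Pixels where both lenses have the same falloff are left unchanged; the correction only transfers the difference between the two falloff profiles.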

Similarly, for the moving image data obtained by imaging by the second imaging device 14B, the correction unit 44 performs the correction to match the image quality of the moving image data obtained by imaging by the first imaging device 12, using the lens information data L1 and the lens information data L2B. In the present embodiment, the correction unit 44 performs the above correction on RAW data; alternatively, when the correction is to be performed on a nonlinear signal after gamma conversion, the correction unit 44 first applies inverse gamma conversion to return the signal to a linear form. When the correction unit 44 has performed the correction after returning the signal to a linear form, it applies gamma conversion after the correction to return the signal to a nonlinear form.
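The linearize-correct-reencode order described above can be sketched as follows (an illustration only, assuming a simple power-law gamma of 2.2; actual cameras may use other transfer curves):

```python
GAMMA = 2.2

def correct_nonlinear(value, shd_main, shd_sub):
    """Correct one gamma-encoded pixel (normalized to 0..1):
    inverse gamma -> equation (1) in the linear domain -> gamma re-encode."""
    linear = value ** GAMMA            # inverse gamma conversion
    linear *= shd_main / shd_sub       # equation (1) on the linear signal
    return linear ** (1.0 / GAMMA)     # gamma conversion back to nonlinear

# With identical falloff factors the pixel must come back unchanged.
unchanged = correct_nonlinear(0.5, 0.8, 0.8)

# With a darker main lens, the corrected value is the gamma re-encoding
# of the halved linear value.
darkened = correct_nonlinear(1.0, 0.5, 1.0)
```

Applying the ratio of equation (1) directly to a gamma-encoded signal would scale it incorrectly, which is why the round trip through the linear domain matters.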

The correction unit 44 then stores the moving image data obtained by the above correction in the storage unit 22. Through the correction by the correction unit 44 described above, moving images with a reduced difference in image quality are generated, as shown in FIG. 5 as an example.

Next, the operation of the image correction device 16 according to the present embodiment will be described with reference to FIG. 6. The image correction processing shown in FIG. 6 is executed by the CPU 20 executing the image correction program 30, and is executed, for example, when moving image data obtained by imaging by the first imaging device 12 or either of the second imaging devices 14A and 14B is input to the image correction device 16.

In step S10 of FIG. 6, the acquisition unit 40 acquires the moving image data input to the image correction device 16. In step S12, the determination unit 42 determines whether the moving image data acquired in step S10 is moving image data obtained by imaging by the first imaging device 12 or moving image data obtained by imaging by either of the second imaging devices 14A and 14B. When the determination unit 42 determines that the moving image data acquired in step S10 is moving image data obtained by imaging by the first imaging device 12, the determination in step S12 is affirmative and the processing proceeds to step S14. When the determination unit 42 determines that the moving image data acquired in step S10 is moving image data obtained by imaging by either of the second imaging devices 14A and 14B, the determination in step S12 is negative and the processing proceeds to step S16.

In step S14, the acquisition unit 40 stores the moving image data acquired in step S10 in the storage unit 22. When the processing of step S14 ends, the image correction processing ends.

Meanwhile, in step S16, the acquisition unit 40 acquires the lens information data L1 from the first imaging device 12. In step S18, the acquisition unit 40 acquires the lens information data L2 from the second imaging device 14 corresponding to the moving image data acquired in step S10.

In step S20, as described above, the correction unit 44 corrects the moving image data acquired in step S10 according to equation (1), using the lens information data L1 acquired in step S16 and the lens information data L2 acquired in step S18. In step S22, the correction unit 44 stores the moving image data corrected in step S20 in the storage unit 22. When the processing of step S22 ends, the image correction processing ends.

As described above, according to the present embodiment, the moving image data obtained by imaging by the second imaging device 14 is corrected to match the image quality of the moving image data obtained by imaging by the first imaging device 12, using the lens information data L1 and the lens information data L2. Accordingly, the difference in image quality between moving images obtained by imaging by each of a plurality of imaging devices can be reduced.

In the above embodiment, the case where the lens information data L1, L2A, and L2B include the peripheral light falloff amount as the information relating to the optical performance of the imaging lens has been described, but the present disclosure is not limited to this. For example, the lens information data L1, L2A, and L2B may include the resolution of the imaging lens as the information relating to the optical performance of the imaging lens. In this case, the correction unit 44 may correct the moving image data obtained by imaging by the second imaging device 14 according to the following equation (2) instead of equation (1). In equation (2), Rmain(x, y) represents the resolution at coordinates (x, y) of the imaging lens of the first imaging device 12, included in the lens information data L1, and Rsub(x, y) represents the resolution at coordinates (x, y) of the imaging lens of the second imaging device 14, included in the lens information data L2.

  L′(x, y) = L(x, y) × Rmain(x, y) ÷ Rsub(x, y) … (2)

Further, for example, the lens information data L1, L2A, and L2B may include a color cast amount as the information relating to the optical performance of the imaging lens. In this case, the correction unit 44 may correct the moving image data obtained by imaging by the second imaging device 14 according to the following equation (3) instead of equation (1). In equation (3), CFmain(x, y) represents the color cast amount at coordinates (x, y) of the imaging lens of the first imaging device 12, included in the lens information data L1, and CFsub(x, y) represents the color cast amount at coordinates (x, y) of the imaging lens of the second imaging device 14, included in the lens information data L2.

  L′(x, y) = L(x, y) × CFmain(x, y) ÷ CFsub(x, y) … (3)

Further, for example, the lens information data L1, L2A, and L2B may include a chromatic aberration amount as the information relating to the optical performance of the imaging lens. In this case, the correction unit 44 may correct the moving image data obtained by imaging by the second imaging device 14 according to the following equations (4-1) and (4-2) instead of equation (1). In equation (4-1), CAmain(x, y) represents the chromatic aberration of magnification at coordinates (x, y) of the imaging lens of the first imaging device 12, included in the lens information data L1, and CAsub(x, y) represents the chromatic aberration of magnification at coordinates (x, y) of the imaging lens of the second imaging device 14, included in the lens information data L2. In equation (4-2), CAmain(x, y, l, d) represents the axial chromatic aberration at coordinates (x, y) of the imaging lens of the first imaging device 12, included in the lens information data L1, and CAsub(x, y, l, d) represents the axial chromatic aberration at coordinates (x, y) of the imaging lens of the second imaging device 14, included in the lens information data L2. In equation (4-2), l represents the distance from the second imaging device 14 to the in-focus plane, and d represents the distance from the in-focus plane to the subject.

  L′(x, y) = L(x, y) × CAmain(x, y) ÷ CAsub(x, y) … (4-1)

  L′(x, y) = L(x, y) × CAmain(x, y, l, d) ÷ CAsub(x, y, l, d) … (4-2)

The correction unit 44 may also correct the moving image data obtained by imaging by the second imaging device 14 using two or more of the above-described peripheral light falloff amount, resolution of the imaging lens, chromatic aberration amount, and color cast amount.

In the above embodiment, trimming processing may be performed on the moving image data obtained by imaging by the second imaging device 14. FIG. 7 shows an example of a flowchart of the image correction processing in this modification. In FIG. 7, steps that execute the same processing as in FIG. 6 are given the same step numbers, and their description is omitted.

In step S30 of FIG. 7, the correction unit 44 determines whether to perform trimming processing on the moving image data obtained in step S10. When this determination is negative, the processing proceeds to step S20; when affirmative, the processing proceeds to step S32.

In step S32, the correction unit 44 performs trimming processing on the moving image data obtained in step S10. In step S34, the correction unit 44 performs enlargement processing on the moving image data obtained by the trimming processing so that it has the same size as the original moving image. In this case, in the subsequent step S20, the correction is performed on the moving image data that has undergone the trimming processing and the enlargement processing.
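The trim-enlarge-then-correct ordering of steps S32, S34, and S20 can be sketched as follows (an illustration on a one-dimensional row of pixels; the helper names are assumptions, and nearest-neighbor enlargement stands in for whatever interpolation an actual device would use):

```python
def trim(row, start, stop):
    """Step S32: crop the row to the region of interest."""
    return row[start:stop]

def enlarge(row, size):
    """Step S34: nearest-neighbor resize back to the original size."""
    return [row[i * len(row) // size] for i in range(size)]

def correct(row, shd_main, shd_sub):
    """Step S20: equation (1) applied to the trimmed-and-enlarged row."""
    return [l * m / s for l, m, s in zip(row, shd_main, shd_sub)]

original = [10.0, 20.0, 30.0, 40.0]
shd_main = [0.5, 1.0, 1.0, 0.5]   # main lens darkens the edges
shd_sub = [1.0, 1.0, 1.0, 1.0]    # sub lens assumed falloff-free

trimmed = trim(original, 1, 3)
enlarged = enlarge(trimmed, len(original))
result = correct(enlarged, shd_main, shd_sub)
```

Because the falloff profile is defined over the full frame, correcting after enlargement applies the main lens's edge darkening to the pixels that now occupy the edges, rather than to the pixels that occupied them before cropping.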

In this modification, as shown on the left side of FIG. 8 as an example, the correction by the correction unit 44 is performed on the moving image data after the trimming processing. Accordingly, compared with the case shown on the right side of FIG. 8, in which the trimming processing is performed after the correction by the correction unit 44, the difference in image quality between moving images obtained by imaging by each of the plurality of imaging devices can be further reduced.

Further, in the above embodiment, the correction by the correction unit 44 may be performed on the image of each frame in real time during moving image capturing. In this case, the images of the frames for which the correction by the correction unit 44 has been completed may be sequentially displayed on the display unit 23. In this example, the CPU 20 functions as a display control unit that performs control to display, on the display unit 23, the moving image represented by the moving image data corrected by the correction unit 44.
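The real-time correct-and-display flow above can be sketched as a per-frame loop. The capture source and display sink below are stand-ins (assumptions), not the device's actual camera or display APIs:

```python
from typing import Callable, Iterable
import numpy as np

def live_correct_and_display(frames: Iterable[np.ndarray],
                             correct: Callable[[np.ndarray], np.ndarray],
                             display: Callable[[np.ndarray], None]) -> int:
    """Apply the correction to each frame as it arrives and hand the
    corrected frame to the display, mirroring a display control unit
    that shows corrected frames while capture is still in progress."""
    shown = 0
    for frame in frames:
        display(correct(frame))
        shown += 1
    return shown

# Stand-ins for the capture stream and display hardware (assumptions).
frames = (np.zeros((4, 4), dtype=np.uint8) for _ in range(3))
shown_frames = []
n = live_correct_and_display(frames, lambda f: f + 1, shown_frames.append)
```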

Further, in the above embodiment, all the imaging devices included in the imaging system 10 may include relatively new imaging lenses, such as the latest lenses. In this case, the correction by the correction unit 44 may be performed on the moving image data obtained by imaging by all the imaging devices.
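The correction described in this disclosure uses lens information data of both devices to match the second device's output to the first device's image quality. A minimal grayscale sketch of one such adjustment, peripheral light reduction (vignetting) matching, is shown below. The quadratic falloff model, the strength values, and the function names are illustrative assumptions, not the disclosed method:

```python
import numpy as np

def radial_falloff(shape: tuple[int, int], strength: float) -> np.ndarray:
    """Hypothetical lens-information model: relative illumination that
    drops by `strength` at the image corners (1.0 at the center)."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r = np.hypot(y - cy, x - cx) / np.hypot(cy, cx)
    return 1.0 - strength * r ** 2

def match_vignetting(frame: np.ndarray, strength_1st: float,
                     strength_2nd: float) -> np.ndarray:
    """Rescale a grayscale second-device frame so its peripheral light
    reduction matches the falloff of the first device's lens."""
    v1 = radial_falloff(frame.shape[:2], strength_1st)
    v2 = radial_falloff(frame.shape[:2], strength_2nd)
    out = frame.astype(np.float32) * (v1 / v2)
    return np.clip(out, 0, 255).astype(np.uint8)

flat = np.full((5, 5), 200, dtype=np.uint8)
# The first device's older lens darkens corners more (0.3 vs 0.1), so the
# corrected second-device frame is darkened toward its edges to match.
matched = match_vignetting(flat, strength_1st=0.3, strength_2nd=0.1)
```

The same pattern extends to the other items in the lens information data (resolution, chromatic aberration, color cast): compute the difference between the two lens profiles and apply it to the second device's frames.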

Further, either of the second imaging devices 14A and 14B may execute the image correction processing executed by the image correction device 16 in the above embodiment.

Further, the various kinds of processing executed in the above embodiment by the CPU running software (a program) may be executed by various processors other than the CPU. Examples of such processors include a PLD (Programmable Logic Device) whose circuit configuration can be changed after manufacture, such as an FPGA (Field-Programmable Gate Array), and a dedicated electric circuit, which is a processor having a circuit configuration designed exclusively for executing specific processing, such as an ASIC (Application Specific Integrated Circuit). The various kinds of processing may be executed by one of these processors, or by a combination of two or more processors of the same or different types (for example, a plurality of FPGAs, or a combination of a CPU and an FPGA). More specifically, the hardware structure of these processors is an electric circuit combining circuit elements such as semiconductor elements.

Further, in the above embodiment, the image correction program 30 is stored (installed) in the storage unit 22 in advance, but the present disclosure is not limited to this. The image correction program 30 may be provided in a form recorded on a recording medium such as a CD-ROM (Compact Disc Read Only Memory), a DVD-ROM (Digital Versatile Disc Read Only Memory), or a USB (Universal Serial Bus) memory. The image correction program 30 may also be downloaded from an external device via a network.

10 Imaging system

12 First imaging device

14A, 14B Second imaging devices

16 Image correction device

20 CPU

21 Memory

22 Storage unit

23 Display unit

24 Input unit

25 External I/F

26 Communication I/F

27 Bus

30 Image correction program

40 Acquisition unit

42 Determination unit

44 Correction unit

D1, D2, D3 Moving images

L1, L2A, L2B Lens information data

S Subject


Claims (8)


  1.  An image correction device comprising:

     an acquisition unit that acquires moving image data obtained by imaging by each of a plurality of imaging devices;

     a determination unit that determines whether the moving image data is moving image data obtained by imaging by a first imaging device or moving image data obtained by imaging by a second imaging device; and

     a correction unit that, in a case where the moving image data is determined to be moving image data obtained by imaging by the second imaging device, corrects the moving image data obtained by imaging by the second imaging device so as to match the image quality of the moving image data obtained by imaging by the first imaging device, using lens information data relating to the optical performance of an imaging lens of the first imaging device and lens information data relating to the optical performance of an imaging lens of the second imaging device.

  2.  The image correction device according to claim 1, wherein the lens information data includes at least one of an amount of peripheral light reduction, a resolution of the imaging lens, an amount of chromatic aberration, and an amount of color cast.

  3.  The image correction device according to claim 2, wherein at least one of the amount of peripheral light reduction, the resolution of the imaging lens, the amount of chromatic aberration, and the amount of color cast in the lens information data of the first imaging device indicates lower optical performance than that of the second imaging device.

  4.  The image correction device according to any one of claims 1 to 3, wherein the correction by the correction unit on the moving image data obtained by imaging by the second imaging device is performed while a moving image is being captured.

  5.  The image correction device according to claim 4, further comprising a display control unit that performs control to display, on a display unit, a moving image represented by the moving image data corrected by the correction unit while the moving image is being captured.

  6.  The image correction device according to any one of claims 1 to 5, wherein, in a case where a trimming process is performed on the moving image data obtained by imaging by the second imaging device, the correction unit performs the correction on the moving image data after the trimming process.

  7.  An image correction method in which a computer executes processing comprising:

     acquiring moving image data obtained by imaging by each of a plurality of imaging devices;

     determining whether the moving image data is moving image data obtained by imaging by a first imaging device or moving image data obtained by imaging by a second imaging device; and

     in a case where the moving image data is determined to be moving image data obtained by imaging by the second imaging device, correcting the moving image data obtained by imaging by the second imaging device so as to match the image quality of the moving image data obtained by imaging by the first imaging device, using lens information data relating to the optical performance of an imaging lens of the first imaging device and lens information data relating to the optical performance of an imaging lens of the second imaging device.

  8.  An image correction program for causing a computer to execute processing comprising:

     acquiring moving image data obtained by imaging by each of a plurality of imaging devices;

     determining whether the moving image data is moving image data obtained by imaging by a first imaging device or moving image data obtained by imaging by a second imaging device; and

     in a case where the moving image data is determined to be moving image data obtained by imaging by the second imaging device, correcting the moving image data obtained by imaging by the second imaging device so as to match the image quality of the moving image data obtained by imaging by the first imaging device, using lens information data relating to the optical performance of an imaging lens of the first imaging device and lens information data relating to the optical performance of an imaging lens of the second imaging device.
PCT/JP2019/016999 2018-05-21 2019-04-22 Image correction device, image correction method, and image correction program WO2019225255A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020521117A JP6833110B2 (en) 2018-05-21 2019-04-22 Image correction device, image correction method, and image correction program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-096912 2018-05-21
JP2018096912 2018-05-21

Publications (1)

Publication Number Publication Date
WO2019225255A1 true WO2019225255A1 (en) 2019-11-28

Family

ID=68615696

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/016999 WO2019225255A1 (en) 2018-05-21 2019-04-22 Image correction device, image correction method, and image correction program

Country Status (2)

Country Link
JP (1) JP6833110B2 (en)
WO (1) WO2019225255A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008060827A (en) * 2006-08-30 2008-03-13 Ricoh Co Ltd Image processing method, imaging device, image processing device and program
WO2011152166A1 (en) * 2010-05-31 2011-12-08 株式会社Pfu Overhead scanner apparatus, image processing method, and program
WO2012153748A1 (en) * 2011-05-12 2012-11-15 オリンパス株式会社 Image transmission device and imaging display system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9900505B2 (en) * 2014-07-23 2018-02-20 Disney Enterprises, Inc. Panoramic video from unstructured camera arrays with globally consistent parallax removal
WO2018087856A1 (en) * 2016-11-10 2018-05-17 三菱電機株式会社 Image synthesis device and image synthesis method

Also Published As

Publication number Publication date
JPWO2019225255A1 (en) 2021-02-18
JP6833110B2 (en) 2021-02-24

Similar Documents

Publication Publication Date Title
US8675980B2 (en) Method and system for reducing update frequency of image-processing means
US10726539B2 (en) Image processing apparatus, image processing method and storage medium
US8442347B2 (en) Information processing apparatus, information processing method, program, and imaging apparatus including optical microscope
JP7212554B2 (en) Information processing method, information processing device, and program
US11922598B2 (en) Image processing apparatus, image processing method, and storage medium
JP2006050497A (en) Image photographing apparatus and image processing system
US10091415B2 (en) Image processing apparatus, method for controlling image processing apparatus, image pickup apparatus, method for controlling image pickup apparatus, and recording medium
JP6415063B2 (en) Image processing apparatus, image processing method, control program, and recording medium
US10122939B2 (en) Image processing device for processing image data and map data with regard to depth distribution of a subject, image processing system, imaging apparatus, image processing method, and recording medium
JP2010035177A (en) Image photographing apparatus and image processing system
US10861194B2 (en) Image processing apparatus, image processing method, and storage medium
JP2018107532A (en) Projection control device, projection control method, and program
WO2019225255A1 (en) Image correction device, image correction method, and image correction program
US10304168B2 (en) Image processing apparatus, image pickup apparatus, image processing method, and non-transitory computer-readable storage medium for correcting deterioration of image
JP2021047827A (en) Device, system, control method, and program
US11659303B2 (en) Imaging apparatus, control method of imaging apparatus, imaging system, and storage medium
JP6521763B2 (en) Image processing apparatus, imaging apparatus, image processing method, program, and recording medium
US9648232B2 (en) Image processing apparatus, image capturing apparatus, control method and recording medium
JP6468751B2 (en) Image processing apparatus, image processing method, and program
US20240155210A1 (en) Data generation apparatus and control method
JP2014119941A (en) Image processing apparatus, image processing method, and imaging apparatus
US20160094784A1 (en) Image processing apparatus, image processing method, computer program and imaging apparatus
JP6639120B2 (en) Image processing apparatus, image processing method, and program
JP2013127819A (en) Image processing apparatus and method thereof
JP2022054793A (en) Crack detection method, crack detection device, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19807706

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020521117

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19807706

Country of ref document: EP

Kind code of ref document: A1